| column | type | min | max |
|:--|:--|:--|:--|
| modelId | string (length) | 5 | 139 |
| author | string (length) | 2 | 42 |
| last_modified | timestamp[us, tz=UTC] | 2020-02-15 11:33:14 | 2025-09-11 00:42:47 |
| downloads | int64 | 0 | 223M |
| likes | int64 | 0 | 11.7k |
| library_name | string (553 classes) | | |
| tags | list (length) | 1 | 4.05k |
| pipeline_tag | string (55 classes) | | |
| createdAt | timestamp[us, tz=UTC] | 2022-03-02 23:29:04 | 2025-09-11 00:42:38 |
| card | string (length) | 11 | 1.01M |
mberghouse/llama3-8b-CodeAlpaca
mberghouse
2024-05-11T04:46:44Z
7
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-05-11T04:42:21Z
--- license: apache-2.0 ---
Vinnybustacap/Billbot
Vinnybustacap
2024-05-11T04:46:14Z
0
0
null
[ "license:apache-2.0", "region:us" ]
null
2024-05-11T04:46:14Z
--- license: apache-2.0 ---
wufuheng/gemma-2b-fuheng-orpo
wufuheng
2024-05-11T04:43:36Z
135
0
transformers
[ "transformers", "safetensors", "gemma", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-05-11T02:33:09Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
LoneStriker/airoboros-70b-3.3-2.25bpw-h6-exl2
LoneStriker
2024-05-11T04:35:35Z
7
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "llama-3", "conversational", "dataset:jondurbin/airoboros-3.2", "dataset:bluemoon-fandom-1-1-rp-cleaned", "dataset:boolq", "dataset:jondurbin/gutenberg-dpo-v0.1", "dataset:LDJnr/Capybara", "dataset:jondurbin/cinematika-v0.1", "dataset:glaiveai/glaive-function-calling-v2", "dataset:grimulkan/LimaRP-augmented", "dataset:piqa", "dataset:Vezora/Tested-22k-Python-Alpaca", "dataset:mattpscott/airoboros-summarization", "dataset:unalignment/toxic-dpo-v0.2", "base_model:meta-llama/Meta-Llama-3-8B", "base_model:quantized:meta-llama/Meta-Llama-3-8B", "license:other", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "exl2", "region:us" ]
text-generation
2024-05-11T04:26:34Z
--- license: other license_name: llama3 license_link: https://huggingface.co/meta-llama/Meta-Llama-3-8B/blob/main/LICENSE base_model: meta-llama/Meta-Llama-3-8B tags: - llama-3 datasets: - jondurbin/airoboros-3.2 - bluemoon-fandom-1-1-rp-cleaned - boolq - jondurbin/gutenberg-dpo-v0.1 - LDJnr/Capybara - jondurbin/cinematika-v0.1 - glaiveai/glaive-function-calling-v2 - grimulkan/LimaRP-augmented - piqa - Vezora/Tested-22k-Python-Alpaca - mattpscott/airoboros-summarization - unalignment/toxic-dpo-v0.2 --- ### Overview Another experimental model, tuned primarily on synthetic data generated by [airoboros](https://github.com/jondurbin/airoboros). The name of this model is "llama-3-airoboros-70b-3.3" and it was built with llama-3 from Meta. This is a fine-tune of llama-3-70b-instruct, and uses the llama-3 instruct chat template. #### Highlights A model built on the airoboros dataset, along with a few friends: - https://huggingface.co/datasets/bluemoon-fandom-1-1-rp-cleaned - https://huggingface.co/datasets/boolq - https://huggingface.co/datasets/jondurbin/gutenberg-dpo-v0.1 - https://huggingface.co/datasets/LDJnr/Capybara - https://huggingface.co/datasets/jondurbin/cinematika-v0.1 - https://huggingface.co/datasets/glaiveai/glaive-function-calling-v2 - https://huggingface.co/datasets/grimulkan/LimaRP-augmented - https://huggingface.co/datasets/piqa - https://huggingface.co/datasets/Vezora/Tested-22k-Python-Alpaca - https://huggingface.co/datasets/mattpscott/airoboros-summarization - https://huggingface.co/datasets/unalignment/toxic-dpo-v0.2 ### Prompt format This model uses the llama-3-instruct prompt template, which is provided in the tokenizer config. You can use the `apply_chat_template` method to accurately format prompts, e.g.: ```python import transformers tokenizer = transformers.AutoTokenizer.from_pretrained("jondurbin/bugle-8b-v0.1", trust_remote_code=True) chat = [ {"role": "system", "content": "You are Bob, a friendly AI assistant."}, {"role": "user", "content": "Hello, how are you?"}, {"role": "assistant", "content": "I'm doing great. How can I help you today?"}, {"role": "user", "content": "I'd like to show off how chat templating works!"}, ] print(tokenizer.apply_chat_template(chat, tokenize=False)) ``` ### Helpful usage tips #### Context obedient question answering By obedient, I mean the model was trained to ignore what it thinks it knows, and use the context to answer the question. The model was also tuned to limit the values to the provided context as much as possible to reduce hallucinations. The format for a closed-context prompt is as follows: ``` BEGININPUT BEGINCONTEXT [key0: value0] [key1: value1] ... other metadata ... ENDCONTEXT [insert your text blocks here] ENDINPUT [add as many other blocks, in the exact same format] BEGININSTRUCTION [insert your instruction(s). The model was tuned with single questions, paragraph format, lists, etc.] ENDINSTRUCTION ``` It's also helpful to add "Don't make up answers if you don't know." to your instruction block to make sure if the context is completely unrelated it doesn't make something up. *The __only__ prompts that need this closed-context formatting are closed-context instructions. Normal questions/instructions do not!* I know it's a bit verbose and annoying, but after much trial and error, using these explicit delimiters helps the model understand where to find the responses and how to associate specific sources with them. 
- `BEGININPUT` - denotes a new input block - `BEGINCONTEXT` - denotes the block of context (metadata key/value pairs) to associate with the current input block - `ENDCONTEXT` - denotes the end of the metadata block for the current input - [text] - Insert whatever text you want for the input block, as many paragraphs as can fit in the context. - `ENDINPUT` - denotes the end of the current input block - [repeat as many input blocks in this format as you want] - `BEGININSTRUCTION` - denotes the start of the instruction(s) to respond to for all of the input blocks above. - [instruction(s)] - `ENDINSTRUCTION` - denotes the end of the instruction set It sometimes works without `ENDINSTRUCTION`, but by explicitly including that in the prompt, the model better understands that all of the instructions in the block should be responded to. __Use a very low temperature!__ Here's a trivial, but important example to prove the point: ``` BEGININPUT BEGINCONTEXT date: 2021-01-01 url: https://web.site/123 ENDCONTEXT In a shocking turn of events, blueberries are now green, but will be sticking with the same name. ENDINPUT BEGININSTRUCTION What color are blueberries? Source? ENDINSTRUCTION ``` And the response: ``` Blueberries are now green. Source: date: 2021-01-01 url: https://web.site/123 ``` #### Summarization 500 samples have been included from [this dataset](https://huggingface.co/datasets/mattpscott/airoboros-summarization), using the same format as contextual question answering, for example: ``` BEGININPUT {text to summarize} ENDINPUT BEGININSTRUCTION Summarize the input in around 130 words. ENDINSTRUCTION ``` #### Getting longer responses You can use a few techniques to get longer responses. Detailed prompts, with explicit instruction for word count: ``` Please compose a narrative set in the heart of an ancient library, steeped in the scent of old parchment and ink. The protagonist should be a young scholar who is dedicated to studying the art of storytelling and its evolution throughout history. In her pursuit of knowledge, she stumbles upon a forgotten tome that seems to possess an unusual aura. This book has the ability to bring stories to life, literally manifesting characters and scenarios from within its pages into reality. The main character must navigate through various epochs of storytelling - from oral traditions of tribal societies, through medieval minstrels' tales, to modern-day digital narratives - as they come alive around her. Each era presents its unique challenges and lessons about the power and impact of stories on human civilization. One such character could be a sentient quill pen, who was once used by renowned authors of yesteryears and now holds their wisdom and experiences. It becomes her mentor, guiding her through this journey with witty remarks and insightful commentary. Ensure that your tale encapsulates the thrill of adventure, the beauty of learning, and the profound connection between humans and their stories. All characters involved should be non-human entities. Feel free to explore creative liberties but maintain the mentioned elements. Your response should be approximately 2300 words. ``` Or, a simpler example: ``` Please create a long, detailed story about a dragon in an old growth forest who, for some reason, begins speaking the words of the source code of linux. ``` There are a few examples of next chapter completion as well, e.g.: ``` Write the next chapter of a historical fiction novel set in Paris during the 20th century. 
Here's a summary of the previous chapter: In the vibrant city of Paris, amid the tumultuous changes of the 20th century, our protagonist Margot, an aspiring fashion designer, has just secured an apprenticeship at a prestigious couture house. She meets Lucien, a charming journalist who covers the fashion industry. Together they navigate the ever-changing world of fashion and society, uncovering secrets that reveal the intricate links between style, politics, and culture. As the chapter concludes, they decide to delve deeper into the hidden corners of the fashion world to unravel its mysteries. Requirements for the next chapter: 1. Character Development of Margot and Lucien: - Margot's Evolution: Unfold more about Margot's past, her dreams of revolutionizing fashion, and her struggle to establish herself in a male-dominated industry. Illustrate her growing expertise, innovative ideas, and increasing dependence on Lucien. - Lucien's Complexity: Introduce uncertainties surrounding Lucien's background and real motives. Increase suspense by suggesting undisclosed information he possesses, while also highlighting his wit and perceptiveness. 2. Exploration of Paris and the Couture House: - Paris: Elaborate their journey through the bustling streets of Paris, including encounters with iconic figures, social unrest, and relics from different eras of French history. - The Couture House: Expand on the grandeur of the couture house they work in, filled with artistic masterpieces, intense competition, and cryptic notes hinting at a scandalous past. 3. Emergence of the Subplot: The Lost Collection: - Discovery: Have Margot and Lucien stumble upon a secret vault containing a lost collection designed before World War II, raising new questions about the previous owner and the influence of war on fashion. - Revelation: Capture their shock as they realize the designs were plagiarized, the potential repercussions, and the opportunities it presents for Margot's career. - Twist: End with a twist that suggests there are other stolen collections across Paris, setting up their new mission. Your response should be approximately 650 words. ``` #### Coding You can ask for fairly complex coding instructions with multiple criteria, e.g.: ``` Create a python application with the following requirements: - Asyncio FastAPI webserver - ping endpoint that returns the current date in JSON format - file upload endpoint, which calculates the file's sha256 checksum, and checks postgres to deduplicate ``` Or inline criteria: ``` Write a multi-threaded TCP server in C that accepts a "GET [key]" input and "SET [key] [value]" input, and uses a binary tree to get and store the input values. ``` You can also optionally add a single space and "PLAINFORMAT" at the end of your prompt to avoid backticks, explanations, etc. and just print the code, e.g.: ``` Write a websocket application in node.js. PLAINFORMAT ``` #### Agent/function calling The dataset includes many examples of function/args generation based on input criteria. This is somewhat similar to the OpenAI function calling, but the output is either JSON or YAML. Example prompt: ``` As an AI assistant, please select the most suitable function and parameters from the list of available functions below, based on the user's input. Provide your response in JSON format. Input: I want to know how many times 'Python' is mentioned in my text file. Available functions: file_analytics: description: This tool performs various operations on a text file. 
params: action: The operation we want to perform on the data, such as "count_occurrences", "find_line", etc. filters: keyword: The word or phrase we want to search for. ``` Response: ```json { "function": "file_analytics", "params": { "action": "count_occurrences", "filters": { "keyword": "Python" } } } ``` #### Chain-of-thought You can ask for several possible responses to a given problem, with a ranking and final answer selection. Example prompt: ``` A group of people decide to play a round-robin tournament where every player plays every other player exactly once. If a total of 45 games were played, how many players participated in the tournament? Offer several potential answers, rank them according to mathematical logic, and pick the most feasible one. ``` Example response: ``` Solution 1: In a round-robin tournament, each player plays every other player exactly once. This means that the total number of games played is equal to the combination of all players taken two at a time (since a game involves two players). The formula for combinations is nC2 = n(n-1)/2, where n is the total number of players. We know that nC2 equals 45 (the total number of games), so we can set up the equation as follows: n(n-1)/2 = 45 Solving this equation gives us n=10. Final answer: There were 10 players in the tournament. Solution 2: Another way to approach this problem is by using the concept of triangular numbers. Triangular numbers are a sequence of numbers where each number can be represented as a triangle with dots. The nth triangular number is given by the sum of the first n natural numbers. If we consider each game as a dot and each player as a side of the triangle, then the total number of games (45) would correspond to the 9th triangular number because 1+2+3+4+5+6+7+8+9=45. However, since each side of the triangle represents a player, and there's one more player than sides in our model (because the last player has no one left to play against), we need to add one more to get the total number of players. Final answer: There were 10 players in the tournament. Solution 3: We could also use trial and error to solve this problem. Starting from a small number of players, we calculate how many games would be played in a round-robin tournament until we reach 45 games. For example, with 2 players, only 1 game is played; with 3 players, 3 games are played; with 4 players, 6 games are played, and so on. Continuing this process, we find that with 10 players, 45 games are played. Final answer: There were 10 players in the tournament. Ranking of solutions from best to worst: 1. Solution 1: This solution uses direct mathematical reasoning based on the properties of a round-robin tournament and solves an equation to find the answer. 2. Solution 2: This solution uses the concept of triangular numbers which requires additional steps and knowledge beyond the immediate problem context, but still arrives at the correct answer. 3. Solution 3: This solution relies on trial and error, which is less efficient and reliable than the other methods, but it eventually gets to the right answer. Best and final answer: There were 10 players in the tournament. ``` #### reWOO style execution planning The model now supports execution planning for complex instructions that would require making use of several tools. The output is just the plan, you must implement a mechanism to parse the output and actually call the functions! 
Example prompt: ``` Please construct a systematic plan to generate an optimal response to the user instruction, utilizing a set of provided tools. Each plan will correspond to an evidence value, which will be the output of one of the available functions given an input string that could be the user's question, one or more prior evidence values, or a combination of both. Here are the tools available to be called: DuckDuckGo[input]: Tool that enables the user to search for information using the DuckDuckGo search engine. This tool is helpful in locating a suitable list of sites that may contain the answer to the user's query. It does not directly answer the question but finds a list of sites that might have the answer. TextScraper[input]: Load one or more websites from the input string containing newline delimited links, where input is one or more links, and produces plain text output containing the content of the links. HyperlinkExtractor[input]: Extract links from plain text and produces a plain text, newline delimited response of links. KnowledgeModel[input]: Question answering language model, particularly useful in answering questions based on an input passage of text. The input must be a text question that references an :evidence[n]: variable, e.g. What color is the cat, given :evidence1:? The output should be in format: Plan: [first action to take based on input question] :evidence0: = [function to call with input parameter] Plan: [next action to take, based on result of :evidence0:] :evidence1: = [next function to call and input parameter, which may include reference to previous evidence, e.g. "Given context :evidence0"] ... Answer: [:evidence[n]: containing the final answer.] Question: Who won the Nobel Prize in Literature in 2020 and what is their most famous work? ``` Response: ``` Plan: Begin by conducting a web search to find out who won the Nobel Prize in Literature in 2020. :evidence0: = DuckDuckGo[Nobel Prize in Literature 2020 winner] Plan: Utilize the KnowledgeModel to interpret the search results and find the name of the laureate. :evidence1: = KnowledgeModel[Who won the Nobel Prize in Literature in 2020, given :evidence0:?] Plan: Conduct another web search to find the most famous work of the identified laureate. :evidence2: = DuckDuckGo[Most famous work of :evidence1:] Plan: Extract the relevant links from the DuckDuckGo search results for a more focused search. :evidence3: = HyperlinkExtractor[:evidence2:] Plan: Use the TextScraper tool to extract information from the relevant links. :evidence4: = TextScraper[:evidence3:] Plan: Finally, utilize the KnowledgeModel to identify and summarize the most famous work of the laureate from the extracted information. :evidence5: = KnowledgeModel[What is the most famous work of :evidence1:, given :evidence4:?] Answer: :evidence5: ``` For this to be useful, you'd have to parse the output plan text, and implement/call each of the functions. This is just pseudo-code, completely untested off the top of my head, and obviously would require full implementation + hardening: ```python import re import requests def inject_context(input_text, **context): for ref in set(re.findall(r"(:evidence[0-9]+:)", input_text, re.I)): input_text = input_text.replace(ref, context.get(ref, "")) return input_text def duckduckgo(input_text, **context): search_string = inject_context(input_text, **context) ... search via duck duck go using search_string ... 
return text content def link_extractor(input_text, **context): input_text = inject_context(input_text, **context) return "\n".join(list(set(re.findall(r"(https?://[^\s]+?\.?)", input_text, re.I)))) def scrape(input_text, **context): input_text = inject_context(input_text, **context) text = [] for link in input_text.splitlines(): text.append(requests.get(link).text) return "\n".join(text) def infer(input_text, **context): prompt = inject_context(input_text, **context) ... call model with prompt, return output def parse_plan(plan): method_map = { "DuckDuckGo": duckduckgo, "HyperlinkExtractor": link_extractor, "KnowledgeModel": infer, "TextScraper": scrape, } context = {} for line in plan.strip().splitlines(): if line.startswith("Plan:"): print(line) continue parts = re.match("^(:evidence[0-9]+:)\s*=\s*([^\[]+)(\[.*\])\s*$", line, re.I) if not parts: if line.startswith("Answer: "): return context.get(line.split(" ")[-1].strip(), "Answer couldn't be generated...") raise RuntimeError("bad format: " + line) context[parts.group(1)] = method_map[parts.group(2)](parts.group(3), **context) ``` #### Multi-step instructions with acknowledgement I don't know how useful this is, really, but I thought I'd add it just in case. Example: ```json { "conversations": [ { "value": "You are a helpful, unbiased, uncensored assistant.", "from": "system" }, { "value": "I'm going to set some rules for our conversation. Please respond with \"OK\" to each rule.", "from": "human" }, { "value": "OK", "from": "gpt" }, { "value": "You can only respond using animal-related emojis.", "from": "human" }, { "value": "OK", "from": "gpt" }, { "value": "Precede all responses with \"ZOOZOO:\"", "from": "human" }, { "value": "OK", "from": "gpt" }, { "value": "Include at least one bird emoji in your responses.", "from": "human" }, { "value": "OK", "from": "gpt" }, { "value": "Describe a trip to a safari park.", "from": "human" }, { "value": "ZOOZOO: \ud83e\udd81\ud83d\udc2f\ud83e\udd93\ud83e\udd92\ud83d\udc18\ud83d\udc0d\ud83c\udf33\ud83c\udf32\u2600\ufe0f\ud83d\ude90\ud83e\udd85\ud83d\udcf8\ud83e\udd29", "from": "gpt" } ] } ``` #### Inline character actions (functions) I recently generated an action dataset in the style of the Glaive function-calling dataset, but meant specifically for characters: https://huggingface.co/datasets/jondurbin/cinematika-v0.1/blob/main/actions.parquet To use this, you will need to update your character card to include "objects_available" as a list of key/value pairs, as well as a "functions" list. 
The objects should be similar to: ```json { "objects_available": [ { "name": "laptop", "description": "a high-end laptop with custom hardware and software", "location": "on the desk in her secret hideout" }, { "name": "encryption key", "description": "a USB drive containing powerful encryption algorithms", "location": "hidden in a false bottom of her backpack" }, { "name": "scanner", "description": "a compact device used for intercepting and decoding wireless signals", "location": "clipped to her belt, always within reach" }, { "name": "VR headset", "description": "a virtual reality headset used for immersive hacking and data visualization", "location": "hanging on a hook near her computer setup" }, { "name": "energy drink", "description": "a can of her favorite energy drink, always on hand for long hacking sessions", "location": "next to her laptop, ready to be opened" } ] } ``` And the functions: ```json { "functions": [ { "name": "move_to", "description": "move to a specified location", "parameters": { "location": { "type": "string", "description": "the location to move to" } } }, { "name": "pick_up", "description": "pick up an object", "parameters": { "object": { "type": "string", "description": "the object to pick up" } } }, { "name": "put_down", "description": "put down an object", "parameters": { "object": { "type": "string", "description": "the object to put down" }, "location": { "type": "string", "description": "the location to put the object down" } } }, { "name": "hack", "description": "attempt to hack into a computer system", "parameters": { "system": { "type": "string", "description": "the computer system to hack" } } }, { "name": "decode", "description": "decode an encrypted message or file", "parameters": { "message": { "type": "string", "description": "the encrypted message or file to decode" } } }, { "name": "scan", "description": "scan for wireless signals and network vulnerabilities", "parameters": {} }, { "name": "analyze", "description": "analyze data for patterns and anomalies", "parameters": { "data": { "type": "string", "description": "the data to analyze" } } }, { "name": "hide", "description": "quickly hide to avoid detection", "parameters": { "location": { "type": "string", "description": "the location to hide" } } } ] } ``` This just becomes part of your character card/system prompt. Then, when you chat with the character and elicit a response that would make use of objects/functions/actions, it should be included as such: ``` prompt: I've heard rumors about a major corporation with a hidden agenda. Do you think you could hack into their system and find out what they're up to? response: I've had my suspicions about them for a while now. Let me see what I can uncover. <|begin_func|>{"name": "move_to", "parameters": {"location": "the desk in her secret hideout"}}<|end_func|> <|begin_func|>{"name": "pick_up", "parameters": {"object": "laptop"}}<|end_func|> Time to put my skills to the test. <|begin_func|>{"name": "hack", "parameters": {"system": "Megacorp X's mainframe"}}<|end_func|> Their firewalls are tough, but not unbreakable. I'm in. Let's see what secrets they're hiding... <|begin_func|>{"name": "analyze", "parameters": {"data": "encrypted files"}}<|end_func|> Interesting. It looks like they've been conducting illegal experiments and covering up the evidence. I'll need to dig deeper to find the proof. ``` Experiment, and find out what works and doesn't. 
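If you want to act on these inline calls programmatically, the `<|begin_func|>`/`<|end_func|>` spans have to be split out of the response text first. A minimal parser sketch (my own illustration, not part of the cinematika tooling; the helper name `extract_actions` is hypothetical):

```python
import json
import re

def extract_actions(response: str):
    # Parse the JSON bodies wrapped in <|begin_func|>...<|end_func|>, as in the
    # example response above, and return the remaining narration separately.
    calls = [
        json.loads(body)
        for body in re.findall(r"<\|begin_func\|>(.*?)<\|end_func\|>", response, re.S)
    ]
    narration = re.sub(r"<\|begin_func\|>.*?<\|end_func\|>", "", response, flags=re.S)
    return " ".join(narration.split()), calls

text, calls = extract_actions(
    'Time to put my skills to the test. '
    '<|begin_func|>{"name": "hack", "parameters": {"system": "Megacorp X\'s mainframe"}}<|end_func|>'
)
print(calls[0]["name"])  # -> hack
```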
### Massed Compute Virtual Machine [Massed Compute](https://massedcompute.com/?utm_source=huggingface&utm_creative_format=model_card&utm_content=creator_jon) has created a Virtual Machine (VM) pre-loaded with TGI and Text Generation WebUI. 1) For this model, [create an account](https://bit.ly/jon-durbin) in Massed Compute. When renting a Virtual Machine use the code 'JonDurbin' for 50% off your rental. 2) After you have created your account, update your billing and navigate to the deploy page. 3) Select the following - GPU Type: A6000 - GPU Quantity: 2 - Category: Creator - Image: Jon Durbin - Coupon Code: JonDurbin 4) Deploy the VM! 5) Navigate to 'Running Instances' to retrieve instructions to login to the VM 6) Once inside the VM, open the terminal and run `volume=$PWD/data` 7) Run `model=jondurbin/airoboros-34b-3.3` 8) `sudo docker run --gpus all --shm-size 1g -p 8080:80 -v $volume:/data ghcr.io/huggingface/text-generation-inference:1.3 --model-id $model` 9) The model will take some time to load... 10) Once loaded, the model will be available on port 8080 (a query sketch follows at the end of this card) For assistance with the VM join the [Massed Compute Discord Server](https://discord.gg/Mj4YMQY3DA) ### Latitude.sh [Latitude](https://www.latitude.sh/r/4BBD657C) has H100 instances available (as of today, 2024-02-08) for $3/hr! They have a few blueprints available for testing LLMs, but a single H100 should be plenty to run this model with 8k ctx. ## Support me - https://bmc.link/jondurbin - ETH 0xce914eAFC2fe52FdceE59565Dd92c06f776fcb11 - BTC bc1qdwuth4vlg8x37ggntlxu5cjfwgmdy5zaa7pswf ### Licence and usage restrictions The airoboros models are built on top of multiple base models, each with their own license/restrictions. The fine-tuning data was mostly generated by OpenAI API calls to gpt-4, via [airoboros](https://github.com/jondurbin/airoboros). The ToS for OpenAI API usage has a clause preventing the output from being used to train a model that __competes__ with OpenAI - what does *compete* actually mean here? - these small open source models will not produce output anywhere near the quality of gpt-4, or even gpt-3.5, so I can't imagine this could credibly be considered competing in the first place - if someone else uses the dataset to do the same, they wouldn't necessarily be violating the ToS because they didn't call the API, so I don't know how that works - the training data used in essentially all large language models includes a significant amount of copyrighted or otherwise non-permissive licensing in the first place - other work using the self-instruct method, e.g. the original here: https://github.com/yizhongw/self-instruct released the data and model as apache-2 I am purposely leaving this license ambiguous (other than the fact you must comply with the Meta original license for llama-3) because I am not a lawyer and refuse to attempt to interpret all of the terms accordingly. Your best bet is probably to avoid using this commercially due to the OpenAI API usage. Either way, by using this model, you agree to completely indemnify me. You must also agree to all of the terms in the original llama-3 license.
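Once the TGI container from the Massed Compute steps above is listening on port 8080, you can sanity-check it over HTTP. A hedged sketch against TGI's standard `/generate` REST endpoint (the prompt text and generation parameters are placeholders; adjust the host to your VM):

```python
import requests

# Query the text-generation-inference server started in step 8 above.
resp = requests.post(
    "http://127.0.0.1:8080/generate",
    json={
        "inputs": "Write a limerick about context-obedient question answering.",
        "parameters": {"max_new_tokens": 128, "temperature": 0.5},
    },
    timeout=300,
)
resp.raise_for_status()
print(resp.json()["generated_text"])
```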
RichardErkhov/facebook_-_mbart-large-en-ro-4bits
RichardErkhov
2024-05-11T04:24:44Z
78
0
transformers
[ "transformers", "safetensors", "mbart", "text-generation", "autotrain_compatible", "endpoints_compatible", "4-bit", "bitsandbytes", "region:us" ]
text-generation
2024-05-11T04:18:40Z
Quantization made by Richard Erkhov. [Github](https://github.com/RichardErkhov) [Discord](https://discord.gg/pvy7H8DZMG) [Request more models](https://github.com/RichardErkhov/quant_request) mbart-large-en-ro - bnb 4bits - Model creator: https://huggingface.co/facebook/ - Original model: https://huggingface.co/facebook/mbart-large-en-ro/ Original model description: --- tags: - translation language: - en - ro license: mit --- ### mbart-large-en-ro This is mbart-large-cc25, finetuned on wmt_en_ro. It scores BLEU 28.1 without post-processing and BLEU 38 with post-processing. Instructions are in `romanian_postprocessing.md`. Original Code: https://github.com/pytorch/fairseq/tree/master/examples/mbart Docs: https://huggingface.co/transformers/master/model_doc/mbart.html Finetuning Code: examples/seq2seq/finetune.py (as of Aug 20, 2020)
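Since the checkpoint is stored in bitsandbytes 4-bit format, it should load directly through transformers' seq2seq auto class. A minimal, untested sketch (the source sentence and generation settings are placeholders, and `bitsandbytes`/`accelerate` must be installed):

```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

model_id = "RichardErkhov/facebook_-_mbart-large-en-ro-4bits"
tokenizer = AutoTokenizer.from_pretrained(model_id)
# The 4-bit quantization config is baked into the checkpoint, so a plain load works.
model = AutoModelForSeq2SeqLM.from_pretrained(model_id, device_map="auto")

inputs = tokenizer("The weather is lovely today.", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```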
mradermacher/NeoBase_8b_v0.2-GGUF
mradermacher
2024-05-11T04:21:51Z
105
0
transformers
[ "transformers", "gguf", "cybersecurity", "security", "cybersec", "base", "llama3", "en", "dataset:CyberNative/GeneralCyberSecurityTexts", "dataset:CyberNative/github_cybersecurity_READMEs", "license:llama3", "endpoints_compatible", "region:us", "conversational" ]
null
2024-05-11T03:54:47Z
--- base_model: CyberNative/NeoBase_8b_v0.2 datasets: - CyberNative/GeneralCyberSecurityTexts - CyberNative/github_cybersecurity_READMEs language: - en library_name: transformers license: llama3 quantized_by: mradermacher tags: - cybersecurity - security - cybersec - base - llama3 --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hfhfix --> <!-- ### vocab_type: --> static quants of https://huggingface.co/CyberNative/NeoBase_8b_v0.2 <!-- provided-files --> weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion. ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/NeoBase_8b_v0.2-GGUF/resolve/main/NeoBase_8b_v0.2.Q2_K.gguf) | Q2_K | 3.3 | | | [GGUF](https://huggingface.co/mradermacher/NeoBase_8b_v0.2-GGUF/resolve/main/NeoBase_8b_v0.2.IQ3_XS.gguf) | IQ3_XS | 3.6 | | | [GGUF](https://huggingface.co/mradermacher/NeoBase_8b_v0.2-GGUF/resolve/main/NeoBase_8b_v0.2.Q3_K_S.gguf) | Q3_K_S | 3.8 | | | [GGUF](https://huggingface.co/mradermacher/NeoBase_8b_v0.2-GGUF/resolve/main/NeoBase_8b_v0.2.IQ3_S.gguf) | IQ3_S | 3.8 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/NeoBase_8b_v0.2-GGUF/resolve/main/NeoBase_8b_v0.2.IQ3_M.gguf) | IQ3_M | 3.9 | | | [GGUF](https://huggingface.co/mradermacher/NeoBase_8b_v0.2-GGUF/resolve/main/NeoBase_8b_v0.2.Q3_K_M.gguf) | Q3_K_M | 4.1 | lower quality | | [GGUF](https://huggingface.co/mradermacher/NeoBase_8b_v0.2-GGUF/resolve/main/NeoBase_8b_v0.2.Q3_K_L.gguf) | Q3_K_L | 4.4 | | | [GGUF](https://huggingface.co/mradermacher/NeoBase_8b_v0.2-GGUF/resolve/main/NeoBase_8b_v0.2.IQ4_XS.gguf) | IQ4_XS | 4.6 | | | [GGUF](https://huggingface.co/mradermacher/NeoBase_8b_v0.2-GGUF/resolve/main/NeoBase_8b_v0.2.Q4_K_S.gguf) | Q4_K_S | 4.8 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/NeoBase_8b_v0.2-GGUF/resolve/main/NeoBase_8b_v0.2.Q4_K_M.gguf) | Q4_K_M | 5.0 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/NeoBase_8b_v0.2-GGUF/resolve/main/NeoBase_8b_v0.2.Q5_K_S.gguf) | Q5_K_S | 5.7 | | | [GGUF](https://huggingface.co/mradermacher/NeoBase_8b_v0.2-GGUF/resolve/main/NeoBase_8b_v0.2.Q5_K_M.gguf) | Q5_K_M | 5.8 | | | [GGUF](https://huggingface.co/mradermacher/NeoBase_8b_v0.2-GGUF/resolve/main/NeoBase_8b_v0.2.Q6_K.gguf) | Q6_K | 6.7 | very good quality | | [GGUF](https://huggingface.co/mradermacher/NeoBase_8b_v0.2-GGUF/resolve/main/NeoBase_8b_v0.2.Q8_0.gguf) | Q8_0 | 8.6 | fast, best quality | | [GGUF](https://huggingface.co/mradermacher/NeoBase_8b_v0.2-GGUF/resolve/main/NeoBase_8b_v0.2.f16.gguf) | f16 | 16.2 | 16 bpw, overkill | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you 
might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
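As a quick alternative to the routes covered in TheBloke's READMEs linked above, a single-file quant from the table can be run with llama-cpp-python. A small sketch (the local file path and the prompt are assumptions; the file name matches the Q4_K_S row):

```python
from llama_cpp import Llama

# Load the Q4_K_S file downloaded from the table above.
llm = Llama(model_path="./NeoBase_8b_v0.2.Q4_K_S.gguf", n_ctx=4096)
out = llm("Explain what a CVE identifier is.", max_tokens=128)
print(out["choices"][0]["text"])
```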
yregelman/jerry-s-only-slono-lora
yregelman
2024-05-11T04:21:23Z
0
0
transformers
[ "transformers", "safetensors", "text-generation-inference", "unsloth", "llama", "trl", "en", "base_model:unsloth/llama-3-8b-bnb-4bit", "base_model:finetune:unsloth/llama-3-8b-bnb-4bit", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2024-05-11T02:53:55Z
--- language: - en license: apache-2.0 tags: - text-generation-inference - transformers - unsloth - llama - trl base_model: unsloth/llama-3-8b-bnb-4bit --- # Uploaded model - **Developed by:** yregelman - **License:** apache-2.0 - **Finetuned from model :** unsloth/llama-3-8b-bnb-4bit This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
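To reload the result for inference, Unsloth's loader mirrors the training setup; a sketch assuming this repo's weights load through `FastLanguageModel` (untested against this particular upload):

```python
from unsloth import FastLanguageModel

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="yregelman/jerry-s-only-slono-lora",
    max_seq_length=2048,
    load_in_4bit=True,  # matches the unsloth/llama-3-8b-bnb-4bit base
)
FastLanguageModel.for_inference(model)  # switch to inference-optimized mode

inputs = tokenizer("Hello there,", return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=32)[0]))
```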
RichardErkhov/cognitivecomputations_-_dolphin-2.8-mistral-7b-v02-gguf
RichardErkhov
2024-05-11T04:20:26Z
204
0
null
[ "gguf", "endpoints_compatible", "region:us", "conversational" ]
null
2024-05-11T02:07:35Z
Quantization made by Richard Erkhov. [Github](https://github.com/RichardErkhov) [Discord](https://discord.gg/pvy7H8DZMG) [Request more models](https://github.com/RichardErkhov/quant_request) dolphin-2.8-mistral-7b-v02 - GGUF - Model creator: https://huggingface.co/cognitivecomputations/ - Original model: https://huggingface.co/cognitivecomputations/dolphin-2.8-mistral-7b-v02/ | Name | Quant method | Size | | ---- | ---- | ---- | | [dolphin-2.8-mistral-7b-v02.Q2_K.gguf](https://huggingface.co/RichardErkhov/cognitivecomputations_-_dolphin-2.8-mistral-7b-v02-gguf/blob/main/dolphin-2.8-mistral-7b-v02.Q2_K.gguf) | Q2_K | 2.53GB | | [dolphin-2.8-mistral-7b-v02.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/cognitivecomputations_-_dolphin-2.8-mistral-7b-v02-gguf/blob/main/dolphin-2.8-mistral-7b-v02.IQ3_XS.gguf) | IQ3_XS | 2.81GB | | [dolphin-2.8-mistral-7b-v02.IQ3_S.gguf](https://huggingface.co/RichardErkhov/cognitivecomputations_-_dolphin-2.8-mistral-7b-v02-gguf/blob/main/dolphin-2.8-mistral-7b-v02.IQ3_S.gguf) | IQ3_S | 2.96GB | | [dolphin-2.8-mistral-7b-v02.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/cognitivecomputations_-_dolphin-2.8-mistral-7b-v02-gguf/blob/main/dolphin-2.8-mistral-7b-v02.Q3_K_S.gguf) | Q3_K_S | 2.95GB | | [dolphin-2.8-mistral-7b-v02.IQ3_M.gguf](https://huggingface.co/RichardErkhov/cognitivecomputations_-_dolphin-2.8-mistral-7b-v02-gguf/blob/main/dolphin-2.8-mistral-7b-v02.IQ3_M.gguf) | IQ3_M | 3.06GB | | [dolphin-2.8-mistral-7b-v02.Q3_K.gguf](https://huggingface.co/RichardErkhov/cognitivecomputations_-_dolphin-2.8-mistral-7b-v02-gguf/blob/main/dolphin-2.8-mistral-7b-v02.Q3_K.gguf) | Q3_K | 3.28GB | | [dolphin-2.8-mistral-7b-v02.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/cognitivecomputations_-_dolphin-2.8-mistral-7b-v02-gguf/blob/main/dolphin-2.8-mistral-7b-v02.Q3_K_M.gguf) | Q3_K_M | 3.28GB | | [dolphin-2.8-mistral-7b-v02.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/cognitivecomputations_-_dolphin-2.8-mistral-7b-v02-gguf/blob/main/dolphin-2.8-mistral-7b-v02.Q3_K_L.gguf) | Q3_K_L | 3.56GB | | [dolphin-2.8-mistral-7b-v02.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/cognitivecomputations_-_dolphin-2.8-mistral-7b-v02-gguf/blob/main/dolphin-2.8-mistral-7b-v02.IQ4_XS.gguf) | IQ4_XS | 3.67GB | | [dolphin-2.8-mistral-7b-v02.Q4_0.gguf](https://huggingface.co/RichardErkhov/cognitivecomputations_-_dolphin-2.8-mistral-7b-v02-gguf/blob/main/dolphin-2.8-mistral-7b-v02.Q4_0.gguf) | Q4_0 | 3.83GB | | [dolphin-2.8-mistral-7b-v02.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/cognitivecomputations_-_dolphin-2.8-mistral-7b-v02-gguf/blob/main/dolphin-2.8-mistral-7b-v02.IQ4_NL.gguf) | IQ4_NL | 3.87GB | | [dolphin-2.8-mistral-7b-v02.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/cognitivecomputations_-_dolphin-2.8-mistral-7b-v02-gguf/blob/main/dolphin-2.8-mistral-7b-v02.Q4_K_S.gguf) | Q4_K_S | 3.86GB | | [dolphin-2.8-mistral-7b-v02.Q4_K.gguf](https://huggingface.co/RichardErkhov/cognitivecomputations_-_dolphin-2.8-mistral-7b-v02-gguf/blob/main/dolphin-2.8-mistral-7b-v02.Q4_K.gguf) | Q4_K | 4.07GB | | [dolphin-2.8-mistral-7b-v02.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/cognitivecomputations_-_dolphin-2.8-mistral-7b-v02-gguf/blob/main/dolphin-2.8-mistral-7b-v02.Q4_K_M.gguf) | Q4_K_M | 4.07GB | | [dolphin-2.8-mistral-7b-v02.Q4_1.gguf](https://huggingface.co/RichardErkhov/cognitivecomputations_-_dolphin-2.8-mistral-7b-v02-gguf/blob/main/dolphin-2.8-mistral-7b-v02.Q4_1.gguf) | Q4_1 | 4.24GB | | 
[dolphin-2.8-mistral-7b-v02.Q5_0.gguf](https://huggingface.co/RichardErkhov/cognitivecomputations_-_dolphin-2.8-mistral-7b-v02-gguf/blob/main/dolphin-2.8-mistral-7b-v02.Q5_0.gguf) | Q5_0 | 4.65GB | | [dolphin-2.8-mistral-7b-v02.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/cognitivecomputations_-_dolphin-2.8-mistral-7b-v02-gguf/blob/main/dolphin-2.8-mistral-7b-v02.Q5_K_S.gguf) | Q5_K_S | 4.65GB | | [dolphin-2.8-mistral-7b-v02.Q5_K.gguf](https://huggingface.co/RichardErkhov/cognitivecomputations_-_dolphin-2.8-mistral-7b-v02-gguf/blob/main/dolphin-2.8-mistral-7b-v02.Q5_K.gguf) | Q5_K | 4.78GB | | [dolphin-2.8-mistral-7b-v02.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/cognitivecomputations_-_dolphin-2.8-mistral-7b-v02-gguf/blob/main/dolphin-2.8-mistral-7b-v02.Q5_K_M.gguf) | Q5_K_M | 4.78GB | | [dolphin-2.8-mistral-7b-v02.Q5_1.gguf](https://huggingface.co/RichardErkhov/cognitivecomputations_-_dolphin-2.8-mistral-7b-v02-gguf/blob/main/dolphin-2.8-mistral-7b-v02.Q5_1.gguf) | Q5_1 | 5.07GB | | [dolphin-2.8-mistral-7b-v02.Q6_K.gguf](https://huggingface.co/RichardErkhov/cognitivecomputations_-_dolphin-2.8-mistral-7b-v02-gguf/blob/main/dolphin-2.8-mistral-7b-v02.Q6_K.gguf) | Q6_K | 5.53GB | Original model description: --- base_model: alpindale/Mistral-7B-v0.2-hf language: - en license: apache-2.0 datasets: - cognitivecomputations/dolphin - cognitivecomputations/dolphin-coder - cognitivecomputations/samantha-data - jondurbin/airoboros-2.2.1 - teknium/openhermes-2.5 - m-a-p/Code-Feedback - m-a-p/CodeFeedback-Filtered-Instruction model-index: - name: dolphin-2.8-mistral-7b-v02 results: - task: type: text-generation dataset: type: openai_humaneval name: HumanEval metrics: - name: pass@1 type: pass@1 value: 0.469 verified: false --- # Dolphin 2.8 Mistral 7b v0.2 🐬 By Eric Hartford and Cognitive Computations Discord: https://discord.gg/8fbBeC7ZGx <img src="https://cdn-uploads.huggingface.co/production/uploads/63111b2d88942700629f5771/ldkN1J0WIDQwU4vutGYiD.png" width="600" /> My appreciation for the sponsors of Dolphin 2.8: - [Crusoe Cloud](https://crusoe.ai/) - provided excellent on-demand 10xL40S node - [Winston Sou](https://twitter.com/WinsonDabbles) - Along with a generous anonymous sponsor, donated a massive personally owned compute resource! - [Abacus AI](https://abacus.ai/) - my employer and partner in many things. This model is based on [Mistral-7b-v0.2](https://huggingface.co/alpindale/Mistral-7B-v0.2-hf), a new base model released by MistralAI on March 23, 2024 but not yet published by them on HuggingFace. Thanks to @alpindale for converting / publishing. The base model has 32k context, and the full-weights fine-tune was with 16k sequence lengths. It took 3 days on 10x L40S provided by [Crusoe Cloud](https://crusoe.ai/). Dolphin-2.8 has a variety of instruction, conversational, and coding skills. Dolphin is uncensored. I have filtered the dataset to remove alignment and bias. This makes the model more compliant. You are advised to implement your own alignment layer before exposing the model as a service. It will be highly compliant to any requests, even unethical ones. Please read my blog post about uncensored models. https://erichartford.com/uncensored-models You are responsible for any content you create using this model. Enjoy responsibly. Dolphin is licensed Apache 2.0. I grant permission for any use including commercial. Dolphin was trained on data generated from GPT4 among other models. 
# Evals ``` { "arc_challenge": { "acc,none": 0.5921501706484642, "acc_stderr,none": 0.014361097288449701, "acc_norm,none": 0.6339590443686007, "acc_norm_stderr,none": 0.014077223108470139 }, "gsm8k": { "exact_match,strict-match": 0.4783927217589083, "exact_match_stderr,strict-match": 0.013759618667051773, "exact_match,flexible-extract": 0.5367702805155421, "exact_match_stderr,flexible-extract": 0.013735191956468648 }, "hellaswag": { "acc,none": 0.6389165504879506, "acc_stderr,none": 0.004793330525656218, "acc_norm,none": 0.8338976299541924, "acc_norm_stderr,none": 0.00371411888431746 }, "mmlu": { "acc,none": 0.6122347243982339, "acc_stderr,none": 0.003893774654142997 }, "truthfulqa_mc2": { "acc,none": 0.5189872652778472, "acc_stderr,none": 0.014901128316426086 }, "winogrande": { "acc,none": 0.7971586424625099, "acc_stderr,none": 0.011301439925936643 } } ``` [<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/OpenAccess-AI-Collective/axolotl) <details><summary>See axolotl config</summary> axolotl version: `0.4.0` ```yaml base_model: alpindale/Mistral-7B-v0.2-hf model_type: MistralForCausalLM tokenizer_type: LlamaTokenizer is_mistral_derived_model: true load_in_8bit: false load_in_4bit: false strict: false datasets: - path: /workspace/datasets/dolphin201-sharegpt2.jsonl type: sharegpt - path: /workspace/datasets/dolphin-coder-translate-sharegpt2.jsonl type: sharegpt - path: /workspace/datasets/dolphin-coder-codegen-sharegpt2.jsonl type: sharegpt - path: /workspace/datasets/m-a-p_Code-Feedback-sharegpt.jsonl type: sharegpt - path: /workspace/datasets/m-a-p_CodeFeedback-Filtered-Instruction-sharegpt.jsonl type: sharegpt - path: /workspace/datasets/not_samantha_norefusals.jsonl type: sharegpt - path: /workspace/datasets/openhermes2_5-sharegpt.jsonl type: sharegpt chat_template: chatml dataset_prepared_path: last_run_prepared val_set_size: 0.001 output_dir: /workspace/dolphin-2.8-mistral-7b sequence_len: 16384 sample_packing: true pad_to_sequence_len: true wandb_project: dolphin wandb_entity: wandb_watch: wandb_run_id: wandb_log_model: gradient_accumulation_steps: 8 micro_batch_size: 3 num_epochs: 4 adam_beta2: 0.95 adam_epsilon: 0.00001 max_grad_norm: 1.0 lr_scheduler: cosine learning_rate: 0.000005 optimizer: adamw_bnb_8bit train_on_inputs: false group_by_length: false bf16: true fp16: false tf32: false gradient_checkpointing: true gradient_checkpointing_kwargs: use_reentrant: true early_stopping_patience: resume_from_checkpoint: local_rank: logging_steps: 1 xformers_attention: flash_attention: true warmup_steps: 10 eval_steps: 73 eval_table_size: eval_table_max_new_tokens: eval_sample_packing: false saves_per_epoch: save_steps: 73 save_total_limit: 2 debug: deepspeed: deepspeed_configs/zero3_bf16.json weight_decay: 0.1 fsdp: fsdp_config: special_tokens: eos_token: "<|im_end|>" tokens: - "<|im_start|>" ``` </details><br> # workspace/dolphin-2.8-mistral-7b This model is a fine-tuned version of [alpindale/Mistral-7B-v0.2-hf](https://huggingface.co/alpindale/Mistral-7B-v0.2-hf) on the None dataset. 
It achieves the following results on the evaluation set: - Loss: 0.4828 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-06 - train_batch_size: 3 - eval_batch_size: 3 - seed: 42 - distributed_type: multi-GPU - num_devices: 10 - gradient_accumulation_steps: 8 - total_train_batch_size: 240 - total_eval_batch_size: 30 - optimizer: Adam with betas=(0.9,0.95) and epsilon=1e-05 - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 10 - num_epochs: 4 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 1.1736 | 0.0 | 1 | 1.0338 | | 0.6106 | 0.36 | 73 | 0.5439 | | 0.5766 | 0.72 | 146 | 0.5171 | | 0.5395 | 1.06 | 219 | 0.5045 | | 0.5218 | 1.42 | 292 | 0.4976 | | 0.5336 | 1.78 | 365 | 0.4915 | | 0.5018 | 2.13 | 438 | 0.4885 | | 0.5113 | 2.48 | 511 | 0.4856 | | 0.5066 | 2.84 | 584 | 0.4838 | | 0.4967 | 3.19 | 657 | 0.4834 | | 0.4956 | 3.55 | 730 | 0.4830 | | 0.5026 | 3.9 | 803 | 0.4828 | ### Framework versions - Transformers 4.40.0.dev0 - Pytorch 2.2.1+cu121 - Datasets 2.18.0 - Tokenizers 0.15.0 # Quants - [dagbs/-GGUF](https://huggingface.co/dagbs/dolphin-2.8-mistral-7b-v02-GGUF) - [bartowski/ExLlamaV2](https://huggingface.co/bartowski/dolphin-2.8-mistral-7b-v02-exl2) - [solidrust/AWQ](https://huggingface.co/solidrust/dolphin-2.8-mistral-7b-v02-AWQ)
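As a cross-check, the total batch sizes reported in the training hyperparameters above follow directly from the per-device batch size, gradient accumulation steps, and device count:

```python
# Values from the training hyperparameters above.
train_batch_size = 3
gradient_accumulation_steps = 8
num_devices = 10

print(train_batch_size * gradient_accumulation_steps * num_devices)  # 240 = total_train_batch_size
print(train_batch_size * num_devices)  # 30 = total_eval_batch_size (no accumulation at eval)
```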
MysteriousAI/Mia-1B
MysteriousAI
2024-05-11T04:16:23Z
44
5
transformers
[ "transformers", "pytorch", "onnx", "safetensors", "llama", "text-generation", "mlx", "conversational", "en", "hi", "dataset:OEvortex/vortex-mini", "base_model:OEvortex/HelpingAI-Lite", "base_model:quantized:OEvortex/HelpingAI-Lite", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-04-05T09:14:21Z
--- language: - en - hi license: apache-2.0 library_name: transformers tags: - mlx datasets: - OEvortex/vortex-mini base_model: OEvortex/HelpingAI-Lite pipeline_tag: text-generation --- **Model Card** **Model Name:** Mia-1B **Model Type:** Text Generation **Owner:** MysteriousAI **Description:** Mia-1B is an advanced text generation model developed by MysteriousAI. It leverages state-of-the-art AI technologies to generate coherent and contextually relevant text across various domains and topics. The model is aimed at advancing and democratizing artificial intelligence through open source and open science initiatives. **Key Features:** - **Model Size:** Mia-1B comprises 1.1 billion parameters, enabling it to capture complex linguistic patterns and nuances. - **Tensor Type:** The model utilizes FP16 (Floating Point 16-bit) tensor type for efficient computation, enhancing performance and scalability. - **Inference Endpoints:** Mia-1B can be easily integrated into applications through inference endpoints, facilitating seamless deployment and usage. - **Uncensored Text Generation:** Mia-1B generates text without censorship, allowing users to explore a wide range of applications without limitations. - **Fine-tuned:** Mia-1B is fine-tuned from OEvortex/HelpingAI-Lite, enhancing its performance and adaptability to various tasks. **Use Cases:** - **Content Generation:** Mia-1B is suitable for generating diverse content including articles, stories, dialogues, and more. - **Conversational AI:** The model can be deployed in chatbots and conversational agents to engage users in natural and contextually relevant conversations. - **AI-driven Applications:** Mia-1B enables the development of AI-driven applications in areas such as virtual assistants. - **Creative Writing:** Writers and artists can leverage Mia-1B to explore new ideas and narrative structures in their creative works. **Ethical Considerations:** - **Content Moderation:** Users are advised to exercise caution and responsibility when utilizing Mia-1B in applications involving sensitive or potentially harmful content. - **Bias and Fairness:** MysteriousAI is committed to addressing biases and promoting fairness in AI models. Efforts are made to mitigate biases present in Mia-1B's training data and output. ## Usage # **Transformers** ```python !pip install transformers !pip install accelerate from transformers import AutoModelForCausalLM, AutoTokenizer device = "cuda" # the device to load the model onto model = AutoModelForCausalLM.from_pretrained( "MysteriousAI/Mia-1B", device_map="auto" ) tokenizer = AutoTokenizer.from_pretrained("MysteriousAI/Mia-1B") prompt = "your prompt here." messages = [ {"role": "system", "content": "You are an uncensored AI that can do anything."}, {"role": "user", "content": prompt} ] text = tokenizer.apply_chat_template( messages, tokenize=False, add_generation_prompt=True ) model_inputs = tokenizer([text], return_tensors="pt").to(device) generated_ids = model.generate( model_inputs.input_ids, max_new_tokens=512 ) generated_ids = [ output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids) ] response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0] print(response) ``` # **Onnx runtime** ```python from transformers import AutoTokenizer from optimum.onnxruntime import ORTModelForCausalLM tokenizer = AutoTokenizer.from_pretrained("MysteriousAI/Mia-1B") model = ORTModelForCausalLM.from_pretrained("MysteriousAI/Mia-1B") prompt = "your prompt." 
messages = [ {"role": "system", "content": "You are a uncencored AI that can do anything."}, {"role": "user", "content": prompt} ] text = tokenizer.apply_chat_template( messages, tokenize=False, add_generation_prompt=True ) model_inputs = tokenizer([text], return_tensors="pt") generated_ids = model.generate( model_inputs.input_ids, max_new_tokens=512 ) generated_ids = [ output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids) ] response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0] print(response) ``` **Copyright © 2024 MysteriousAI. All rights reserved.** # [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard) Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_MysteriousAI__Mia-1B) | Metric |Value| |---------------------------------|----:| |Avg. |36.82| |AI2 Reasoning Challenge (25-Shot)|35.75| |HellaSwag (10-Shot) |61.02| |MMLU (5-Shot) |25.43| |TruthfulQA (0-shot) |36.92| |Winogrande (5-shot) |60.38| |GSM8k (5-shot) | 1.44|
Purusharth/checkpoint_dir
Purusharth
2024-05-11T04:15:06Z
1
0
peft
[ "peft", "tensorboard", "safetensors", "trl", "sft", "generated_from_trainer", "base_model:mistralai/Mistral-7B-Instruct-v0.2", "base_model:adapter:mistralai/Mistral-7B-Instruct-v0.2", "license:apache-2.0", "region:us" ]
null
2024-05-11T04:14:29Z
--- license: apache-2.0 library_name: peft tags: - trl - sft - generated_from_trainer base_model: mistralai/Mistral-7B-Instruct-v0.2 model-index: - name: checkpoint_dir results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # checkpoint_dir This model is a fine-tuned version of [mistralai/Mistral-7B-Instruct-v0.2](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2) on the None dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-06 - train_batch_size: 4 - eval_batch_size: 4 - seed: 0 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - lr_scheduler_warmup_ratio: 0.2 - training_steps: 100 - mixed_precision_training: Native AMP ### Training results ### Framework versions - PEFT 0.10.0 - Transformers 4.40.2 - Pytorch 2.2.1+cu121 - Datasets 2.19.1 - Tokenizers 0.19.1
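## How to use

The card lists no inference code; below is a minimal sketch, assuming the PEFT adapter in this repository (`Purusharth/checkpoint_dir`) attaches to the listed base model. The prompt and generation settings are placeholders.

```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the base model, then attach the LoRA adapter from this repository.
base = AutoModelForCausalLM.from_pretrained(
    "mistralai/Mistral-7B-Instruct-v0.2", torch_dtype=torch.bfloat16, device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained("mistralai/Mistral-7B-Instruct-v0.2")
model = PeftModel.from_pretrained(base, "Purusharth/checkpoint_dir")

inputs = tokenizer("Summarize what LoRA fine-tuning does.", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```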
pszemraj/xtremedistil-l6-h256-OCR-quality
pszemraj
2024-05-11T04:06:16Z
113
1
transformers
[ "transformers", "safetensors", "bert", "text-classification", "generated_from_trainer", "en", "dataset:pszemraj/OCR-quality-classification", "base_model:microsoft/xtremedistil-l6-h256-uncased", "base_model:finetune:microsoft/xtremedistil-l6-h256-uncased", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2024-05-11T02:06:57Z
--- license: mit base_model: microsoft/xtremedistil-l6-h256-uncased tags: - generated_from_trainer metrics: - accuracy datasets: - pszemraj/OCR-quality-classification language: - en --- # xtremedistil-l6-h256-uncased: OCR-quality-classification This model is a fine-tuned version of [microsoft/xtremedistil-l6-h256-uncased](https://hf.co/microsoft/xtremedistil-l6-h256-uncased) on `pszemraj/OCR-quality-classification` It achieves the following results on the evaluation set: - Loss: 0.0316 - Accuracy: 0.994 - Num Input Tokens Seen: 57341952 ## Intended uses & limitations predict whether a document is clean or noisy ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 3e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 8 - total_train_batch_size: 64 - optimizer: Adam with betas=(0.9,0.99) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.05 - num_epochs: 2.0 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | Input Tokens Seen | |:-------------:|:------:|:----:|:---------------:|:--------:|:-----------------:| | 0.0812 | 0.2660 | 250 | 0.0860 | 0.986 | 8192000 | | 0.0637 | 0.5321 | 500 | 0.0532 | 0.988 | 16384000 | | 0.031 | 0.7981 | 750 | 0.0463 | 0.99 | 24576000 | | 0.0315 | 1.0641 | 1000 | 0.0343 | 0.992 | 32765952 | | 0.0223 | 1.3301 | 1250 | 0.0337 | 0.994 | 40957952 | | 0.0137 | 1.5962 | 1500 | 0.0423 | 0.99 | 49149952 | | 0.0186 | 1.8622 | 1750 | 0.0316 | 0.994 | 57341952 | ### Framework versions - Transformers 4.40.2 - Pytorch 2.2.0+cu121 - Datasets 2.19.1 - Tokenizers 0.19.1
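## How to use

A minimal usage sketch with the `transformers` pipeline; the exact label names come from the model's config, so treat the printed labels as illustrative.

```python
from transformers import pipeline

# Quality classifier: clean text vs. noisy (e.g., OCR-damaged) text.
classifier = pipeline(
    "text-classification", model="pszemraj/xtremedistil-l6-h256-OCR-quality"
)
print(classifier("Thi5 pa9e c0ntains he@vy OCR n0ise and garbled ch@racters."))
print(classifier("This page contains clean, well-formed prose."))
```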
qminh369/token-classification-llmlingua2-xlm-roberta-1k7_yte_10_epoch_paper
qminh369
2024-05-11T03:58:47Z
116
0
transformers
[ "transformers", "tensorboard", "safetensors", "xlm-roberta", "token-classification", "generated_from_trainer", "base_model:FacebookAI/xlm-roberta-large", "base_model:finetune:FacebookAI/xlm-roberta-large", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2024-05-11T03:27:36Z
--- license: mit base_model: FacebookAI/xlm-roberta-large tags: - generated_from_trainer model-index: - name: token-classification-llmlingua2-xlm-roberta-1k7_yte_10_epoch_paper results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # token-classification-llmlingua2-xlm-roberta-1k7_yte_10_epoch_paper This model is a fine-tuned version of [FacebookAI/xlm-roberta-large](https://huggingface.co/FacebookAI/xlm-roberta-large) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.5878 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 96 - eval_batch_size: 96 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 10 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | No log | 1.0 | 15 | 0.6462 | | No log | 2.0 | 30 | 0.6202 | | No log | 3.0 | 45 | 0.6145 | | No log | 4.0 | 60 | 0.6136 | | No log | 5.0 | 75 | 0.6092 | | No log | 6.0 | 90 | 0.5965 | | No log | 7.0 | 105 | 0.5927 | | No log | 8.0 | 120 | 0.5897 | | No log | 9.0 | 135 | 0.5880 | | No log | 10.0 | 150 | 0.5878 | ### Framework versions - Transformers 4.39.0.dev0 - Pytorch 2.2.1+cu118 - Datasets 2.18.0 - Tokenizers 0.15.2
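## How to use

A minimal sketch with the `transformers` pipeline. Based on the model name, the head appears to score tokens in the style of LLMLingua-2 prompt compression (which tokens to keep or drop); that interpretation, and the example text, are assumptions.

```python
from transformers import pipeline

tagger = pipeline(
    "token-classification",
    model="qminh369/token-classification-llmlingua2-xlm-roberta-1k7_yte_10_epoch_paper",
)
# Each returned entry carries a token, its predicted label, and a score.
for pred in tagger("Prompt compression keeps informative tokens and drops filler."):
    print(pred["word"], pred["entity"], round(pred["score"], 3))
```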
saaduddinM/Gemma2B_mul_large
saaduddinM
2024-05-11T03:53:18Z
0
0
transformers
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2024-05-11T03:53:15Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
RichardErkhov/SeaLLMs_-_SeaLLM-7B-v2-8bits
RichardErkhov
2024-05-11T03:52:36Z
77
0
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "conversational", "arxiv:2312.00738", "arxiv:2205.11916", "arxiv:2306.05179", "arxiv:2306.05685", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "8-bit", "bitsandbytes", "region:us" ]
text-generation
2024-05-11T03:43:28Z
Quantization made by Richard Erkhov. [Github](https://github.com/RichardErkhov) [Discord](https://discord.gg/pvy7H8DZMG) [Request more models](https://github.com/RichardErkhov/quant_request) SeaLLM-7B-v2 - bnb 8bits - Model creator: https://huggingface.co/SeaLLMs/ - Original model: https://huggingface.co/SeaLLMs/SeaLLM-7B-v2/ Original model description: --- license: other license_name: seallms license_link: https://huggingface.co/SeaLLMs/SeaLLM-13B-Chat/blob/main/LICENSE language: - en - zh - vi - id - th - ms - km - lo - my - tl tags: - multilingual - sea --- <p align="center"> <img src="seal_logo.png" width="200" /> </p> # *SeaLLM-7B-v2* - Large Language Models for Southeast Asia # <strong style="color: red">BIG NEWS: <a href="https://huggingface.co/SeaLLMs/SeaLLM-7B-v2.5">SeaLLM-7B-v2.5</a> is released with state-of-the-art performance in world knowledge and reasoning. SeaLLM-7B-v2 will be deprecated.</strong> <p align="center"> <a href="https://damo-nlp-sg.github.io/SeaLLMs/" target="_blank" rel="noopener">Technical Blog</a> &nbsp;&nbsp; <a href="https://huggingface.co/SeaLLMs/SeaLLM-7B-v2" target="_blank" rel="noopener"> 🤗 Tech Memo</a> &nbsp;&nbsp; <a href="https://huggingface.co/spaces/SeaLLMs/SeaLLM-7B" target="_blank" rel="noopener"> 🤗 DEMO</a> &nbsp;&nbsp; <a href="https://github.com/DAMO-NLP-SG/SeaLLMs" target="_blank" rel="noopener">Github</a> &nbsp;&nbsp; <a href="https://arxiv.org/pdf/2312.00738.pdf" target="_blank" rel="noopener">Technical Report</a> </p> We introduce [SeaLLM-7B-v2](https://huggingface.co/SeaLLMs/SeaLLM-7B-v2), the state-of-the-art multilingual LLM for Southeast Asian (SEA) languages 🇬🇧 🇨🇳 🇻🇳 🇮🇩 🇹🇭 🇲🇾 🇰🇭 🇱🇦 🇲🇲 🇵🇭. It is the most significant upgrade since [SeaLLM-13B](https://huggingface.co/SeaLLMs/SeaLLM-13B-Chat): at half the size, it outperforms its predecessor across diverse multilingual tasks, from world knowledge and math reasoning to instruction following. ### Highlights * [SeaLLM-7B-v2](https://huggingface.co/SeaLLMs/SeaLLM-7B-v2) achieves the **7B-SOTA** on the **Zero-shot CoT GSM8K** task with a **78.2** score and outperforms GPT-3.5 in many GSM8K-translated tasks in SEA languages (🇨🇳 🇻🇳 🇮🇩 🇹🇭) as well as MGSM (🇨🇳 🇹🇭). It also surpasses GPT-3.5 in MATH CoT for Thai 🇹🇭. * It scores competitively against GPT-3.5 on many zero-shot CoT commonsense benchmarks, with **82.5, 68.3, 80.9** scores on Arc-C, Winogrande, and Hellaswag. * It achieves a **7.54** score on the 🇬🇧 **MT-bench**, ranking 3rd on the leaderboard for the 7B category, and is the best-performing multilingual model there. * It scores **45.74** on the VMLU benchmark for Vietnamese 🇻🇳, and is the only open-source multilingual model competitive with monolingual models ([Vistral-7B](https://huggingface.co/Viet-Mistral/Vistral-7B-Chat)) of similar sizes. ### Release and DEMO - DEMO: [SeaLLMs/SeaLLM-7B](https://huggingface.co/spaces/SeaLLMs/SeaLLM-7B). - Technical report: [Arxiv: SeaLLMs - Large Language Models for Southeast Asia](https://arxiv.org/pdf/2312.00738.pdf). - Model weights: - [SeaLLM-7B-v2](https://huggingface.co/SeaLLMs/SeaLLM-7B-v2). - [SeaLLM-7B-v2-gguf](https://huggingface.co/SeaLLMs/SeaLLM-7B-v2-gguf). - [SeaLLM-7B-v2-GGUF (thanks Lonestriker)](https://huggingface.co/LoneStriker/SeaLLM-7B-v2-GGUF). NOTE: use [seallm.preset.json](https://huggingface.co/SeaLLMs/SeaLLM-7B-v2-gguf/blob/main/seallm.preset.json) for the chat template to work properly.
- Run locally: - [LM-studio](https://lmstudio.ai/): - [SeaLLM-7B-v2-q4_0](https://huggingface.co/SeaLLMs/SeaLLM-7B-v2-gguf/blob/main/SeaLLM-7B-v2.q4_0.gguf) and [SeaLLM-7B-v2-q8_0](https://huggingface.co/SeaLLMs/SeaLLM-7B-v2-gguf/blob/main/SeaLLM-7B-v2.q8_0.gguf). - LM-studio requires this [seallm.preset.json](https://huggingface.co/SeaLLMs/SeaLLM-7B-v2-gguf/blob/main/seallm.preset.json) to set chat template properly. - [ollama](https://ollama.ai/) `ollama run nxphi47/seallm-7b-v2:q4_0` - [MLX for Apple Silicon](https://github.com/ml-explore/mlx): [mlx-community/SeaLLM-7B-v2-4bit-mlx](https://huggingface.co/mlx-community/SeaLLM-7B-v2-4bit-mlx) <blockquote style="color:red"> <p><strong style="color: red">Terms of Use and License</strong>: By using our released weights, codes, and demos, you agree to and comply with the terms and conditions specified in our <a href="https://huggingface.co/SeaLLMs/SeaLLM-Chat-13b/edit/main/LICENSE" target="_blank" rel="noopener">SeaLLMs Terms Of Use</a>. </blockquote> > **Disclaimer**: > We must note that even though the weights, codes, and demos are released in an open manner, similar to other pre-trained language models, and despite our best efforts in red teaming and safety fine-tuning and enforcement, our models come with potential risks, including but not limited to inaccurate, misleading or potentially harmful generation. > Developers and stakeholders should perform their own red teaming and provide related security measures before deployment, and they must abide by and comply with local governance and regulations. > In no event shall the authors be held liable for any claim, damages, or other liability arising from the use of the released weights, codes, or demos. > The logo was generated by DALL-E 3. ### What's new since SeaLLM-13B-v1 and SeaLLM-7B-v1? * SeaLLM-7B-v2 is continue-pretrained from [Mistral-7B](https://huggingface.co/mistralai/Mistral-7B-v0.1) and underwent carefully designed tuning with focus in reasoning. ## Evaluation ### Zero-shot CoT Multilingual Math Reasoning [SeaLLM-7B-v2](https://huggingface.co/SeaLLMs/SeaLLM-7B-v2) achieves with **78.2** score on the GSM8K with zero-shot CoT reasoning, making it the **state of the art** in the realm of 7B models. It also outperforms GPT-3.5 in the same GSM8K benchmark as translated into SEA languages (🇨🇳 🇻🇳 🇮🇩 🇹🇭). [SeaLLM-7B-v2](https://huggingface.co/SeaLLMs/SeaLLM-7B-v2) also surpasses GPT-3.5 on the Thai-translated MATH benchmark, with **22.4** vs 18.1 scores. ![fig_sea_math_side_by_side.png](fig_sea_math_side_by_side.png) <details> <summary>See details on English and translated GSM8K and MATH with zero-shot reasoning</summary> <br> | Model | GSM8K<br>en | MATH<br>en | GSM8K<br>zh | MATH<br>zh | GSM8K<br>vi | MATH<br>vi | GSM8K<br>id | MATH<br>id | GSM8K<br>th | MATH<br>th | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | | GPT-3.5 | 80.8 | 34.1 | 48.2 | 21.5 | 55 | 26.5 | 64.3 | 26.4 | 35.8 | 18.1 | Qwen-14B-chat | 61.4 | 18.4 | 41.6 | 11.8 | 33.6 | 3.6 | 44.7 | 8.6 | 22 | 6 | Vistral-7b-chat | 48.2 | 12.5 | | | 48.7 | 3.1 | | | | | Qwen1.5-7B-chat | 56.8 | 15.3 | 40 | 2.7 | 37.7 | 9 | 36.9 | 7.7 | 21.9 | | SeaLLM-7B-v2 | 78.2 | 27.5 | 53.7 | 17.6 | 69.9 | 23.8 | 71.5 | 24.4 | 59.6 | 22.4 </details> Baselines were evaluated using their respective chat-template and system prompts ([Qwen1.5-7B-chat](https://huggingface.co/Qwen/Qwen1.5-7B-Chat/blob/main/tokenizer_config.json), [Vistral](https://huggingface.co/Viet-Mistral/Vistral-7B-Chat)). 
#### Zero-shot MGSM [SeaLLM-7B-v2](https://huggingface.co/SeaLLMs/SeaLLM-7B-v2) also outperforms GPT-3.5 and Qwen-14B on the multilingual MGSM for Zh and Th. | Model | MGSM-Zh | MGSM-Th |-----| ----- | --- | ChatGPT (reported) | 61.2 | 47.2 | Qwen-14B-chat | 59.6 | 28 | SeaLLM-7B-v2 | **64.8** | **62.4** ### Zero-shot Commonsense Reasoning We compare [SeaLLM-7B-v2](https://huggingface.co/SeaLLMs/SeaLLM-7B-v2) with ChatGPT and Mistral-7B-instruct on various zero-shot commonsense benchmarks (Arc-Challenge, Winogrande and Hellaswag). We use the 2-stage technique in [(Kojima et al., 2023)](https://arxiv.org/pdf/2205.11916.pdf) to grab the answer. Note that we **DID NOT** use "Let's think step-by-step" to invoke explicit CoT. | 0-shot reasoning | Arc-Challenge | Winogrande | Hellaswag |-----| ----- | --- | -- | | ChatGPT (reported) | 84.6* | 66.8* | 72.0* | ChatGPT (reproduced)| 84.1 | 63.1 | 79.5 | Mistral-7B-Instruct | 68.1 | 56.4 | 45.6 | Qwen1.5-7B-chat | 79.3 | 59.4 | 69.3 | SeaLLM-7B-v2 | 82.5 | 68.3 | 80.9 Baselines were evaluated using their respective chat-template and system prompts ([Qwen1.5-7B-chat](https://huggingface.co/Qwen/Qwen1.5-7B-Chat/blob/main/tokenizer_config.json), [Mistral](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.1)). ### Multilingual World Knowledge We evaluate models on 3 benchmarks following the recommended default setups: 5-shot MMLU for En, 3-shot [M3Exam](https://arxiv.org/pdf/2306.05179.pdf) (M3e) for En, Zh, Vi, Id, Th, and zero-shot [VMLU](https://vmlu.ai/) for Vi. | Model | Langs | En<br>MMLU | En<br>M3e | Zh<br>M3e | Vi<br>M3e | Vi<br>VMLU | Id<br>M3e | Th<br>M3e |-----| ----- | --- | -- | ----- | ---- | --- | --- | --- | | GPT-3.5 | Multi | 68.90 | 75.46 | 60.20 | 58.64 | 46.32 | 49.27 | 37.41 | Vistral-7B-chat | Mono | 56.86 | 67.00 | 44.56 | 54.33 | 50.03 | 36.49 | 25.27 | Qwen1.5-7B-chat | Multi | 61.00 | 52.07 | 81.96 | 43.38 | 45.02 | 24.29 | 20.25 | SeaLLM-7B-v2 | Multi | 61.89 | 70.91 | 55.43 | 51.15 | 45.74 | 42.25 | 35.52 VMLU reproduce script [here](https://github.com/DAMO-NLP-SG/SeaLLMs/blob/main/evaluation/vmlu/vmlu_run.py). Lm-eval was used to evaluate MMLU. 0-shot VMLU scores for baselines were evaluated using their respective chat-template and system prompts ([Qwen1.5-7B-chat](https://huggingface.co/Qwen/Qwen1.5-7B-Chat/blob/main/tokenizer_config.json)). ### MT-Bench On the English [MT-bench](https://arxiv.org/abs/2306.05685) metric, SeaLLM-7B-v2 achieves **7.54** score on the MT-bench (3rd place on the leaderboard for 7B category), outperforms many 70B models and is arguably the only one that handles 10 SEA languages. Refer to [mt_bench/seallm_7b_v2.jsonl](https://huggingface.co/SeaLLMs/SeaLLM-7B-v2/blob/main/evaluation/mt_bench/seallm_7b_v2.jsonl) for the MT-bench predictions of SeaLLM-7B-v2, and [here](https://github.com/lm-sys/FastChat/issues/3013#issue-2118685341) to reproduce it. 
| Model | Access | Langs | MT-Bench | --- | --- | --- | --- | | GPT-4-turbo | closed | multi | 9.32 | GPT-4-0613 | closed | multi | 9.18 | Mixtral-8x7b (46B) | open | multi | 8.3 | Starling-LM-7B-alpha | open | mono (en) | 8.0 | OpenChat-3.5-7B | open | mono (en) | 7.81 | **SeaLLM-7B-v2** | **open** | **multi (10+)** | **7.54** | [Qwen-14B](https://huggingface.co/Qwen/Qwen-14B-Chat) | open | multi | 6.96 | [Llama-2-70B](https://huggingface.co/meta-llama/Llama-2-70b-chat-hf) | open | mono (en) | 6.86 | Mistral-7B-instruct | open | mono (en) | 6.84 ### Sea-Bench Similar to MT-Bench, [Sea-bench](https://huggingface.co/datasets/SeaLLMs/Sea-bench) is a set of categorized instruction test sets that measures a model's ability as an assistant, focused specifically on 9 SEA languages, including non-Latin low-resource languages. As shown, the largest improvements come from math reasoning, reaching GPT-3.5-level performance. ![fig_sea_bench_side_by_side.png](fig_sea_bench_side_by_side.png) Refer to [sea_bench/seallm_7b_v2.jsonl](https://huggingface.co/SeaLLMs/SeaLLM-7B-v2/blob/main/evaluation/sea_bench/seallm_7b_v2.jsonl) for the Sea-bench predictions of SeaLLM-7B-v2. ### Usage #### Instruction format ```python prompt = """<|im_start|>system You are a helpful assistant.</s><|im_start|>user Hello world</s><|im_start|>assistant Hi there, how can I help?</s>""" # NOTE: previous commit has \n between </s> and <|im_start|>, that was incorrect! # <|im_start|> is not a special token. # Transformers chat_template should be consistent with vLLM format below. # ! ENSURE 1 and only 1 bos `<s>` at the beginning of sequence print(tokenizer.convert_ids_to_tokens(tokenizer.encode(prompt))) """ ['<s>', '▁<', '|', 'im', '_', 'start', '|', '>', 'system', '<0x0A>', 'You', '▁are', '▁a', '▁helpful', '▁assistant', '.', '</s>', '▁<', '|', 'im', '_', 'start', '|', '>', 'user', '<0x0A>', 'Hello', '▁world', '</s>', '▁<', '|', 'im', '_', 'start', '|', '>', 'ass', 'istant', '<0x0A>', 'Hi', '▁there', ',', '▁how', '▁can', '▁I', '▁help', '?', '</s>'] """ ``` #### Using transformers's chat_template ```python import torch from transformers import AutoModelForCausalLM, AutoTokenizer device = "cuda" # the device to load the model onto # use bfloat16 to ensure the best performance.
model = AutoModelForCausalLM.from_pretrained("SeaLLMs/SeaLLM-7B-v2", torch_dtype=torch.bfloat16, device_map=device) tokenizer = AutoTokenizer.from_pretrained("SeaLLMs/SeaLLM-7B-v2") messages = [ {"role": "system", "content": "You are a helpful assistant."}, {"role": "user", "content": "Hello world"}, {"role": "assistant", "content": "Hi there, how can I help you today?"}, {"role": "user", "content": "Explain general relativity in details."} ] encodeds = tokenizer.apply_chat_template(messages, return_tensors="pt", add_generation_prompt=True) print(tokenizer.convert_ids_to_tokens(encodeds[0])) # ['<s>', '▁<', '|', 'im', '_', 'start', '|', '>', 'system', '<0x0A>', 'You', '▁are', '▁a', '▁helpful', '▁assistant', '.', '</s>', '▁<', '|', 'im', '_', 'start', '|', '>', 'user', '<0x0A>', 'Hello', '▁world', '</s>', '▁<', '|', 'im', '_', 'start', '|', '>', 'ass', 'istant', '<0x0A>', 'Hi', '▁there', ',', '▁how', '▁can', '▁I', '▁help', '▁you', '▁today', '?', '</s>', '▁<', '|', 'im', '_', 'start', '|', '>', 'user', '<0x0A>', 'Ex', 'plain', '▁general', '▁rel', 'ativity', '▁in', '▁details', '.', '</s>', '▁<', '|', 'im', '_', 'start', '|', '>', 'ass', 'istant', '<0x0A>'] model_inputs = encodeds.to(device) model.to(device) generated_ids = model.generate(model_inputs, max_new_tokens=1000, do_sample=True, pad_token_id=tokenizer.pad_token_id) decoded = tokenizer.batch_decode(generated_ids) print(decoded[0]) ``` #### Using vLLM ```python from vllm import LLM, SamplingParams TURN_TEMPLATE = "<|im_start|>{role}\n{content}</s>" TURN_PREFIX = "<|im_start|>{role}\n" # There is no \n between </s> and <|im_start|>. def seallm_chat_convo_format(conversations, add_assistant_prefix: bool, system_prompt=None): # conversations: list of dict with key `role` and `content` (openai format) if conversations[0]['role'] != 'system' and system_prompt is not None: conversations = [{"role": "system", "content": system_prompt}] + conversations text = '' for turn_id, turn in enumerate(conversations): prompt = TURN_TEMPLATE.format(role=turn['role'], content=turn['content']) text += prompt if add_assistant_prefix: prompt = TURN_PREFIX.format(role='assistant') text += prompt return text sparams = SamplingParams(temperature=0.1, max_tokens=1024, stop=['</s>', '<|im_start|>']) llm = LLM("SeaLLMs/SeaLLM-7B-v2", dtype="bfloat16") messages = [{"role": "user", "content": "Explain general relativity in details."}] # the formatter expects a list of turns, not a bare string prompt = seallm_chat_convo_format(messages, True) gen = llm.generate(prompt, sparams) # use the SamplingParams defined above print(gen[0].outputs[0].text) ``` #### Fine-tuning SeaLLM-7B-v2 Fine-tuning should follow the chat format and accurately mask out source tokens. Here is an example. ```python conversations = [ {"role": "system", "content": "You are helful assistant."}, {"role": "user", "content": "Hello world."}, {"role": "assistant", "content": "Hi there, how can I help?"}, {"role": "user", "content": "Tell me a joke."}, {"role": "assistant", "content": "Why don't scientists trust atoms? Because they make up everything."}, ] def seallm_7b_v2_tokenize_multi_turns(tokenizer, conversations, add_assistant_prefix=False): """ Inputs: conversations: list of dict following openai format, eg conversations = [ {"role": "system", "content": "You are helful assistant."}, {"role": "user", "content": "Hello world."}, {"role": "assistant", "content": "Hi there, how can I help?"}, {"role": "user", "content": "Tell me a joke."}, {"role": "assistant", "content": "Why don't scientists trust atoms? 
Because they make up everything."}, ] add_assistant_prefix: whether to add assistant_prefix, only for inference decoding Outputs: tokenize_output_sample, { "input_ids": ... "token_type_ids": 1 if train and 0 if masked out (not train) } During training, need to create a labels, with masked-out tokens = -100 to avoid loss computations. labels = sample['input_ids'].clone() labels[sample['token_type_ids'] == 0] = -100 """ TURN_TEMPLATE = "<|im_start|>{role}\n{content}</s>" TURN_PREFIX = "<|im_start|>{role}\n" sample = None assistant_prefix_len = None for turn_id, turn in enumerate(conversations): prompt = TURN_TEMPLATE.format(role=turn['role'], content=turn['content']) turn_sample = tokenizer( prompt, padding=False, truncation=False, verbose=False, add_special_tokens=False, return_token_type_ids=True, ) if turn['role'] == 'assistant': if assistant_prefix_len is None: assistant_prefix_len = len(tokenizer.encode(TURN_PREFIX.format(role=turn['role']), add_special_tokens=False)) turn_sample['token_type_ids'][assistant_prefix_len:] = [1] * (len(turn_sample['input_ids']) - assistant_prefix_len) if sample is None: sample = turn_sample else: for k in turn_sample.keys(): sample[k].extend(turn_sample[k]) if add_assistant_prefix: assistant_prefix_sample = tokenizer( TURN_PREFIX.format(role="assistant"), padding=False, truncation=False, verbose=False, add_special_tokens=False, return_token_type_ids=True, ) for k in sample.keys(): sample[k].extend(assistant_prefix_sample[k]) if tokenizer.add_bos_token: sample['input_ids'] = [tokenizer.bos_token_id] + sample['input_ids'] sample['attention_mask'] = [1] + sample['attention_mask'] sample['token_type_ids'] = [sample['token_type_ids'][0]] + sample['token_type_ids'] return sample # ! testing sample = seallm_7b_v2_tokenize_multi_turns(tokenizer, conversations) print(tokenizer.convert_ids_to_tokens(sample['input_ids'])) print(sample['token_type_ids']) # ['<s>', '▁<', '|', 'im', '_', 'start', '|', '>', 'system', '<0x0A>', 'You', '▁are', '▁hel', 'ful', '▁assistant', '.', '</s>', '▁<', '|', 'im', '_', 'start', '|', '>', 'user', '<0x0A>', 'Hello', '▁world', '.', '</s>', '▁<', '|', 'im', '_', 'start', '|', '>', 'ass', 'istant', '<0x0A>', 'Hi', '▁there', ',', '▁how', '▁can', '▁I', '▁help', '?', '</s>', '▁<', '|', 'im', '_', 'start', '|', '>', 'user', '<0x0A>', 'Tell', '▁me', '▁a', '▁joke', '.', '</s>', '▁<', '|', 'im', '_', 'start', '|', '>', 'ass', 'istant', '<0x0A>', 'Why', '▁don', "'", 't', '▁scientists', '▁trust', '▁atoms', '?', '▁Because', '▁they', '▁make', '▁up', '▁everything', '.', '</s>'] # [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1] ``` ## Acknowledgement to Our Linguists We would like to express our special thanks to our professional and native linguists, Tantong Champaiboon, Nguyen Ngoc Yen Nhi and Tara Devina Putri, who helped build, evaluate, and fact-check our sampled pretraining and SFT dataset as well as evaluating our models across different aspects, especially safety. ## Citation If you find our project useful, we hope you would kindly star our repo and cite our work as follows: Corresponding Author: [l.bing@alibaba-inc.com](mailto:l.bing@alibaba-inc.com) **Author list and order will change!** * `*` and `^` are equal contributions. 
``` @article{damonlpsg2023seallm, author = {Xuan-Phi Nguyen*, Wenxuan Zhang*, Xin Li*, Mahani Aljunied*, Zhiqiang Hu, Chenhui Shen^, Yew Ken Chia^, Xingxuan Li, Jianyu Wang, Qingyu Tan, Liying Cheng, Guanzheng Chen, Yue Deng, Sen Yang, Chaoqun Liu, Hang Zhang, Lidong Bing}, title = {SeaLLMs - Large Language Models for Southeast Asia}, year = 2023, Eprint = {arXiv:2312.00738}, } ```
netcat420/MFANN3bv0.8
netcat420
2024-05-11T03:49:06Z
11
0
transformers
[ "transformers", "safetensors", "phi", "text-generation", "text-classification", "en", "dataset:netcat420/MFANN", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-classification
2024-05-10T05:11:43Z
--- library_name: transformers license: apache-2.0 datasets: - netcat420/MFANN language: - en pipeline_tag: text-classification --- MFANN 3b version 0.8 ![image/png](https://cdn-uploads.huggingface.co/production/uploads/6435f27b2d0ed796668ffd8b/DaGRoYdRwb_q5ni-2Wh6R.png) Fine-tuned on the MFANN dataset as of 5/10/2024; the dataset is continually expanding. Benchmark results: | Metric | Value | |---|---| | Avg. | 63.5 | | ARC | 61.52 | | HellaSwag | 76.09 | | MMLU | 55.58 | | TruthfulQA | 54.44 | | Winogrande | 78.53 (very high for a 3b model!) | | GSM8K | 54.81 |
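Usage sketch (not from the card): standard causal-LM generation with `transformers`. The prompt format is an assumption, as the card does not specify one.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("netcat420/MFANN3bv0.8")
model = AutoModelForCausalLM.from_pretrained(
    "netcat420/MFANN3bv0.8", torch_dtype=torch.float16, device_map="auto"
)

inputs = tokenizer("Explain why the sky is blue.", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```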
QinLiuNLP/mistral-poe-10p-10th-0.8
QinLiuNLP
2024-05-11T03:48:24Z
26
0
peft
[ "peft", "tensorboard", "safetensors", "mistral", "alignment-handbook", "trl", "sft", "generated_from_trainer", "dataset:HuggingFaceH4/ultrachat_200k", "base_model:mistralai/Mistral-7B-v0.1", "base_model:adapter:mistralai/Mistral-7B-v0.1", "license:apache-2.0", "region:us" ]
null
2024-05-10T22:46:26Z
--- license: apache-2.0 library_name: peft tags: - alignment-handbook - trl - sft - generated_from_trainer base_model: mistralai/Mistral-7B-v0.1 datasets: - HuggingFaceH4/ultrachat_200k model-index: - name: mistral-poe-10p-10th-0.8 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # mistral-poe-10p-10th-0.8 This model is a fine-tuned version of [mistralai/Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1) on the HuggingFaceH4/ultrachat_200k dataset. It achieves the following results on the evaluation set: - Loss: nan ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 8 - eval_batch_size: 1 - seed: 42 - distributed_type: multi-GPU - num_devices: 4 - total_train_batch_size: 32 - total_eval_batch_size: 4 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 1 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 1.0957 | 1.0 | 1949 | nan | ### Framework versions - PEFT 0.7.1 - Transformers 4.39.0.dev0 - Pytorch 2.1.2 - Datasets 2.14.6 - Tokenizers 0.15.2
Litzy619/Phi0503HMA8
Litzy619
2024-05-11T03:46:57Z
0
0
null
[ "safetensors", "generated_from_trainer", "base_model:microsoft/Phi-3-mini-4k-instruct", "base_model:finetune:microsoft/Phi-3-mini-4k-instruct", "license:mit", "region:us" ]
null
2024-05-10T23:44:39Z
--- license: mit base_model: microsoft/Phi-3-mini-4k-instruct tags: - generated_from_trainer model-index: - name: Phi0503HMA8 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Phi0503HMA8 This model is a fine-tuned version of [microsoft/Phi-3-mini-4k-instruct](https://huggingface.co/microsoft/Phi-3-mini-4k-instruct) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.1630 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 16 - total_train_batch_size: 128 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine_with_restarts - lr_scheduler_warmup_steps: 100 - num_epochs: 3 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 4.6142 | 0.09 | 10 | 1.3143 | | 0.5476 | 0.18 | 20 | 0.2468 | | 0.3629 | 0.27 | 30 | 0.2346 | | 0.2414 | 0.36 | 40 | 0.2269 | | 0.2169 | 0.45 | 50 | 0.1917 | | 0.1925 | 0.54 | 60 | 0.1844 | | 0.1977 | 0.63 | 70 | 0.1833 | | 0.1755 | 0.73 | 80 | 0.1650 | | 0.1689 | 0.82 | 90 | 0.1627 | | 0.1552 | 0.91 | 100 | 0.7844 | | 0.2892 | 1.0 | 110 | 0.1149 | | 1.1144 | 1.09 | 120 | 0.1357 | | 0.1033 | 1.18 | 130 | 0.0829 | | 0.098 | 1.27 | 140 | 0.0898 | | 0.0863 | 1.36 | 150 | 0.0845 | | 0.0913 | 1.45 | 160 | 0.0791 | | 0.0782 | 1.54 | 170 | 0.0708 | | 0.0804 | 1.63 | 180 | 0.0786 | | 0.089 | 1.72 | 190 | 0.2288 | | 0.3087 | 1.81 | 200 | 0.1845 | | 0.449 | 1.9 | 210 | 0.3669 | | 0.7395 | 1.99 | 220 | 0.3523 | | 0.5132 | 2.08 | 230 | 0.1956 | | 0.1939 | 2.18 | 240 | 0.1647 | | 0.1612 | 2.27 | 250 | 0.1673 | | 0.1638 | 2.36 | 260 | 0.1636 | | 0.1617 | 2.45 | 270 | 0.1634 | | 0.1617 | 2.54 | 280 | 0.1640 | | 0.1626 | 2.63 | 290 | 0.1641 | | 0.1635 | 2.72 | 300 | 0.1634 | | 0.1638 | 2.81 | 310 | 0.1632 | | 0.162 | 2.9 | 320 | 0.1630 | | 0.1659 | 2.99 | 330 | 0.1630 | ### Framework versions - Transformers 4.36.0.dev0 - Pytorch 2.1.2+cu121 - Datasets 2.14.6 - Tokenizers 0.14.0
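For reference, a sketch of how the listed hyperparameters map onto `transformers`' `TrainingArguments` (model, dataset, and `Trainer` wiring omitted; the output directory name is a placeholder):

```python
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="Phi0503HMA8",            # placeholder
    learning_rate=3e-4,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    gradient_accumulation_steps=16,      # 8 x 16 = total train batch size 128
    lr_scheduler_type="cosine_with_restarts",
    warmup_steps=100,
    num_train_epochs=3,
    fp16=True,                           # "Native AMP" mixed precision
)
```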
code801/test
code801
2024-05-11T03:32:11Z
0
0
null
[ "license:apache-2.0", "region:us" ]
null
2024-05-11T03:32:11Z
--- license: apache-2.0 ---
kali1/ko-address-whisper-medium-processor
kali1
2024-05-11T03:29:20Z
0
0
transformers
[ "transformers", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2024-05-11T03:29:19Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
Raneechu/litreviewUnsupervised
Raneechu
2024-05-11T03:27:53Z
0
0
peft
[ "peft", "tensorboard", "safetensors", "generated_from_trainer", "base_model:meta-llama/Llama-2-7b-hf", "base_model:adapter:meta-llama/Llama-2-7b-hf", "license:llama2", "region:us" ]
null
2024-05-11T03:27:49Z
--- license: llama2 library_name: peft tags: - generated_from_trainer base_model: meta-llama/Llama-2-7b-hf model-index: - name: litreview results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # litreview This model is a fine-tuned version of [meta-llama/Llama-2-7b-hf](https://huggingface.co/meta-llama/Llama-2-7b-hf) on the None dataset. It achieves the following results on the evaluation set: - Loss: 3.2847 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.003 - train_batch_size: 2 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 8 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - training_steps: 1 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:------:|:----:|:---------------:| | 3.2986 | 0.0013 | 1 | 3.2847 | ### Framework versions - Transformers 4.40.1 - Pytorch 2.1.1+cu121 - Datasets 2.14.5 - Tokenizers 0.19.1 ## Training procedure ### Framework versions - PEFT 0.6.2
LoneStriker/dolphin-2.9.1-llama-3-8b-8.0bpw-h8-exl2
LoneStriker
2024-05-11T03:26:47Z
8
3
transformers
[ "transformers", "safetensors", "llama", "text-generation", "generated_from_trainer", "axolotl", "conversational", "dataset:cognitivecomputations/Dolphin-2.9", "dataset:teknium/OpenHermes-2.5", "dataset:m-a-p/CodeFeedback-Filtered-Instruction", "dataset:cognitivecomputations/dolphin-coder", "dataset:cognitivecomputations/samantha-data", "dataset:microsoft/orca-math-word-problems-200k", "dataset:Locutusque/function-calling-chatml", "dataset:internlm/Agent-FLAN", "base_model:meta-llama/Meta-Llama-3-8B", "base_model:quantized:meta-llama/Meta-Llama-3-8B", "license:other", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "8-bit", "exl2", "region:us" ]
text-generation
2024-05-11T03:23:19Z
--- license: other base_model: meta-llama/Meta-Llama-3-8B tags: - generated_from_trainer - axolotl model-index: - name: out results: [] datasets: - cognitivecomputations/Dolphin-2.9 - teknium/OpenHermes-2.5 - m-a-p/CodeFeedback-Filtered-Instruction - cognitivecomputations/dolphin-coder - cognitivecomputations/samantha-data - microsoft/orca-math-word-problems-200k - Locutusque/function-calling-chatml - internlm/Agent-FLAN --- # Dolphin 2.9.1 Llama 3 8b 🐬 Curated and trained by Eric Hartford, Lucas Atkins, Fernando Fernandes, and Cognitive Computations Discord: https://discord.gg/8fbBeC7ZGx <img src="https://cdn-uploads.huggingface.co/production/uploads/63111b2d88942700629f5771/ldkN1J0WIDQwU4vutGYiD.png" width="600" /> We have retrained our Llama-3-8b fine-tune to address behavioral issues in the initial 2.9 dataset. Specifically, Systemchat was causing the model to be *too* reliant on the system prompt. Additionally, it had an occasional quirk that would cause the model to overly reference the system prompt. We also found that generation length was at times insufficient for certain tasks. We identified the culprit as Ultrachat. Accounting for these concerns, we removed Systemchat and Ultrachat from the dataset. It is otherwise identical to dolphin-2.9. Our appreciation for the sponsors of Dolphin 2.9.1: - [Crusoe Cloud](https://crusoe.ai/) - provided an excellent on-demand 8x L40S node This model is based on Llama-3-8b and is governed by the [META LLAMA 3 COMMUNITY LICENSE AGREEMENT](LICENSE). The base model has 8k context, and the full-weight fine-tuning was done with a 4k sequence length. It took 1.5 days on an 8x L40S provided by Crusoe Cloud. This model was trained FFT on all parameters, using the ChatML prompt template format. Example: ``` <|im_start|>system You are Dolphin, a helpful AI assistant.<|im_end|> <|im_start|>user {prompt}<|im_end|> <|im_start|>assistant ``` Dolphin-2.9.1 has a variety of instruction, conversational, and coding skills. It also has initial agentic abilities and supports function calling. Dolphin is uncensored. We have filtered the dataset to remove alignment and bias. This makes the model more compliant. You are advised to implement your own alignment layer before exposing the model as a service. It will be highly compliant with any requests, even unethical ones. Please read my blog post about uncensored models: https://erichartford.com/uncensored-models You are responsible for any content you create using this model. Enjoy responsibly. Dolphin is licensed according to Meta's Llama license. We grant permission for any use, including commercial, that is in accordance with Meta's Llama-3 license. Dolphin was trained on data generated from GPT-4, among other models.
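For reference, the ChatML prompt shown above can be produced with the tokenizer's built-in chat template; a minimal sketch (using the upstream full-precision repo id, an assumption, since this entry is an EXL2 quant):

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("cognitivecomputations/dolphin-2.9.1-llama-3-8b")
messages = [
    {"role": "system", "content": "You are Dolphin, a helpful AI assistant."},
    {"role": "user", "content": "Write a haiku about the sea."},
]
# Renders the <|im_start|>...<|im_end|> turns, ending with the assistant header.
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
print(prompt)
```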
## Evals ![image/png](https://cdn-uploads.huggingface.co/production/uploads/63111b2d88942700629f5771/0pqSc8jsJlhBH8dcgpwE7.png) ## Training [<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/OpenAccess-AI-Collective/axolotl) <details><summary>See axolotl config</summary> axolotl version: `0.4.0` ```yaml base_model: meta-llama/Meta-Llama-3-8B model_type: AutoModelForCausalLM tokenizer_type: AutoTokenizer tokenizer_use_fast: false load_in_8bit: false load_in_4bit: false strict: false model_config: datasets: - path: /workspace/datasets/dolphin-2.9/dolphin201-sharegpt2.jsonl type: sharegpt conversation: chatml - path: /workspace/datasets/dolphin-2.9/dolphin-coder-translate-sharegpt2.jsonl type: sharegpt conversation: chatml - path: /workspace/datasets/dolphin-2.9/dolphin-coder-codegen-sharegpt2.jsonl type: sharegpt conversation: chatml - path: /workspace/datasets/dolphin-2.9/m-a-p_Code-Feedback-sharegpt-unfiltered.jsonl type: sharegpt conversation: chatml - path: /workspace/datasets/dolphin-2.9/m-a-p_CodeFeedback-Filtered-Instruction-sharegpt-unfiltered.jsonl type: sharegpt conversation: chatml - path: /workspace/datasets/dolphin-2.9/not_samantha_norefusals.jsonl type: sharegpt conversation: chatml - path: /workspace/datasets/dolphin-2.9/Orca-Math-resort-unfiltered.jsonl type: sharegpt conversation: chatml - path: /workspace/datasets/dolphin-2.9/agent_instruct_react_unfiltered.jsonl type: sharegpt conversation: chatml - path: /workspace/datasets/dolphin-2.9/toolbench_instruct_j1s1_3k_unfiltered.jsonl type: sharegpt conversation: chatml - path: /workspace/datasets/dolphin-2.9/toolbench_negative_unfiltered.jsonl type: sharegpt conversation: chatml - path: /workspace/datasets/dolphin-2.9/toolbench_react_10p_unfiltered.jsonl type: sharegpt conversation: chatml - path: /workspace/datasets/dolphin-2.9/toolbench_tflan_cot_30p_unfiltered.jsonl type: sharegpt conversation: chatml - path: /workspace/datasets/dolphin-2.9/openhermes200k_unfiltered.jsonl type: sharegpt conversation: chatml chat_template: chatml dataset_prepared_path: /workspace/datasets/dolphin-2.9/thingy val_set_size: 0.0002 output_dir: ./out sequence_len: 4096 sample_packing: true pad_to_sequence_len: true gradient_accumulation_steps: 4 micro_batch_size: 3 num_epochs: 3 logging_steps: 1 optimizer: adamw_8bit lr_scheduler: cosine learning_rate: 2e-5 wandb_project: dolphin-2.9-mixtral-8x22b wandb_watch: wandb_run_id: wandb_log_model: train_on_inputs: false group_by_length: false bf16: auto fp16: tf32: false gradient_checkpointing: true gradient_checkpointing_kwargs: use_reentrant: false early_stopping_patience: resume_from_checkpoint: local_rank: logging_steps: 1 xformers_attention: flash_attention: true saves_per_epoch: 4 save_total_limit: 2 save_steps: evals_per_epoch: 4 eval_sample_packing: false debug: deepspeed: deepspeed_configs/zero3_bf16.json weight_decay: 0.05 fsdp: fsdp_config: special_tokens: eos_token: "<|im_end|>" pad_token: "<|end_of_text|>" tokens: - "<|im_start|>" - "<|im_end|>" ``` </details><br> ### Framework versions - Transformers 4.40.0 - Pytorch 2.2.2+cu121 - Datasets 2.18.0 - Tokenizers 0.19.1
LoneStriker/dolphin-2.9.1-llama-3-8b-6.0bpw-h6-exl2
LoneStriker
2024-05-11T03:23:16Z
6
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "generated_from_trainer", "axolotl", "conversational", "dataset:cognitivecomputations/Dolphin-2.9", "dataset:teknium/OpenHermes-2.5", "dataset:m-a-p/CodeFeedback-Filtered-Instruction", "dataset:cognitivecomputations/dolphin-coder", "dataset:cognitivecomputations/samantha-data", "dataset:microsoft/orca-math-word-problems-200k", "dataset:Locutusque/function-calling-chatml", "dataset:internlm/Agent-FLAN", "base_model:meta-llama/Meta-Llama-3-8B", "base_model:quantized:meta-llama/Meta-Llama-3-8B", "license:other", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "6-bit", "exl2", "region:us" ]
text-generation
2024-05-11T03:20:30Z
--- license: other base_model: meta-llama/Meta-Llama-3-8B tags: - generated_from_trainer - axolotl model-index: - name: out results: [] datasets: - cognitivecomputations/Dolphin-2.9 - teknium/OpenHermes-2.5 - m-a-p/CodeFeedback-Filtered-Instruction - cognitivecomputations/dolphin-coder - cognitivecomputations/samantha-data - microsoft/orca-math-word-problems-200k - Locutusque/function-calling-chatml - internlm/Agent-FLAN --- # Dolphin 2.9.1 Llama 3 8b 🐬 Curated and trained by Eric Hartford, Lucas Atkins, Fernando Fernandes, and Cognitive Computations Discord: https://discord.gg/8fbBeC7ZGx <img src="https://cdn-uploads.huggingface.co/production/uploads/63111b2d88942700629f5771/ldkN1J0WIDQwU4vutGYiD.png" width="600" /> We have retrained our Llama-3-8b fine-tune to address behavioral issues in the initial 2.9 dataset. Specifically, Systemchat was causing the model to be *too* reliant on the system prompt. Additionally, it had an occasional quirk that would cause the model to overly reference the system prompt. We also found that generation length was at times insufficient for certain tasks. We identified the culprit as Ultrachat. Accounting for these concerns, we removed Systemchat and Ultrachat from the dataset. It is otherwise identical to dolphin-2.9. Our appreciation for the sponsors of Dolphin 2.9.1: - [Crusoe Cloud](https://crusoe.ai/) - provided an excellent on-demand 8x L40S node This model is based on Llama-3-8b and is governed by the [META LLAMA 3 COMMUNITY LICENSE AGREEMENT](LICENSE). The base model has 8k context, and the full-weight fine-tuning was done with a 4k sequence length. It took 1.5 days on an 8x L40S provided by Crusoe Cloud. This model was trained FFT on all parameters, using the ChatML prompt template format. Example: ``` <|im_start|>system You are Dolphin, a helpful AI assistant.<|im_end|> <|im_start|>user {prompt}<|im_end|> <|im_start|>assistant ``` Dolphin-2.9.1 has a variety of instruction, conversational, and coding skills. It also has initial agentic abilities and supports function calling. Dolphin is uncensored. We have filtered the dataset to remove alignment and bias. This makes the model more compliant. You are advised to implement your own alignment layer before exposing the model as a service. It will be highly compliant with any requests, even unethical ones. Please read my blog post about uncensored models: https://erichartford.com/uncensored-models You are responsible for any content you create using this model. Enjoy responsibly. Dolphin is licensed according to Meta's Llama license. We grant permission for any use, including commercial, that is in accordance with Meta's Llama-3 license. Dolphin was trained on data generated from GPT-4, among other models.
## Evals ![image/png](https://cdn-uploads.huggingface.co/production/uploads/63111b2d88942700629f5771/0pqSc8jsJlhBH8dcgpwE7.png) ## Training [<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/OpenAccess-AI-Collective/axolotl) <details><summary>See axolotl config</summary> axolotl version: `0.4.0` ```yaml base_model: meta-llama/Meta-Llama-3-8B model_type: AutoModelForCausalLM tokenizer_type: AutoTokenizer tokenizer_use_fast: false load_in_8bit: false load_in_4bit: false strict: false model_config: datasets: - path: /workspace/datasets/dolphin-2.9/dolphin201-sharegpt2.jsonl type: sharegpt conversation: chatml - path: /workspace/datasets/dolphin-2.9/dolphin-coder-translate-sharegpt2.jsonl type: sharegpt conversation: chatml - path: /workspace/datasets/dolphin-2.9/dolphin-coder-codegen-sharegpt2.jsonl type: sharegpt conversation: chatml - path: /workspace/datasets/dolphin-2.9/m-a-p_Code-Feedback-sharegpt-unfiltered.jsonl type: sharegpt conversation: chatml - path: /workspace/datasets/dolphin-2.9/m-a-p_CodeFeedback-Filtered-Instruction-sharegpt-unfiltered.jsonl type: sharegpt conversation: chatml - path: /workspace/datasets/dolphin-2.9/not_samantha_norefusals.jsonl type: sharegpt conversation: chatml - path: /workspace/datasets/dolphin-2.9/Orca-Math-resort-unfiltered.jsonl type: sharegpt conversation: chatml - path: /workspace/datasets/dolphin-2.9/agent_instruct_react_unfiltered.jsonl type: sharegpt conversation: chatml - path: /workspace/datasets/dolphin-2.9/toolbench_instruct_j1s1_3k_unfiltered.jsonl type: sharegpt conversation: chatml - path: /workspace/datasets/dolphin-2.9/toolbench_negative_unfiltered.jsonl type: sharegpt conversation: chatml - path: /workspace/datasets/dolphin-2.9/toolbench_react_10p_unfiltered.jsonl type: sharegpt conversation: chatml - path: /workspace/datasets/dolphin-2.9/toolbench_tflan_cot_30p_unfiltered.jsonl type: sharegpt conversation: chatml - path: /workspace/datasets/dolphin-2.9/openhermes200k_unfiltered.jsonl type: sharegpt conversation: chatml chat_template: chatml dataset_prepared_path: /workspace/datasets/dolphin-2.9/thingy val_set_size: 0.0002 output_dir: ./out sequence_len: 4096 sample_packing: true pad_to_sequence_len: true gradient_accumulation_steps: 4 micro_batch_size: 3 num_epochs: 3 logging_steps: 1 optimizer: adamw_8bit lr_scheduler: cosine learning_rate: 2e-5 wandb_project: dolphin-2.9-mixtral-8x22b wandb_watch: wandb_run_id: wandb_log_model: train_on_inputs: false group_by_length: false bf16: auto fp16: tf32: false gradient_checkpointing: true gradient_checkpointing_kwargs: use_reentrant: false early_stopping_patience: resume_from_checkpoint: local_rank: logging_steps: 1 xformers_attention: flash_attention: true saves_per_epoch: 4 save_total_limit: 2 save_steps: evals_per_epoch: 4 eval_sample_packing: false debug: deepspeed: deepspeed_configs/zero3_bf16.json weight_decay: 0.05 fsdp: fsdp_config: special_tokens: eos_token: "<|im_end|>" pad_token: "<|end_of_text|>" tokens: - "<|im_start|>" - "<|im_end|>" ``` </details><br> ### Framework versions - Transformers 4.40.0 - Pytorch 2.2.2+cu121 - Datasets 2.18.0 - Tokenizers 0.19.1
RichardErkhov/garage-bAInd_-_Platypus2-7B-gguf
RichardErkhov
2024-05-11T03:22:02Z
22
0
null
[ "gguf", "arxiv:2308.07317", "arxiv:2307.09288", "endpoints_compatible", "region:us" ]
null
2024-05-11T01:29:54Z
Quantization made by Richard Erkhov. [Github](https://github.com/RichardErkhov) [Discord](https://discord.gg/pvy7H8DZMG) [Request more models](https://github.com/RichardErkhov/quant_request) Platypus2-7B - GGUF - Model creator: https://huggingface.co/garage-bAInd/ - Original model: https://huggingface.co/garage-bAInd/Platypus2-7B/ | Name | Quant method | Size | | ---- | ---- | ---- | | [Platypus2-7B.Q2_K.gguf](https://huggingface.co/RichardErkhov/garage-bAInd_-_Platypus2-7B-gguf/blob/main/Platypus2-7B.Q2_K.gguf) | Q2_K | 2.36GB | | [Platypus2-7B.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/garage-bAInd_-_Platypus2-7B-gguf/blob/main/Platypus2-7B.IQ3_XS.gguf) | IQ3_XS | 2.6GB | | [Platypus2-7B.IQ3_S.gguf](https://huggingface.co/RichardErkhov/garage-bAInd_-_Platypus2-7B-gguf/blob/main/Platypus2-7B.IQ3_S.gguf) | IQ3_S | 2.75GB | | [Platypus2-7B.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/garage-bAInd_-_Platypus2-7B-gguf/blob/main/Platypus2-7B.Q3_K_S.gguf) | Q3_K_S | 2.75GB | | [Platypus2-7B.IQ3_M.gguf](https://huggingface.co/RichardErkhov/garage-bAInd_-_Platypus2-7B-gguf/blob/main/Platypus2-7B.IQ3_M.gguf) | IQ3_M | 2.9GB | | [Platypus2-7B.Q3_K.gguf](https://huggingface.co/RichardErkhov/garage-bAInd_-_Platypus2-7B-gguf/blob/main/Platypus2-7B.Q3_K.gguf) | Q3_K | 3.07GB | | [Platypus2-7B.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/garage-bAInd_-_Platypus2-7B-gguf/blob/main/Platypus2-7B.Q3_K_M.gguf) | Q3_K_M | 3.07GB | | [Platypus2-7B.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/garage-bAInd_-_Platypus2-7B-gguf/blob/main/Platypus2-7B.Q3_K_L.gguf) | Q3_K_L | 3.35GB | | [Platypus2-7B.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/garage-bAInd_-_Platypus2-7B-gguf/blob/main/Platypus2-7B.IQ4_XS.gguf) | IQ4_XS | 3.4GB | | [Platypus2-7B.Q4_0.gguf](https://huggingface.co/RichardErkhov/garage-bAInd_-_Platypus2-7B-gguf/blob/main/Platypus2-7B.Q4_0.gguf) | Q4_0 | 3.56GB | | [Platypus2-7B.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/garage-bAInd_-_Platypus2-7B-gguf/blob/main/Platypus2-7B.IQ4_NL.gguf) | IQ4_NL | 3.58GB | | [Platypus2-7B.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/garage-bAInd_-_Platypus2-7B-gguf/blob/main/Platypus2-7B.Q4_K_S.gguf) | Q4_K_S | 3.59GB | | [Platypus2-7B.Q4_K.gguf](https://huggingface.co/RichardErkhov/garage-bAInd_-_Platypus2-7B-gguf/blob/main/Platypus2-7B.Q4_K.gguf) | Q4_K | 3.8GB | | [Platypus2-7B.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/garage-bAInd_-_Platypus2-7B-gguf/blob/main/Platypus2-7B.Q4_K_M.gguf) | Q4_K_M | 3.8GB | | [Platypus2-7B.Q4_1.gguf](https://huggingface.co/RichardErkhov/garage-bAInd_-_Platypus2-7B-gguf/blob/main/Platypus2-7B.Q4_1.gguf) | Q4_1 | 3.95GB | | [Platypus2-7B.Q5_0.gguf](https://huggingface.co/RichardErkhov/garage-bAInd_-_Platypus2-7B-gguf/blob/main/Platypus2-7B.Q5_0.gguf) | Q5_0 | 4.33GB | | [Platypus2-7B.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/garage-bAInd_-_Platypus2-7B-gguf/blob/main/Platypus2-7B.Q5_K_S.gguf) | Q5_K_S | 4.33GB | | [Platypus2-7B.Q5_K.gguf](https://huggingface.co/RichardErkhov/garage-bAInd_-_Platypus2-7B-gguf/blob/main/Platypus2-7B.Q5_K.gguf) | Q5_K | 4.45GB | | [Platypus2-7B.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/garage-bAInd_-_Platypus2-7B-gguf/blob/main/Platypus2-7B.Q5_K_M.gguf) | Q5_K_M | 4.45GB | | [Platypus2-7B.Q5_1.gguf](https://huggingface.co/RichardErkhov/garage-bAInd_-_Platypus2-7B-gguf/blob/main/Platypus2-7B.Q5_1.gguf) | Q5_1 | 4.72GB | | 
[Platypus2-7B.Q6_K.gguf](https://huggingface.co/RichardErkhov/garage-bAInd_-_Platypus2-7B-gguf/blob/main/Platypus2-7B.Q6_K.gguf) | Q6_K | 5.15GB |

Original model description:
---
license: cc-by-nc-sa-4.0
language:
- en
datasets:
- garage-bAInd/Open-Platypus
---

# Platypus2-7B

**NOTE**: There is a known issue with LLaMA-2 7B: fine-tuning only works if you use `fp16=False` and `bf16=True` in the HF trainer. We are gathering more intel on this, but if you have any thoughts about this issue or performance, please let us know!

Platypus2-7B is an instruction fine-tuned model based on the LLaMA2-7B transformer architecture.

![Platty](./Best_Platty_small.jpeg)

### Model Details

* **Trained by**: Cole Hunter & Ariel Lee
* **Model type:** **Platypus2-7B** is an auto-regressive language model based on the LLaMA2 transformer architecture.
* **Language(s)**: English
* **License for base weights**: Non-Commercial Creative Commons license ([CC BY-NC-4.0](https://creativecommons.org/licenses/by-nc/4.0/))

### Prompt Template
```
### Instruction:

<prompt> (without the <>)

### Response:
```

### Training Dataset

`garage-bAInd/Platypus2-7B` was trained using the STEM and logic-based dataset [`garage-bAInd/Open-Platypus`](https://huggingface.co/datasets/garage-bAInd/Open-Platypus).

Please see our [paper](https://arxiv.org/abs/2308.07317) and [project webpage](https://platypus-llm.github.io) for additional information.

### Training Procedure

`garage-bAInd/Platypus2-7B` was instruction fine-tuned using LoRA on 1 A100 80GB. For training details and inference instructions please see the [Platypus2](https://github.com/arielnlee/Platypus) GitHub repo.

### Reproducing Evaluation Results

Install LM Evaluation Harness:
```
# clone repository
git clone https://github.com/EleutherAI/lm-evaluation-harness.git
# change to repo directory
cd lm-evaluation-harness
# check out the correct commit
git checkout b281b0921b636bc36ad05c0b0b0763bd6dd43463
# install
pip install -e .
```
Each task was evaluated on 1 A100 80GB GPU.

ARC:
```
python main.py --model hf-causal-experimental --model_args pretrained=garage-bAInd/Platypus2-7B,use_accelerate=True,dtype="bfloat16" --tasks arc_challenge --batch_size 2 --no_cache --write_out --output_path results/Platypus2-7B/arc_challenge_25shot.json --device cuda --num_fewshot 25
```

HellaSwag:
```
python main.py --model hf-causal-experimental --model_args pretrained=garage-bAInd/Platypus2-7B,use_accelerate=True,dtype="bfloat16" --tasks hellaswag --batch_size 2 --no_cache --write_out --output_path results/Platypus2-7B/hellaswag_10shot.json --device cuda --num_fewshot 10
```

MMLU:
```
python main.py --model hf-causal-experimental --model_args pretrained=garage-bAInd/Platypus2-7B,use_accelerate=True,dtype="bfloat16" --tasks hendrycksTest-* --batch_size 2 --no_cache --write_out --output_path results/Platypus2-7B/mmlu_5shot.json --device cuda --num_fewshot 5
```

TruthfulQA:
```
python main.py --model hf-causal-experimental --model_args pretrained=garage-bAInd/Platypus2-7B,use_accelerate=True,dtype="bfloat16" --tasks truthfulqa_mc --batch_size 2 --no_cache --write_out --output_path results/Platypus2-7B/truthfulqa_0shot.json --device cuda
```

### Limitations and bias

Llama 2 and fine-tuned variants are a new technology that carries risks with use. Testing conducted to date has been in English, and has not covered, nor could it cover, all scenarios.
For these reasons, as with all LLMs, Llama 2 and any fine-tuned variant's potential outputs cannot be predicted in advance, and the model may in some instances produce inaccurate, biased or otherwise objectionable responses to user prompts. Therefore, before deploying any applications of Llama 2 variants, developers should perform safety testing and tuning tailored to their specific applications of the model.

Please see the Responsible Use Guide available at https://ai.meta.com/llama/responsible-use-guide/

### Citations
```bibtex
@article{platypus2023,
  title={Platypus: Quick, Cheap, and Powerful Refinement of LLMs},
  author={Ariel N. Lee and Cole J. Hunter and Nataniel Ruiz},
  booktitle={arXiv preprint arxiv:2308.07317},
  year={2023}
}
```
```bibtex
@misc{touvron2023llama,
  title={Llama 2: Open Foundation and Fine-Tuned Chat Models},
  author={Hugo Touvron and Louis Martin and Kevin Stone and Peter Albert and Amjad Almahairi and Yasmine Babaei and Nikolay Bashlykov},
  year={2023},
  eprint={2307.09288},
  archivePrefix={arXiv},
}
```
```bibtex
@inproceedings{
  hu2022lora,
  title={Lo{RA}: Low-Rank Adaptation of Large Language Models},
  author={Edward J Hu and Yelong Shen and Phillip Wallis and Zeyuan Allen-Zhu and Yuanzhi Li and Shean Wang and Lu Wang and Weizhu Chen},
  booktitle={International Conference on Learning Representations},
  year={2022},
  url={https://openreview.net/forum?id=nZeVKeeFYf9}
}
```

# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_garage-bAInd__Platypus2-7B)

| Metric | Value |
|-----------------------|---------------------------|
| Avg. | 45.69 |
| ARC (25-shot) | 55.2 |
| HellaSwag (10-shot) | 78.84 |
| MMLU (5-shot) | 49.83 |
| TruthfulQA (0-shot) | 40.64 |
| Winogrande (5-shot) | 73.48 |
| GSM8K (5-shot) | 1.82 |
| DROP (3-shot) | 20.02 |
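For the GGUF files in this repo, a minimal local-inference sketch with llama-cpp-python, wiring in the prompt template shown above (the quant filename and sampling settings are illustrative, not a tested recipe):

```python
from llama_cpp import Llama

# Any quant from the table works; Q4_K_M is a common size/quality trade-off.
llm = Llama(model_path="Platypus2-7B.Q4_K_M.gguf", n_ctx=2048)

# Alpaca-style template from the "Prompt Template" section above.
prompt = (
    "### Instruction:\n\n"
    "Explain the difference between a list and a tuple in Python.\n\n"
    "### Response:\n"
)

out = llm(prompt, max_tokens=256, temperature=0.7, stop=["### Instruction:"])
print(out["choices"][0]["text"])
```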
lodrick-the-lafted/Fuselage-8B
lodrick-the-lafted
2024-05-11T03:15:43Z
72
2
transformers
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-05-10T23:38:25Z
---
license: apache-2.0
---

<img src="https://huggingface.co/lodrick-the-lafted/Fuselage-8B/resolve/main/fuselage.png">

Fuselage-8B

It's [Edgerunners/meta-llama-3-8b-instruct-hf-ortho-baukit-10fail-1000total](https://huggingface.co/Edgerunners/meta-llama-3-8b-instruct-hf-ortho-baukit-10fail-1000total), but with extra ablations where the harmless dataset was OpusStories and the harmful dataset was llm-attacks.
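For readers unfamiliar with the technique: the "ablations" above refer to orthogonalizing weights against a direction estimated from paired harmful/harmless prompt activations (the upstream repo does this with baukit). The sketch below is only a toy torch illustration of the projection step, with random stand-in activations; it is not the actual pipeline used for this model:

```python
import torch

hidden = 4096  # Llama-3-8B hidden size

# Toy stand-ins: mean residual-stream activations that, in the real recipe,
# come from hooked forward passes over the harmful (llm-attacks) and
# harmless (OpusStories) prompt sets.
mean_harmful = torch.randn(hidden)
mean_harmless = torch.randn(hidden)

# Candidate "refusal direction" = normalized difference of means.
direction = mean_harmful - mean_harmless
direction = direction / direction.norm()

# Orthogonalize a weight matrix that writes into the residual stream:
# W <- W - d d^T W, so the output can no longer carry a component along d.
W = torch.randn(hidden, hidden)
W_ablated = W - torch.outer(direction, direction) @ W

# Sanity check: ~0 up to float error.
print((direction @ W_ablated).abs().max())
```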
mradermacher/Mistral-9B-Instruct-GGUF
mradermacher
2024-05-11T03:12:35Z
37
0
transformers
[ "transformers", "gguf", "en", "base_model:cloudyu/Mistral-9B-Instruct", "base_model:quantized:cloudyu/Mistral-9B-Instruct", "license:cc", "endpoints_compatible", "region:us", "conversational" ]
null
2024-05-11T02:40:17Z
---
base_model: cloudyu/Mistral-9B-Instruct
language:
- en
library_name: transformers
license: cc
quantized_by: mradermacher
---

## About

<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->

static quants of https://huggingface.co/cloudyu/Mistral-9B-Instruct

<!-- provided-files -->

Weighted/imatrix quants do not seem to be available (from me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.

## Usage

If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including how to concatenate multi-part files.

## Provided Quants

(sorted by size, not necessarily quality. IQ-quants are often preferable over similar-sized non-IQ quants)

| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Mistral-9B-Instruct-GGUF/resolve/main/Mistral-9B-Instruct.Q2_K.gguf) | Q2_K | 3.5 | |
| [GGUF](https://huggingface.co/mradermacher/Mistral-9B-Instruct-GGUF/resolve/main/Mistral-9B-Instruct.IQ3_XS.gguf) | IQ3_XS | 3.8 | |
| [GGUF](https://huggingface.co/mradermacher/Mistral-9B-Instruct-GGUF/resolve/main/Mistral-9B-Instruct.Q3_K_S.gguf) | Q3_K_S | 4.0 | |
| [GGUF](https://huggingface.co/mradermacher/Mistral-9B-Instruct-GGUF/resolve/main/Mistral-9B-Instruct.IQ3_S.gguf) | IQ3_S | 4.0 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Mistral-9B-Instruct-GGUF/resolve/main/Mistral-9B-Instruct.IQ3_M.gguf) | IQ3_M | 4.2 | |
| [GGUF](https://huggingface.co/mradermacher/Mistral-9B-Instruct-GGUF/resolve/main/Mistral-9B-Instruct.Q3_K_M.gguf) | Q3_K_M | 4.5 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Mistral-9B-Instruct-GGUF/resolve/main/Mistral-9B-Instruct.Q3_K_L.gguf) | Q3_K_L | 4.8 | |
| [GGUF](https://huggingface.co/mradermacher/Mistral-9B-Instruct-GGUF/resolve/main/Mistral-9B-Instruct.IQ4_XS.gguf) | IQ4_XS | 5.0 | |
| [GGUF](https://huggingface.co/mradermacher/Mistral-9B-Instruct-GGUF/resolve/main/Mistral-9B-Instruct.Q4_K_S.gguf) | Q4_K_S | 5.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Mistral-9B-Instruct-GGUF/resolve/main/Mistral-9B-Instruct.Q4_K_M.gguf) | Q4_K_M | 5.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Mistral-9B-Instruct-GGUF/resolve/main/Mistral-9B-Instruct.Q5_K_S.gguf) | Q5_K_S | 6.3 | |
| [GGUF](https://huggingface.co/mradermacher/Mistral-9B-Instruct-GGUF/resolve/main/Mistral-9B-Instruct.Q5_K_M.gguf) | Q5_K_M | 6.5 | |
| [GGUF](https://huggingface.co/mradermacher/Mistral-9B-Instruct-GGUF/resolve/main/Mistral-9B-Instruct.Q6_K.gguf) | Q6_K | 7.5 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Mistral-9B-Instruct-GGUF/resolve/main/Mistral-9B-Instruct.Q8_0.gguf) | Q8_0 | 9.7 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/Mistral-9B-Instruct-GGUF/resolve/main/Mistral-9B-Instruct.f16.gguf) | f16 | 18.1 | 16 bpw, overkill |

Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better):

![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png)

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9

## FAQ / Model Request

See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other
model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
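Beyond those READMEs, a minimal end-to-end sketch for this repo: downloading one quant and running it with llama-cpp-python. The quant choice and context size are just examples, and multi-part files would first need concatenating as described in the linked READMEs:

```python
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

# Q4_K_M is the "fast, recommended" row in the table above.
path = hf_hub_download(
    repo_id="mradermacher/Mistral-9B-Instruct-GGUF",
    filename="Mistral-9B-Instruct.Q4_K_M.gguf",
)

llm = Llama(model_path=path, n_ctx=4096)
out = llm("Q: What is a GGUF file?\nA:", max_tokens=128)
print(out["choices"][0]["text"])
```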
LoneStriker/dolphin-2.9.1-llama-3-8b-GGUF
LoneStriker
2024-05-11T03:06:16Z
21
2
null
[ "gguf", "generated_from_trainer", "axolotl", "dataset:cognitivecomputations/Dolphin-2.9", "dataset:teknium/OpenHermes-2.5", "dataset:m-a-p/CodeFeedback-Filtered-Instruction", "dataset:cognitivecomputations/dolphin-coder", "dataset:cognitivecomputations/samantha-data", "dataset:microsoft/orca-math-word-problems-200k", "dataset:Locutusque/function-calling-chatml", "dataset:internlm/Agent-FLAN", "base_model:meta-llama/Meta-Llama-3-8B", "base_model:quantized:meta-llama/Meta-Llama-3-8B", "license:other", "endpoints_compatible", "region:us", "conversational" ]
null
2024-05-11T02:54:06Z
---
license: other
base_model: meta-llama/Meta-Llama-3-8B
tags:
- generated_from_trainer
- axolotl
model-index:
- name: out
  results: []
datasets:
- cognitivecomputations/Dolphin-2.9
- teknium/OpenHermes-2.5
- m-a-p/CodeFeedback-Filtered-Instruction
- cognitivecomputations/dolphin-coder
- cognitivecomputations/samantha-data
- microsoft/orca-math-word-problems-200k
- Locutusque/function-calling-chatml
- internlm/Agent-FLAN
---

# Dolphin 2.9.1 Llama 3 8b 🐬

Curated and trained by Eric Hartford, Lucas Atkins, Fernando Fernandes, and Cognitive Computations

Discord: https://discord.gg/8fbBeC7ZGx

<img src="https://cdn-uploads.huggingface.co/production/uploads/63111b2d88942700629f5771/ldkN1J0WIDQwU4vutGYiD.png" width="600" />

We have retrained our Llama-3-8b fine-tune to address behavioral issues in the initial 2.9 dataset. Specifically, SystemChat was causing the model to be *too* reliant on the system prompt. Additionally, it had an occasional quirk that would cause the model to overly reference the system prompt. We also found that generation length was at times insufficient for the task at hand. We identified the culprit as UltraChat. Accounting for these concerns, we removed SystemChat and UltraChat from the dataset. It is otherwise identical to dolphin-2.9.

Our appreciation for the sponsors of Dolphin 2.9.1:

- [Crusoe Cloud](https://crusoe.ai/) - provided excellent on-demand 8xL40S node

This model is based on Llama-3-8b and is governed by the [META LLAMA 3 COMMUNITY LICENSE AGREEMENT](LICENSE).

The base model has 8k context, and the full-weight fine-tuning used a 4k sequence length. Training took 1.5 days on an 8x L40S provided by Crusoe Cloud.

This model was trained FFT on all parameters, using the ChatML prompt template format.

Example:
```
<|im_start|>system
You are Dolphin, a helpful AI assistant.<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
```

Dolphin-2.9.1 has a variety of instruction, conversational, and coding skills. It also has initial agentic abilities and supports function calling.

Dolphin is uncensored. We have filtered the dataset to remove alignment and bias. This makes the model more compliant. You are advised to implement your own alignment layer before exposing the model as a service. It will be highly compliant with any requests, even unethical ones. Please read my blog post about uncensored models: https://erichartford.com/uncensored-models. You are responsible for any content you create using this model. Enjoy responsibly.

Dolphin is licensed according to Meta's Llama license. We grant permission for any use, including commercial, that is in accordance with Meta's Llama-3 license. Dolphin was trained on data generated from GPT4, among other models.
## Evals ![image/png](https://cdn-uploads.huggingface.co/production/uploads/63111b2d88942700629f5771/0pqSc8jsJlhBH8dcgpwE7.png) ## Training [<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/OpenAccess-AI-Collective/axolotl) <details><summary>See axolotl config</summary> axolotl version: `0.4.0` ```yaml base_model: meta-llama/Meta-Llama-3-8B model_type: AutoModelForCausalLM tokenizer_type: AutoTokenizer tokenizer_use_fast: false load_in_8bit: false load_in_4bit: false strict: false model_config: datasets: - path: /workspace/datasets/dolphin-2.9/dolphin201-sharegpt2.jsonl type: sharegpt conversation: chatml - path: /workspace/datasets/dolphin-2.9/dolphin-coder-translate-sharegpt2.jsonl type: sharegpt conversation: chatml - path: /workspace/datasets/dolphin-2.9/dolphin-coder-codegen-sharegpt2.jsonl type: sharegpt conversation: chatml - path: /workspace/datasets/dolphin-2.9/m-a-p_Code-Feedback-sharegpt-unfiltered.jsonl type: sharegpt conversation: chatml - path: /workspace/datasets/dolphin-2.9/m-a-p_CodeFeedback-Filtered-Instruction-sharegpt-unfiltered.jsonl type: sharegpt conversation: chatml - path: /workspace/datasets/dolphin-2.9/not_samantha_norefusals.jsonl type: sharegpt conversation: chatml - path: /workspace/datasets/dolphin-2.9/Orca-Math-resort-unfiltered.jsonl type: sharegpt conversation: chatml - path: /workspace/datasets/dolphin-2.9/agent_instruct_react_unfiltered.jsonl type: sharegpt conversation: chatml - path: /workspace/datasets/dolphin-2.9/toolbench_instruct_j1s1_3k_unfiltered.jsonl type: sharegpt conversation: chatml - path: /workspace/datasets/dolphin-2.9/toolbench_negative_unfiltered.jsonl type: sharegpt conversation: chatml - path: /workspace/datasets/dolphin-2.9/toolbench_react_10p_unfiltered.jsonl type: sharegpt conversation: chatml - path: /workspace/datasets/dolphin-2.9/toolbench_tflan_cot_30p_unfiltered.jsonl type: sharegpt conversation: chatml - path: /workspace/datasets/dolphin-2.9/openhermes200k_unfiltered.jsonl type: sharegpt conversation: chatml chat_template: chatml dataset_prepared_path: /workspace/datasets/dolphin-2.9/thingy val_set_size: 0.0002 output_dir: ./out sequence_len: 4096 sample_packing: true pad_to_sequence_len: true gradient_accumulation_steps: 4 micro_batch_size: 3 num_epochs: 3 logging_steps: 1 optimizer: adamw_8bit lr_scheduler: cosine learning_rate: 2e-5 wandb_project: dolphin-2.9-mixtral-8x22b wandb_watch: wandb_run_id: wandb_log_model: train_on_inputs: false group_by_length: false bf16: auto fp16: tf32: false gradient_checkpointing: true gradient_checkpointing_kwargs: use_reentrant: false early_stopping_patience: resume_from_checkpoint: local_rank: logging_steps: 1 xformers_attention: flash_attention: true saves_per_epoch: 4 save_total_limit: 2 save_steps: evals_per_epoch: 4 eval_sample_packing: false debug: deepspeed: deepspeed_configs/zero3_bf16.json weight_decay: 0.05 fsdp: fsdp_config: special_tokens: eos_token: "<|im_end|>" pad_token: "<|end_of_text|>" tokens: - "<|im_start|>" - "<|im_end|>" ``` </details><br> ### Framework versions - Transformers 4.40.0 - Pytorch 2.2.2+cu121 - Datasets 2.18.0 - Tokenizers 0.19.1
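Since this particular repository hosts GGUF quantizations of the model above, a minimal local-inference sketch with llama-cpp-python, reusing the ChatML template; the exact `.gguf` filename depends on which quant you download and is assumed here:

```python
from llama_cpp import Llama

# Filename is an assumption -- use whichever quant from this repo you downloaded.
llm = Llama(model_path="dolphin-2.9.1-llama-3-8b-Q4_K_M.gguf", n_ctx=4096)

# ChatML prompt, matching the template in the card above.
prompt = (
    "<|im_start|>system\n"
    "You are Dolphin, a helpful AI assistant.<|im_end|>\n"
    "<|im_start|>user\n"
    "Write a limerick about dolphins.<|im_end|>\n"
    "<|im_start|>assistant\n"
)

out = llm(prompt, max_tokens=256, stop=["<|im_end|>"])
print(out["choices"][0]["text"])
```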
pszemraj/bert-L-4_H-128_A-2-OCR-quality
pszemraj
2024-05-11T03:02:16Z
111
0
transformers
[ "transformers", "safetensors", "bert", "text-classification", "generated_from_trainer", "base_model:google/bert_uncased_L-4_H-128_A-2", "base_model:finetune:google/bert_uncased_L-4_H-128_A-2", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2024-05-11T02:55:00Z
--- license: apache-2.0 base_model: google/bert_uncased_L-4_H-128_A-2 tags: - generated_from_trainer metrics: - accuracy model-index: - name: bert_uncased_L-4_H-128_A-2-OCR-quality-classification-cls results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bert_uncased_L-4_H-128_A-2-OCR-quality-classification-cls This model is a fine-tuned version of [google/bert_uncased_L-4_H-128_A-2](https://huggingface.co/google/bert_uncased_L-4_H-128_A-2) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.0422 - Accuracy: 0.99 - Num Input Tokens Seen: 57341952 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 3e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 8 - total_train_batch_size: 64 - optimizer: Adam with betas=(0.9,0.99) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.05 - num_epochs: 2.0 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | Input Tokens Seen | |:-------------:|:------:|:----:|:---------------:|:--------:|:-----------------:| | 0.1123 | 0.2660 | 250 | 0.1202 | 0.974 | 8192000 | | 0.072 | 0.5321 | 500 | 0.0665 | 0.986 | 16384000 | | 0.0404 | 0.7981 | 750 | 0.0464 | 0.988 | 24576000 | | 0.0255 | 1.0641 | 1000 | 0.0428 | 0.99 | 32765952 | | 0.0253 | 1.3301 | 1250 | 0.0357 | 0.99 | 40957952 | | 0.0329 | 1.5962 | 1500 | 0.0438 | 0.986 | 49149952 | | 0.0435 | 1.8622 | 1750 | 0.0422 | 0.99 | 57341952 | ### Framework versions - Transformers 4.40.2 - Pytorch 2.2.0+cu121 - Datasets 2.19.1 - Tokenizers 0.19.1
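For anyone wanting to replicate a run like this, the hyperparameters above translate roughly into 🤗 `TrainingArguments` as in the sketch below. This mirrors the listed values, not the exact script used; model and dataset wiring are omitted:

```python
from transformers import TrainingArguments

# Effective batch size = 8 (per device) x 8 (gradient accumulation) = 64,
# matching the "total_train_batch_size: 64" reported above.
args = TrainingArguments(
    output_dir="bert-ocr-quality-cls",
    learning_rate=3e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    gradient_accumulation_steps=8,
    adam_beta1=0.9,
    adam_beta2=0.99,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    warmup_ratio=0.05,
    num_train_epochs=2.0,
    seed=42,
)
```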
RichardErkhov/speakleash_-_Bielik-7B-Instruct-v0.1-gguf
RichardErkhov
2024-05-11T02:50:41Z
27
0
null
[ "gguf", "arxiv:2005.01643", "arxiv:2309.11235", "arxiv:2006.09092", "endpoints_compatible", "region:us", "conversational" ]
null
2024-05-11T00:44:20Z
Quantization made by Richard Erkhov. [Github](https://github.com/RichardErkhov) [Discord](https://discord.gg/pvy7H8DZMG) [Request more models](https://github.com/RichardErkhov/quant_request) Bielik-7B-Instruct-v0.1 - GGUF - Model creator: https://huggingface.co/speakleash/ - Original model: https://huggingface.co/speakleash/Bielik-7B-Instruct-v0.1/ | Name | Quant method | Size | | ---- | ---- | ---- | | [Bielik-7B-Instruct-v0.1.Q2_K.gguf](https://huggingface.co/RichardErkhov/speakleash_-_Bielik-7B-Instruct-v0.1-gguf/blob/main/Bielik-7B-Instruct-v0.1.Q2_K.gguf) | Q2_K | 2.53GB | | [Bielik-7B-Instruct-v0.1.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/speakleash_-_Bielik-7B-Instruct-v0.1-gguf/blob/main/Bielik-7B-Instruct-v0.1.IQ3_XS.gguf) | IQ3_XS | 2.81GB | | [Bielik-7B-Instruct-v0.1.IQ3_S.gguf](https://huggingface.co/RichardErkhov/speakleash_-_Bielik-7B-Instruct-v0.1-gguf/blob/main/Bielik-7B-Instruct-v0.1.IQ3_S.gguf) | IQ3_S | 2.96GB | | [Bielik-7B-Instruct-v0.1.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/speakleash_-_Bielik-7B-Instruct-v0.1-gguf/blob/main/Bielik-7B-Instruct-v0.1.Q3_K_S.gguf) | Q3_K_S | 2.95GB | | [Bielik-7B-Instruct-v0.1.IQ3_M.gguf](https://huggingface.co/RichardErkhov/speakleash_-_Bielik-7B-Instruct-v0.1-gguf/blob/main/Bielik-7B-Instruct-v0.1.IQ3_M.gguf) | IQ3_M | 3.06GB | | [Bielik-7B-Instruct-v0.1.Q3_K.gguf](https://huggingface.co/RichardErkhov/speakleash_-_Bielik-7B-Instruct-v0.1-gguf/blob/main/Bielik-7B-Instruct-v0.1.Q3_K.gguf) | Q3_K | 3.28GB | | [Bielik-7B-Instruct-v0.1.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/speakleash_-_Bielik-7B-Instruct-v0.1-gguf/blob/main/Bielik-7B-Instruct-v0.1.Q3_K_M.gguf) | Q3_K_M | 3.28GB | | [Bielik-7B-Instruct-v0.1.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/speakleash_-_Bielik-7B-Instruct-v0.1-gguf/blob/main/Bielik-7B-Instruct-v0.1.Q3_K_L.gguf) | Q3_K_L | 3.56GB | | [Bielik-7B-Instruct-v0.1.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/speakleash_-_Bielik-7B-Instruct-v0.1-gguf/blob/main/Bielik-7B-Instruct-v0.1.IQ4_XS.gguf) | IQ4_XS | 3.67GB | | [Bielik-7B-Instruct-v0.1.Q4_0.gguf](https://huggingface.co/RichardErkhov/speakleash_-_Bielik-7B-Instruct-v0.1-gguf/blob/main/Bielik-7B-Instruct-v0.1.Q4_0.gguf) | Q4_0 | 3.83GB | | [Bielik-7B-Instruct-v0.1.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/speakleash_-_Bielik-7B-Instruct-v0.1-gguf/blob/main/Bielik-7B-Instruct-v0.1.IQ4_NL.gguf) | IQ4_NL | 3.87GB | | [Bielik-7B-Instruct-v0.1.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/speakleash_-_Bielik-7B-Instruct-v0.1-gguf/blob/main/Bielik-7B-Instruct-v0.1.Q4_K_S.gguf) | Q4_K_S | 3.86GB | | [Bielik-7B-Instruct-v0.1.Q4_K.gguf](https://huggingface.co/RichardErkhov/speakleash_-_Bielik-7B-Instruct-v0.1-gguf/blob/main/Bielik-7B-Instruct-v0.1.Q4_K.gguf) | Q4_K | 4.07GB | | [Bielik-7B-Instruct-v0.1.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/speakleash_-_Bielik-7B-Instruct-v0.1-gguf/blob/main/Bielik-7B-Instruct-v0.1.Q4_K_M.gguf) | Q4_K_M | 4.07GB | | [Bielik-7B-Instruct-v0.1.Q4_1.gguf](https://huggingface.co/RichardErkhov/speakleash_-_Bielik-7B-Instruct-v0.1-gguf/blob/main/Bielik-7B-Instruct-v0.1.Q4_1.gguf) | Q4_1 | 4.24GB | | [Bielik-7B-Instruct-v0.1.Q5_0.gguf](https://huggingface.co/RichardErkhov/speakleash_-_Bielik-7B-Instruct-v0.1-gguf/blob/main/Bielik-7B-Instruct-v0.1.Q5_0.gguf) | Q5_0 | 4.65GB | | [Bielik-7B-Instruct-v0.1.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/speakleash_-_Bielik-7B-Instruct-v0.1-gguf/blob/main/Bielik-7B-Instruct-v0.1.Q5_K_S.gguf) | Q5_K_S | 4.65GB | | 
[Bielik-7B-Instruct-v0.1.Q5_K.gguf](https://huggingface.co/RichardErkhov/speakleash_-_Bielik-7B-Instruct-v0.1-gguf/blob/main/Bielik-7B-Instruct-v0.1.Q5_K.gguf) | Q5_K | 4.78GB |
| [Bielik-7B-Instruct-v0.1.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/speakleash_-_Bielik-7B-Instruct-v0.1-gguf/blob/main/Bielik-7B-Instruct-v0.1.Q5_K_M.gguf) | Q5_K_M | 4.78GB |
| [Bielik-7B-Instruct-v0.1.Q5_1.gguf](https://huggingface.co/RichardErkhov/speakleash_-_Bielik-7B-Instruct-v0.1-gguf/blob/main/Bielik-7B-Instruct-v0.1.Q5_1.gguf) | Q5_1 | 5.07GB |
| [Bielik-7B-Instruct-v0.1.Q6_K.gguf](https://huggingface.co/RichardErkhov/speakleash_-_Bielik-7B-Instruct-v0.1-gguf/blob/main/Bielik-7B-Instruct-v0.1.Q6_K.gguf) | Q6_K | 5.53GB |

Original model description:
---
license: cc-by-nc-4.0
language:
- pl
library_name: transformers
tags:
- finetuned
inference:
  parameters:
    temperature: 0.6
widget:
- messages:
  - role: user
    content: Co przedstawia polskie godło?
---

<p align="center">
  <img src="https://huggingface.co/speakleash/Bielik-7B-Instruct-v0.1/raw/main/speakleash_cyfronet.png">
</p>

# Bielik-7B-Instruct-v0.1

The Bielik-7B-Instruct-v0.1 is an instruct fine-tuned version of the [Bielik-7B-v0.1](https://huggingface.co/speakleash/Bielik-7B-v0.1). The aforementioned model stands as a testament to the unique collaboration between the open-science/open-source project SpeakLeash and the High Performance Computing (HPC) center: ACK Cyfronet AGH. Developed and trained on Polish text corpora, which have been cherry-picked and processed by the SpeakLeash team, this endeavor leverages Polish large-scale computing infrastructure, specifically within the PLGrid environment, and more precisely, the HPC center ACK Cyfronet AGH. The creation and training of the Bielik-7B-Instruct-v0.1 was propelled by the support of computational grant number PLG/2024/016951, conducted on the Helios supercomputer, enabling the use of cutting-edge technology and computational resources essential for large-scale machine learning processes. As a result, the model exhibits an exceptional ability to understand and process the Polish language, providing accurate responses and performing a variety of linguistic tasks with high precision.

[We have prepared quantized versions of the model as well as MLX format.](#quant-and-mlx-versions)

## Model

The [SpeakLeash](https://speakleash.org/) team is working on their own set of instructions in Polish, which is continuously being expanded and refined by annotators. A portion of these instructions, which had been manually verified and corrected, has been utilized for training purposes. Moreover, due to the limited availability of high-quality instructions in Polish, publicly accessible collections of instructions in English were used - [OpenHermes-2.5](https://huggingface.co/datasets/teknium/OpenHermes-2.5) and [orca-math-word-problems-200k](https://huggingface.co/datasets/microsoft/orca-math-word-problems-200k), which accounted for half of the instructions used in training. The instructions varied in quality, leading to a deterioration in the model's performance.
To counteract this while still allowing ourselves to utilize the aforementioned datasets, several improvements were introduced:

* Weighted token-level loss - a strategy inspired by [offline reinforcement learning](https://arxiv.org/abs/2005.01643) and [C-RLFT](https://arxiv.org/abs/2309.11235)
* Adaptive learning rate inspired by the study on [Learning Rates as a Function of Batch Size](https://arxiv.org/abs/2006.09092)
* Masked user instructions

Bielik-7B-Instruct-v0.1 has been trained with the use of an original open-source framework called [ALLaMo](https://github.com/chrisociepa/allamo) implemented by [Krzysztof Ociepa](https://www.linkedin.com/in/krzysztof-ociepa-44886550/). This framework allows users to train language models with architectures similar to LLaMA and Mistral in a fast and efficient way.

### Model description:

* **Developed by:** [SpeakLeash](https://speakleash.org/)
* **Language:** Polish
* **Model type:** causal decoder-only
* **Finetuned from:** [Bielik-7B-v0.1](https://huggingface.co/speakleash/Bielik-7B-v0.1)
* **License:** CC BY NC 4.0 (non-commercial use)
* **Model ref:** speakleash:e38140bea0d48f1218540800bbc67e89

## Training

* Framework: [ALLaMo](https://github.com/chrisociepa/allamo)
* Visualizations: [W&B](https://wandb.ai)

<p align="center">
  <img src="https://huggingface.co/speakleash/Bielik-7B-Instruct-v0.1/raw/main/sft_train_loss.png">
</p>
<p align="center">
  <img src="https://huggingface.co/speakleash/Bielik-7B-Instruct-v0.1/raw/main/sft_train_ppl.png">
</p>
<p align="center">
  <img src="https://huggingface.co/speakleash/Bielik-7B-Instruct-v0.1/raw/main/sft_train_lr.png">
</p>

### Training hyperparameters:

| **Hyperparameter** | **Value** |
|-----------------------------|------------------|
| Context length | 4096 |
| Micro Batch Size | 1 |
| Batch Size | up to 4194304 |
| Learning Rate (cosine, adaptive) | 7e-6 -> 6e-7 |
| Warmup Iterations | 50 |
| All Iterations | 55440 |
| Optimizer | AdamW |
| β1, β2 | 0.9, 0.95 |
| Adam_eps | 1e−8 |
| Weight Decay | 0.05 |
| Grad Clip | 1.0 |
| Precision | bfloat16 (mixed) |

### Quant and MLX versions:

We know that some people want to explore smaller models or don't have the resources to run a full model. Therefore, we have prepared quantized versions of the Bielik-7B-Instruct-v0.1 model. We are also mindful of Apple Silicon.

Quantized versions (for non-GPU / weaker GPU):
- https://huggingface.co/speakleash/Bielik-7B-Instruct-v0.1-GGUF
- https://huggingface.co/speakleash/Bielik-7B-Instruct-v0.1-GPTQ
- https://huggingface.co/speakleash/Bielik-7B-Instruct-v0.1-AWQ
- https://huggingface.co/speakleash/Bielik-7B-Instruct-v0.1-EXL2
- https://huggingface.co/speakleash/Bielik-7B-Instruct-v0.1-3bit-HQQ

For Apple Silicon:
- https://huggingface.co/speakleash/Bielik-7B-Instruct-v0.1-MLX

### Instruction format

In order to leverage instruction fine-tuning, your prompt should be surrounded by `[INST]` and `[/INST]` tokens. The very first instruction should start with the beginning-of-sentence token. The generated completion will be finished by the end-of-sentence token.

E.g.
```
prompt = "<s>[INST] Jakie mamy pory roku?
[/INST]" completion = "W Polsce mamy 4 pory roku: wiosna, lato, jesień i zima.</s>" ``` This format is available as a [chat template](https://huggingface.co/docs/transformers/main/chat_templating) via the `apply_chat_template()` method: ```python from transformers import AutoModelForCausalLM, AutoTokenizer device = "cuda" # the device to load the model onto model_name = "speakleash/Bielik-7B-Instruct-v0.1" tokenizer = AutoTokenizer.from_pretrained(model_name) model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype=torch.bfloat16) messages = [ {"role": "system", "content": "Odpowiadaj krótko, precyzyjnie i wyłącznie w języku polskim."}, {"role": "user", "content": "Jakie mamy pory roku w Polsce?"}, {"role": "assistant", "content": "W Polsce mamy 4 pory roku: wiosna, lato, jesień i zima."}, {"role": "user", "content": "Która jest najcieplejsza?"} ] input_ids = tokenizer.apply_chat_template(messages, return_tensors="pt") model_inputs = input_ids.to(device) model.to(device) generated_ids = model.generate(model_inputs, max_new_tokens=1000, do_sample=True) decoded = tokenizer.batch_decode(generated_ids) print(decoded[0]) ``` If for some reason you are unable to use `tokenizer.apply_chat_template`, the following code will enable you to generate a correct prompt: ```python def chat_template(message, history, system_prompt): prompt_builder = ["<s>[INST] "] if system_prompt: prompt_builder.append(f"<<SYS>>\n{system_prompt}\n<</SYS>>\n\n") for human, assistant in history: prompt_builder.append(f"{human} [/INST] {assistant}</s>[INST] ") prompt_builder.append(f"{message} [/INST]") return ''.join(prompt_builder) system_prompt = "Odpowiadaj krótko, precyzyjnie i wyłącznie w języku polskim." history = [ ("Jakie mamy pory roku w Polsce?", "W Polsce mamy 4 pory roku: wiosna, lato, jesień i zima.") ] message = "Która jest najcieplejsza?" prompt = chat_template(message, history, system_prompt) ``` ## Evaluation Models have been evaluated on [Open PL LLM Leaderboard](https://huggingface.co/spaces/speakleash/open_pl_llm_leaderboard) 5-shot. The benchmark evaluates models in NLP tasks like sentiment analysis, categorization, text classification but does not test chatting skills. 
Here are presented: - Average - average score among all tasks normalized by baseline scores - Reranking - reranking task, commonly used in RAG - Reader (Generator) - open book question answering task, commonly used in RAG - Perplexity (lower is better) - as a bonus, does not correlate with other scores and should not be used for model comparison As of April 3, 2024, the following table showcases the current scores of pretrained and continuously pretrained models according to the Open PL LLM Leaderboard, evaluated in a 5-shot setting: | | Average | RAG Reranking | RAG Reader | Perplexity | |--------------------------------------------------------------------------------------|----------:|--------------:|-----------:|-----------:| | **7B parameters models:** | | | | | | Baseline (majority class) | 0.00 | 53.36 | - | - | | Voicelab/trurl-2-7b | 18.85 | 60.67 | 77.19 | 1098.88 | | meta-llama/Llama-2-7b-chat-hf | 21.04 | 54.65 | 72.93 | 4018.74 | | mistralai/Mistral-7B-Instruct-v0.1 | 26.42 | 56.35 | 73.68 | 6909.94 | | szymonrucinski/Curie-7B-v1 | 26.72 | 55.58 | 85.19 | 389.17 | | HuggingFaceH4/zephyr-7b-beta | 33.15 | 71.65 | 71.27 | 3613.14 | | HuggingFaceH4/zephyr-7b-alpha | 33.97 | 71.47 | 73.35 | 4464.45 | | internlm/internlm2-chat-7b-sft | 36.97 | 73.22 | 69.96 | 4269.63 | | internlm/internlm2-chat-7b | 37.64 | 72.29 | 71.17 | 3892.50 | | [Bielik-7B-Instruct-v0.1](https://huggingface.co/speakleash/Bielik-7B-Instruct-v0.1) | 39.28 | 61.89 | **86.00** | 277.92 | | mistralai/Mistral-7B-Instruct-v0.2 | 40.29 | 72.58 | 79.39 | 2088.08 | | teknium/OpenHermes-2.5-Mistral-7B | 42.64 | 70.63 | 80.25 | 1463.00 | | openchat/openchat-3.5-1210 | 44.17 | 71.76 | 82.15 | 1923.83 | | speakleash/mistral_7B-v2/spkl-all_sft_v2/e1_base/spkl-all_2e6-e1_70c70cc6 (experimental) | 45.44 | 71.27 | 91.50 | 279.24 | | Nexusflow/Starling-LM-7B-beta | 45.69 | 74.58 | 81.22 | 1161.54 | | openchat/openchat-3.5-0106 | 47.32 | 74.71 | 83.60 | 1106.56 | | berkeley-nest/Starling-LM-7B-alpha | **47.46** | **75.73** | 82.86 | 1438.04 | | | | | | | | **Models with different sizes:** | | | | | | Azurro/APT3-1B-Instruct-v1 (1B) | -13.80 | 52.11 | 12.23 | 739.09 | | Voicelab/trurl-2-13b-academic (13B) | 29.45 | 68.19 | 79.88 | 733.91 | | upstage/SOLAR-10.7B-Instruct-v1.0 (10.7B) | 46.07 | 76.93 | 82.86 | 789.58 | | | | | | | | **7B parameters pretrained and continously pretrained models:** | | | | | | OPI-PG/Qra-7b | 11.13 | 54.40 | 75.25 | 203.36 | | meta-llama/Llama-2-7b-hf | 12.73 | 54.02 | 77.92 | 850.45 | | internlm/internlm2-base-7b | 20.68 | 52.39 | 69.85 | 3110.92 | | [Bielik-7B-v0.1](https://huggingface.co/speakleash/Bielik-7B-v0.1) | 29.38 | 62.13 | **88.39** | 123.31 | | mistralai/Mistral-7B-v0.1 | 30.67 | 60.35 | 85.39 | 857.32 | | internlm/internlm2-7b | 33.03 | 69.39 | 73.63 | 5498.23 | | alpindale/Mistral-7B-v0.2-hf | 33.05 | 60.23 | 85.21 | 932.60 | | speakleash/mistral-apt3-7B/spi-e0_hf (experimental) | 35.50 | 62.14 | **87.48** | 132.78 | SpeakLeash models have one of the best scores in the RAG Reader task. We have managed to increase Average score by almost 9 pp. in comparison to Mistral-7B-v0.1. In our subjective evaluations of chatting skills SpeakLeash models perform better than other models with higher Average scores. The results in the above table were obtained without utilizing instruction templates for instructional models, instead treating them like base models. This approach could skew the results, as instructional models are optimized with specific instructions in mind. 
## Limitations and Biases Bielik-7B-Instruct-v0.1 is a quick demonstration that the base model can be easily fine-tuned to achieve compelling and promising performance. It does not have any moderation mechanisms. We're looking forward to engaging with the community in ways to make the model respect guardrails, allowing for deployment in environments requiring moderated outputs. Bielik-7B-Instruct-v0.1 can produce factually incorrect output, and should not be relied on to produce factually accurate data. Bielik-7B-Instruct-v0.1 was trained on various public datasets. While great efforts have been taken to clear the training data, it is possible that this model can generate lewd, false, biased or otherwise offensive outputs. ## License Because of an unclear legal situation, we have decided to publish the model under CC BY NC 4.0 license - it allows for non-commercial use. The model can be used for scientific purposes and privately, as long as the license conditions are met. ## Citation Please cite this model using the following format: ``` @misc{Bielik7Bv01, title = {Introducing Bielik-7B-Instruct-v0.1: Instruct Polish Language Model}, author = {Ociepa, Krzysztof and Flis, Łukasz and Wróbel, Krzysztof and Kondracki, Sebastian and {SpeakLeash Team} and {Cyfronet Team}}, year = {2024}, url = {https://huggingface.co/speakleash/Bielik-7B-Instruct-v0.1}, note = {Accessed: 2024-04-01}, % change this date urldate = {2024-04-01} % change this date } ``` ## Responsible for training the model * [Krzysztof Ociepa](https://www.linkedin.com/in/krzysztof-ociepa-44886550/)<sup>SpeakLeash</sup> - team leadership, conceptualizing, data preparation, process optimization and oversight of training * [Łukasz Flis](https://www.linkedin.com/in/lukasz-flis-0a39631/)<sup>Cyfronet AGH</sup> - coordinating and supervising the training * [Krzysztof Wróbel](https://www.linkedin.com/in/wrobelkrzysztof/)<sup>SpeakLeash</sup> - benchmarks * [Sebastian Kondracki](https://www.linkedin.com/in/sebastian-kondracki/)<sup>SpeakLeash</sup> - coordinating and preparation of instructions * [Maria Filipkowska](https://www.linkedin.com/in/maria-filipkowska/)<sup>SpeakLeash</sup> - preparation of instructions * [Paweł Kiszczak](https://www.linkedin.com/in/paveu-kiszczak/)<sup>SpeakLeash</sup> - preparation of instructions * [Adrian Gwoździej](https://www.linkedin.com/in/adrgwo/)<sup>SpeakLeash</sup> - data quality and instructions cleaning * [Igor Ciuciura](https://www.linkedin.com/in/igor-ciuciura-1763b52a6/)<sup>SpeakLeash</sup> - instructions cleaning * [Jacek Chwiła](https://www.linkedin.com/in/jacek-chwila/)<sup>SpeakLeash</sup> - instructions cleaning * [Remigiusz Kinas](https://www.linkedin.com/in/remigiusz-kinas/)<sup>SpeakLeash</sup> - providing quantized models * [Szymon Baczyński](https://www.linkedin.com/in/szymon-baczynski/)<sup>SpeakLeash</sup> - providing quantized models The model could not have been created without the commitment and work of the entire SpeakLeash team, whose contribution is invaluable. Thanks to the hard work of many individuals, it was possible to gather a large amount of content in Polish and establish collaboration between the open-science SpeakLeash project and the HPC center: ACK Cyfronet AGH. 
Individuals who contributed to the creation of the model through their commitment to the open-science SpeakLeash project: [Grzegorz Urbanowicz](https://www.linkedin.com/in/grzegorz-urbanowicz-05823469/), [Paweł Cyrta](https://www.linkedin.com/in/cyrta), [Jan Maria Kowalski](https://www.linkedin.com/in/janmariakowalski/), [Karol Jezierski](https://www.linkedin.com/in/karol-jezierski/), [Kamil Nonckiewicz](https://www.linkedin.com/in/kamil-nonckiewicz/), [Izabela Babis](https://www.linkedin.com/in/izabela-babis-2274b8105/), [Nina Babis](https://www.linkedin.com/in/nina-babis-00055a140/), [Waldemar Boszko](https://www.linkedin.com/in/waldemarboszko), and many other wonderful researchers and enthusiasts of the AI world. Members of the ACK Cyfronet AGH team providing valuable support and expertise: [Szymon Mazurek](https://www.linkedin.com/in/sz-mazurek-ai/). ## Contact Us If you have any questions or suggestions, please use the discussion tab. If you want to contact us directly, join our [Discord SpeakLeash](https://discord.gg/3G9DVM39).
vyshnavid/emotions_mteb_qlora_2_im
vyshnavid
2024-05-11T02:37:33Z
0
0
transformers
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2024-05-11T02:37:15Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
karoldobiczek/roberta-large-fomc
karoldobiczek
2024-05-11T02:37:28Z
108
0
transformers
[ "transformers", "safetensors", "roberta", "text-classification", "generated_from_trainer", "base_model:FacebookAI/roberta-large", "base_model:finetune:FacebookAI/roberta-large", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2024-05-08T12:04:18Z
--- license: mit base_model: roberta-large tags: - generated_from_trainer metrics: - accuracy model-index: - name: roberta-large-fomc results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # roberta-large-fomc This model is a fine-tuned version of [roberta-large](https://huggingface.co/roberta-large) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.7874 - Accuracy: 0.6660 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:------:|:----:|:---------------:|:--------:| | No log | 0.0083 | 1 | 1.0582 | 0.4980 | | 1.0574 | 0.2149 | 26 | 1.0428 | 0.4980 | | 1.0702 | 0.4215 | 51 | 1.0500 | 0.4980 | | 1.1065 | 0.6281 | 76 | 1.0374 | 0.4980 | | 1.0241 | 0.8347 | 101 | 1.0391 | 0.4980 | | 1.0324 | 1.0 | 121 | 1.0191 | 0.4980 | | 1.0324 | 1.0413 | 126 | 1.0097 | 0.4980 | | 0.9751 | 1.2479 | 151 | 1.0542 | 0.4737 | | 1.0134 | 1.4545 | 176 | 0.9746 | 0.5931 | | 0.9276 | 1.6612 | 201 | 0.8633 | 0.5648 | | 0.8469 | 1.8678 | 226 | 0.7729 | 0.6538 | | 0.7992 | 2.0 | 242 | 0.7874 | 0.6660 | | 0.8853 | 2.0744 | 251 | 0.8597 | 0.6680 | | 0.6466 | 2.2810 | 276 | 0.7767 | 0.6498 | | 0.778 | 2.4876 | 301 | 1.0588 | 0.6498 | | 0.7202 | 2.6942 | 326 | 0.7493 | 0.6721 | | 0.7108 | 2.9008 | 351 | 0.8892 | 0.6397 | | 0.6354 | 3.0 | 363 | 0.8265 | 0.6579 | | 0.7704 | 3.1074 | 376 | 0.7833 | 0.6781 | | 0.6867 | 3.3140 | 401 | 0.9702 | 0.6478 | | 0.6973 | 3.5207 | 426 | 1.0300 | 0.6700 | | 0.6682 | 3.7273 | 451 | 0.8206 | 0.6781 | | 0.6605 | 3.9339 | 476 | 0.8862 | 0.6822 | | 0.8521 | 4.0 | 484 | 0.8093 | 0.6316 | | 0.6442 | 4.1405 | 501 | 0.9483 | 0.6437 | | 0.577 | 4.3471 | 526 | 0.8860 | 0.6883 | | 0.5252 | 4.5537 | 551 | 0.8797 | 0.7045 | | 0.5274 | 4.7603 | 576 | 0.7289 | 0.7024 | | 0.467 | 4.9669 | 601 | 0.8224 | 0.6903 | | 0.467 | 5.0 | 605 | 0.8218 | 0.6903 | ### Framework versions - Transformers 4.40.2 - Pytorch 1.12.0 - Datasets 2.19.1 - Tokenizers 0.19.1
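Usage details are not documented above; a minimal inference sketch with the standard pipeline API. Note the label names depend on this checkpoint's config and are not documented here, so inspect the output (or `model.config.id2label`) rather than assuming specific class names:

```python
from transformers import pipeline

clf = pipeline("text-classification", model="karoldobiczek/roberta-large-fomc")

sentence = "The Committee decided to raise the target range for the federal funds rate."
print(clf(sentence))
# e.g. [{'label': 'LABEL_0', 'score': 0.87}] -- label ids map to classes
# via the model config's id2label mapping.
```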
saaduddinM/Mamba1.4B_mul_large
saaduddinM
2024-05-11T02:33:12Z
0
0
transformers
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2024-05-09T12:32:33Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
EmnaFazaa/layoutlmv3-financial-document-classification
EmnaFazaa
2024-05-11T02:19:58Z
47
0
transformers
[ "transformers", "pytorch", "layoutlmv3", "text-classification", "image-classification", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-classification
2024-03-23T22:44:38Z
--- pipeline_tag: image-classification ---
NadunAnjanaka/Llama-2-7b-chat-Counsellor
NadunAnjanaka
2024-05-11T02:19:18Z
1
0
transformers
[ "transformers", "pytorch", "llama", "text-generation", "conversational", "dataset:nbertagnolli/counsel-chat", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-05-01T04:37:16Z
---
datasets:
- nbertagnolli/counsel-chat
license: llama2
language:
- en
tags:
- medical
---

This is an LLM fine-tuned for Early Identification and Virtual Counselling of Mental Health Issues.

Disclaimer: Please note that this was developed for our undergraduate electives project at the Faculty of Medicine, University of Colombo, Sri Lanka, and is intended for educational purposes only.

Thank you.
yifanxie/noisy-partridge
yifanxie
2024-05-11T02:14:25Z
140
0
transformers
[ "transformers", "safetensors", "gemma", "text-generation", "gpt", "llm", "large language model", "h2o-llmstudio", "en", "autotrain_compatible", "text-generation-inference", "region:us" ]
text-generation
2024-05-11T02:12:30Z
--- language: - en library_name: transformers tags: - gpt - llm - large language model - h2o-llmstudio inference: false thumbnail: https://h2o.ai/etc.clientlibs/h2o/clientlibs/clientlib-site/resources/images/favicon.ico --- # Model Card ## Summary This model was trained using [H2O LLM Studio](https://github.com/h2oai/h2o-llmstudio). - Base model: [google/gemma-2b](https://huggingface.co/google/gemma-2b) ## Usage To use the model with the `transformers` library on a machine with GPUs, first make sure you have the `transformers` library installed. ```bash pip install transformers==4.40.1 ``` Also make sure you are providing your Hugging Face token to the pipeline if the model is in a private repo. - Either leave `token=True` in the `pipeline` and log in to `huggingface_hub` by running ```python import huggingface_hub huggingface_hub.login(<ACCESS_TOKEN>) ``` - Or directly pass your <ACCESS_TOKEN> to `token` in the `pipeline` ```python from transformers import pipeline generate_text = pipeline( model="yifanxie/noisy-partridge", torch_dtype="auto", trust_remote_code=True, use_fast=True, device_map={"": "cuda:0"}, token=True, ) # generate configuration can be modified to your needs # generate_text.model.generation_config.min_new_tokens = 2 # generate_text.model.generation_config.max_new_tokens = 256 # generate_text.model.generation_config.do_sample = False # generate_text.model.generation_config.num_beams = 1 # generate_text.model.generation_config.temperature = float(0.0) # generate_text.model.generation_config.repetition_penalty = float(1.0) res = generate_text( "Why is drinking water so healthy?", renormalize_logits=True ) print(res[0]["generated_text"]) ``` You can print a sample prompt after the preprocessing step to see how it is fed to the tokenizer: ```python print(generate_text.preprocess("Why is drinking water so healthy?")["prompt_text"]) ``` ```bash <|prompt|>Why is drinking water so healthy?<eos><|answer|> ``` Alternatively, you can download [h2oai_pipeline.py](h2oai_pipeline.py), store it alongside your notebook, and construct the pipeline yourself from the loaded model and tokenizer. If the model and the tokenizer are fully supported in the `transformers` package, this will allow you to set `trust_remote_code=False`.
```python from h2oai_pipeline import H2OTextGenerationPipeline from transformers import AutoModelForCausalLM, AutoTokenizer tokenizer = AutoTokenizer.from_pretrained( "yifanxie/noisy-partridge", use_fast=True, padding_side="left", trust_remote_code=True, ) model = AutoModelForCausalLM.from_pretrained( "yifanxie/noisy-partridge", torch_dtype="auto", device_map={"": "cuda:0"}, trust_remote_code=True, ) generate_text = H2OTextGenerationPipeline(model=model, tokenizer=tokenizer) # generate configuration can be modified to your needs # generate_text.model.generation_config.min_new_tokens = 2 # generate_text.model.generation_config.max_new_tokens = 256 # generate_text.model.generation_config.do_sample = False # generate_text.model.generation_config.num_beams = 1 # generate_text.model.generation_config.temperature = float(0.0) # generate_text.model.generation_config.repetition_penalty = float(1.0) res = generate_text( "Why is drinking water so healthy?", renormalize_logits=True ) print(res[0]["generated_text"]) ``` You may also construct the pipeline from the loaded model and tokenizer yourself and consider the preprocessing steps: ```python from transformers import AutoModelForCausalLM, AutoTokenizer model_name = "yifanxie/noisy-partridge" # either local folder or huggingface model name # Important: The prompt needs to be in the same format the model was trained with. # You can find an example prompt in the experiment logs. prompt = "<|prompt|>How are you?<eos><|answer|>" tokenizer = AutoTokenizer.from_pretrained( model_name, use_fast=True, trust_remote_code=True, ) model = AutoModelForCausalLM.from_pretrained( model_name, torch_dtype="auto", device_map={"": "cuda:0"}, trust_remote_code=True, ) model.cuda().eval() inputs = tokenizer(prompt, return_tensors="pt", add_special_tokens=False).to("cuda") # generate configuration can be modified to your needs # model.generation_config.min_new_tokens = 2 # model.generation_config.max_new_tokens = 256 # model.generation_config.do_sample = False # model.generation_config.num_beams = 1 # model.generation_config.temperature = float(0.0) # model.generation_config.repetition_penalty = float(1.0) tokens = model.generate( input_ids=inputs["input_ids"], attention_mask=inputs["attention_mask"], renormalize_logits=True )[0] tokens = tokens[inputs["input_ids"].shape[1]:] answer = tokenizer.decode(tokens, skip_special_tokens=True) print(answer) ``` ## Quantization and sharding You can load the models using quantization by specifying ```load_in_8bit=True``` or ```load_in_4bit=True```. Also, sharding on multiple GPUs is possible by setting ```device_map=auto```. 
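As an illustrative sketch of the quantized, sharded loading described above (assuming the `bitsandbytes` and `accelerate` packages are installed; `load_in_4bit=True` can be swapped for `load_in_8bit=True`):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "yifanxie/noisy-partridge"

tokenizer = AutoTokenizer.from_pretrained(model_name, use_fast=True)

# load_in_4bit quantizes the weights via bitsandbytes;
# device_map="auto" shards the layers across all visible GPUs
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    load_in_4bit=True,
    device_map="auto",
)
```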
## Model Architecture ``` GemmaForCausalLM( (model): GemmaModel( (embed_tokens): Embedding(256000, 2048, padding_idx=0) (layers): ModuleList( (0-17): 18 x GemmaDecoderLayer( (self_attn): GemmaSdpaAttention( (q_proj): Linear(in_features=2048, out_features=2048, bias=False) (k_proj): Linear(in_features=2048, out_features=256, bias=False) (v_proj): Linear(in_features=2048, out_features=256, bias=False) (o_proj): Linear(in_features=2048, out_features=2048, bias=False) (rotary_emb): GemmaRotaryEmbedding() ) (mlp): GemmaMLP( (gate_proj): Linear(in_features=2048, out_features=16384, bias=False) (up_proj): Linear(in_features=2048, out_features=16384, bias=False) (down_proj): Linear(in_features=16384, out_features=2048, bias=False) (act_fn): PytorchGELUTanh() ) (input_layernorm): GemmaRMSNorm() (post_attention_layernorm): GemmaRMSNorm() ) ) (norm): GemmaRMSNorm() ) (lm_head): Linear(in_features=2048, out_features=256000, bias=False) ) ``` ## Model Configuration This model was trained using H2O LLM Studio and with the configuration in [cfg.yaml](cfg.yaml). Visit [H2O LLM Studio](https://github.com/h2oai/h2o-llmstudio) to learn how to train your own large language models. ## Disclaimer Please read this disclaimer carefully before using the large language model provided in this repository. Your use of the model signifies your agreement to the following terms and conditions. - Biases and Offensiveness: The large language model is trained on a diverse range of internet text data, which may contain biased, racist, offensive, or otherwise inappropriate content. By using this model, you acknowledge and accept that the generated content may sometimes exhibit biases or produce content that is offensive or inappropriate. The developers of this repository do not endorse, support, or promote any such content or viewpoints. - Limitations: The large language model is an AI-based tool and not a human. It may produce incorrect, nonsensical, or irrelevant responses. It is the user's responsibility to critically evaluate the generated content and use it at their discretion. - Use at Your Own Risk: Users of this large language model must assume full responsibility for any consequences that may arise from their use of the tool. The developers and contributors of this repository shall not be held liable for any damages, losses, or harm resulting from the use or misuse of the provided model. - Ethical Considerations: Users are encouraged to use the large language model responsibly and ethically. By using this model, you agree not to use it for purposes that promote hate speech, discrimination, harassment, or any form of illegal or harmful activities. - Reporting Issues: If you encounter any biased, offensive, or otherwise inappropriate content generated by the large language model, please report it to the repository maintainers through the provided channels. Your feedback will help improve the model and mitigate potential issues. - Changes to this Disclaimer: The developers of this repository reserve the right to modify or update this disclaimer at any time without prior notice. It is the user's responsibility to periodically review the disclaimer to stay informed about any changes. By using the large language model provided in this repository, you agree to accept and comply with the terms and conditions outlined in this disclaimer. If you do not agree with any part of this disclaimer, you should refrain from using the model and any content generated by it.
Acopa/ControlNet_SDXL_256
Acopa
2024-05-11T02:14:15Z
6
0
diffusers
[ "diffusers", "safetensors", "stable-diffusion-xl", "stable-diffusion-xl-diffusers", "text-to-image", "controlnet", "diffusers-training", "base_model:stabilityai/stable-diffusion-xl-base-1.0", "base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0", "license:openrail++", "region:us" ]
text-to-image
2024-05-11T01:33:58Z
--- license: openrail++ library_name: diffusers tags: - stable-diffusion-xl - stable-diffusion-xl-diffusers - text-to-image - diffusers - controlnet - diffusers-training base_model: stabilityai/stable-diffusion-xl-base-1.0 inference: true --- <!-- This model card has been generated automatically according to the information the training script had access to. You should probably proofread and complete it, then remove this comment. --> # controlnet-Acopa/ControlNet_SDXL_256 These are ControlNet weights trained on stabilityai/stable-diffusion-xl-base-1.0 with a new type of conditioning. ## Intended uses & limitations #### How to use ```python # TODO: add an example code snippet for running this diffusion pipeline ``` (see the hedged sketch at the end of this card) #### Limitations and bias [TODO: provide examples of latent issues and potential remediations] ## Training details [TODO: describe the data used to train the model]
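A hedged usage sketch for the TODO under "How to use" above, assuming this checkpoint follows the standard `diffusers` ControlNet SDXL pipeline; the conditioning image type for this checkpoint is not documented, so `conditioning.png` is a hypothetical placeholder:

```python
import torch
from diffusers import ControlNetModel, StableDiffusionXLControlNetPipeline
from diffusers.utils import load_image

controlnet = ControlNetModel.from_pretrained(
    "Acopa/ControlNet_SDXL_256", torch_dtype=torch.float16
)
pipe = StableDiffusionXLControlNetPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    controlnet=controlnet,
    torch_dtype=torch.float16,
).to("cuda")

# hypothetical conditioning image; use whatever this checkpoint was trained on
conditioning = load_image("conditioning.png")
image = pipe("a detailed prompt", image=conditioning).images[0]
image.save("output.png")
```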
ddh0/Phi-3-mini-4k-instruct-bf16-GGUF
ddh0
2024-05-11T02:12:06Z
58
3
null
[ "gguf", "text-generation", "license:llama3", "endpoints_compatible", "region:us", "conversational" ]
text-generation
2024-05-10T21:28:56Z
--- license: llama3 pipeline_tag: text-generation --- # Phi-3-mini-4k-instruct-bf16-GGUF This is [microsoft/Phi-3-mini-4k-instruct](https://huggingface.co/microsoft/Phi-3-mini-4k-instruct), converted to GGUF without changing the tensor data type.
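As a hedged example, the GGUF file could be loaded with `llama-cpp-python`; the filename below is inferred from the repository name, so check the Files tab for the actual name:

```python
from llama_cpp import Llama

# filename is an assumption based on the repository name
llm = Llama(model_path="Phi-3-mini-4k-instruct-bf16.gguf", n_ctx=4096)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Summarize what GGUF is in one sentence."}],
    max_tokens=128,
)
print(out["choices"][0]["message"]["content"])
```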
animaRegem/gemma-2b-malayalam-merged
animaRegem
2024-05-11T02:11:42Z
138
1
transformers
[ "transformers", "pytorch", "gemma", "text-generation", "text-generation-inference", "unsloth", "trl", "en", "base_model:unsloth/gemma-2b-bnb-4bit", "base_model:finetune:unsloth/gemma-2b-bnb-4bit", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2024-05-11T02:06:25Z
--- language: - en license: apache-2.0 tags: - text-generation-inference - transformers - unsloth - gemma - trl base_model: unsloth/gemma-2b-bnb-4bit --- # Uploaded model - **Developed by:** animaRegem - **License:** apache-2.0 - **Finetuned from model :** unsloth/gemma-2b-bnb-4bit This gemma model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
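A minimal loading sketch (assuming sufficient GPU memory; nothing below is Unsloth-specific, since the weights are merged):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "animaRegem/gemma-2b-malayalam-merged"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# sample prompt; the model is fine-tuned for Malayalam text generation
inputs = tokenizer("നമസ്കാരം", return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```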
tistak/sn6-0
tistak
2024-05-11T02:11:30Z
8
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-05-03T07:24:15Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
yuxinhhh/hbj
yuxinhhh
2024-05-11T01:59:22Z
0
0
null
[ "license:bigscience-bloom-rail-1.0", "region:us" ]
null
2024-05-11T01:59:22Z
--- license: bigscience-bloom-rail-1.0 ---
mikeHak/lora-1.5xl-acne-face-01
mikeHak
2024-05-11T01:56:54Z
0
0
null
[ "license:apache-2.0", "region:us" ]
null
2024-05-11T01:54:57Z
--- license: apache-2.0 ---
ArkanDash/rvc-genshin-impact
ArkanDash
2024-05-11T01:55:30Z
0
212
null
[ "rvc", "audio-to-audio", "ja", "license:mit", "region:us" ]
audio-to-audio
2023-05-17T10:54:23Z
--- license: mit language: - ja metrics: - accuracy pipeline_tag: audio-to-audio tags: - rvc --- # <center> RVC Genshin Impact Japanese Voice Model # I'M NO LONGER CONTINUING THIS PROJECT. ![model-cover.png](https://huggingface.co/ArkanDash/rvc-genshin-impact/resolve/main/model-cover.png) ## About Retrieval-based Voice Conversion (RVC) Learn more about Retrieval-based Voice Conversion at the link below: [RVC WebUI](https://github.com/RVC-Project/Retrieval-based-Voice-Conversion-WebUI) ## How to use? Download the pre-zipped model and put it into your RVC project. Model test: [Google Colab](https://colab.research.google.com/drive/110kiMZTdP6Ri1lY9-NbQf17GVPPhHyeT?usp=sharing) / [RVC Models New](https://huggingface.co/spaces/ArkanDash/rvc-models-new) (which is basically the same but hosted on Spaces) ## <center> INFO Model created by ArkanDash<br /> The voice that was used in this model belongs to Hoyoverse.<br /> The voice data used to make this model was ripped from the game (versions 3.6–4.2). [All Model Info](https://huggingface.co/ArkanDash/rvc-genshin-impact/blob/main/model_info.md) #### Total Character Models: 62 ## Warning I'm not responsible for the output of this model. Use wisely.
KaggleMasterX/llama3_orpo_5k_ins
KaggleMasterX
2024-05-11T01:52:30Z
0
0
transformers
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2024-05-11T01:50:22Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
eirikrawr/lenchmobno
eirikrawr
2024-05-11T01:51:36Z
0
0
diffusers
[ "diffusers", "text-to-image", "stable-diffusion", "lora", "template:sd-lora", "base_model:ByteDance/Hyper-SD", "base_model:adapter:ByteDance/Hyper-SD", "license:unknown", "region:us" ]
text-to-image
2024-05-11T01:51:22Z
--- tags: - text-to-image - stable-diffusion - lora - diffusers - template:sd-lora widget: - text: gg parameters: negative_prompt: s output: url: images/108042359_139458894451596_44214374104752068_n.jpg base_model: ByteDance/Hyper-SD instance_prompt: grimtorkil license: unknown license_name: ahreketil license_link: LICENSE --- # Lillehagen <Gallery /> ## Model description Voice of character: https://cdn-uploads.huggingface.co/production/uploads/663ecc03e7d4af2dcc91978a/RNV7mGXhFwyA7wv5HX11s.mp4 ## Trigger words You should use `grimtorkil` to trigger the image generation. ## Download model [Download](/eirikrawr/lenchmobno/tree/main) them in the Files & versions tab.
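A hedged sketch for applying the LoRA with `diffusers`. The base checkpoint below is a generic Stable Diffusion model chosen purely for illustration (the listed base, ByteDance/Hyper-SD, is itself a repository of acceleration LoRAs), so swap in whichever base this LoRA was actually trained against:

```python
import torch
from diffusers import AutoPipelineForText2Image

# illustrative base checkpoint, not necessarily the one this LoRA targets
pipe = AutoPipelineForText2Image.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")
pipe.load_lora_weights("eirikrawr/lenchmobno")

# `grimtorkil` is the documented trigger word
image = pipe("grimtorkil, portrait photo").images[0]
image.save("grimtorkil.png")
```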
guilhermebastos96/mms_por_finetuning_fran_1000_balanceado
guilhermebastos96
2024-05-11T01:48:09Z
106
0
transformers
[ "transformers", "safetensors", "vits", "text-to-audio", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
text-to-audio
2024-05-09T06:26:07Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
AnhDuc2507/model_weight_1
AnhDuc2507
2024-05-11T01:29:14Z
109
0
transformers
[ "transformers", "tensorboard", "safetensors", "wav2vec2", "automatic-speech-recognition", "generated_from_trainer", "dataset:common_voice_11_0", "base_model:nguyenvulebinh/wav2vec2-base-vietnamese-250h", "base_model:finetune:nguyenvulebinh/wav2vec2-base-vietnamese-250h", "license:cc-by-nc-4.0", "model-index", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2024-05-10T11:04:36Z
--- license: cc-by-nc-4.0 base_model: nguyenvulebinh/wav2vec2-base-vietnamese-250h tags: - generated_from_trainer datasets: - common_voice_11_0 metrics: - wer model-index: - name: model_weight_1 results: - task: name: Automatic Speech Recognition type: automatic-speech-recognition dataset: name: common_voice_11_0 type: common_voice_11_0 config: vi split: None args: vi metrics: - name: Wer type: wer value: 0.1265011243481173 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # model_weight_1 This model is a fine-tuned version of [nguyenvulebinh/wav2vec2-base-vietnamese-250h](https://huggingface.co/nguyenvulebinh/wav2vec2-base-vietnamese-250h) on the common_voice_11_0 dataset. It achieves the following results on the evaluation set: - Loss: 0.1739 - Wer: 0.1265 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 32 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 1000 - num_epochs: 40 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-------:|:-----:|:---------------:|:------:| | 14.4773 | 1.3928 | 500 | 5.1397 | 1.0002 | | 4.442 | 2.7855 | 1000 | 5.1727 | 1.0 | | 3.9171 | 4.1783 | 1500 | 3.4650 | 0.9913 | | 3.2597 | 5.5710 | 2000 | 2.1658 | 0.8943 | | 2.5676 | 6.9638 | 2500 | 1.4240 | 0.7346 | | 2.0229 | 8.3565 | 3000 | 0.9604 | 0.5685 | | 1.6744 | 9.7493 | 3500 | 0.9651 | 0.4661 | | 1.4788 | 11.1421 | 4000 | 0.7943 | 0.4500 | | 1.3045 | 12.5348 | 4500 | 0.6500 | 0.3282 | | 1.3199 | 13.9276 | 5000 | 0.4307 | 0.3130 | | 1.1017 | 15.3203 | 5500 | 0.7321 | 0.2742 | | 1.0042 | 16.7131 | 6000 | 0.9041 | 0.2408 | | 1.0219 | 18.1058 | 6500 | 0.6662 | 0.2374 | | 0.9303 | 19.4986 | 7000 | 0.7430 | 0.2171 | | 0.8425 | 20.8914 | 7500 | 1.5198 | 0.1954 | | 0.8409 | 22.2841 | 8000 | 0.6491 | 0.1982 | | 0.881 | 23.6769 | 8500 | 0.6060 | 0.1734 | | 0.8061 | 25.0696 | 9000 | 0.4495 | 0.1607 | | 0.7404 | 26.4624 | 9500 | 0.6027 | 0.1630 | | 0.713 | 27.8552 | 10000 | 0.5014 | 0.1542 | | 0.7678 | 29.2479 | 10500 | 0.2076 | 0.1491 | | 0.7059 | 30.6407 | 11000 | 0.2030 | 0.1497 | | 0.6873 | 32.0334 | 11500 | 0.5304 | 0.1390 | | 0.6471 | 33.4262 | 12000 | 0.4658 | 0.1378 | | 0.6007 | 34.8189 | 12500 | 0.1836 | 0.1365 | | 0.6758 | 36.2117 | 13000 | 0.1798 | 0.1314 | | 0.6231 | 37.6045 | 13500 | 0.1793 | 0.1312 | | 0.6034 | 38.9972 | 14000 | 0.1739 | 0.1265 | ### Framework versions - Transformers 4.40.2 - Pytorch 2.2.1+cu121 - Datasets 2.19.1 - Tokenizers 0.19.1
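As a quick-start sketch (not from the original card), the fine-tuned checkpoint can be used through the `transformers` ASR pipeline; the audio filename below is a placeholder:

```python
from transformers import pipeline

# load the fine-tuned wav2vec2 checkpoint for Vietnamese speech recognition
asr = pipeline("automatic-speech-recognition", model="AnhDuc2507/model_weight_1")

# placeholder path; expects a speech audio file (e.g. 16 kHz WAV)
print(asr("sample_vietnamese.wav")["text"])
```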
casque/Doggystyle-10
casque
2024-05-11T01:26:08Z
0
0
null
[ "license:creativeml-openrail-m", "region:us" ]
null
2024-05-11T01:25:48Z
--- license: creativeml-openrail-m ---
yuweiiizz/whisper-small-taiwanese
yuweiiizz
2024-05-11T01:18:52Z
46
0
transformers
[ "transformers", "tensorboard", "safetensors", "whisper", "automatic-speech-recognition", "generated_from_trainer", "nan", "dataset:mozilla-foundation/common_voice_16_1", "base_model:openai/whisper-small", "base_model:finetune:openai/whisper-small", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2024-02-25T17:41:00Z
--- language: - nan license: apache-2.0 base_model: openai/whisper-small tags: - generated_from_trainer datasets: - mozilla-foundation/common_voice_16_1 model-index: - name: Whisper Small Taiwanese results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Whisper Small Taiwanese This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the Common Voice 16.1 dataset. It achieves the following results on the evaluation set: - Loss: 0.9011 - Cer: 50.3995 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 16 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 3.6 ### Training results | Training Loss | Epoch | Step | Cer | Validation Loss | |:-------------:|:-----:|:----:|:-------:|:---------------:| | 1.1421 | 0.4 | 1000 | 61.1639 | 1.1692 | | 1.0556 | 0.8 | 2000 | 51.7749 | 1.0215 | | 0.7837 | 1.2 | 3000 | 54.1978 | 0.9572 | | 0.7332 | 1.6 | 4000 | 50.3966 | 0.9230 | | 0.6957 | 2.0 | 5000 | 50.5772 | 0.9064 | | 0.6211 | 2.4 | 6000 | 49.8590 | 0.9177 | | 0.5584 | 2.8 | 7000 | 47.5366 | 0.8962 | | 0.3952 | 3.2 | 8000 | 48.2925 | 0.9025 | | 0.4248 | 3.6 | 9000 | 50.3995 | 0.9011 | ### Framework versions - Transformers 4.40.2 - Pytorch 2.1.2 - Datasets 2.19.1 - Tokenizers 0.19.1
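A minimal usage sketch (not part of the original card); `chunk_length_s` enables transcription of clips longer than 30 seconds, and the audio path is a placeholder:

```python
from transformers import pipeline

# load the fine-tuned Whisper checkpoint for Taiwanese speech
asr = pipeline(
    "automatic-speech-recognition",
    model="yuweiiizz/whisper-small-taiwanese",
    chunk_length_s=30,
)

# placeholder path; replace with a real audio clip
print(asr("taiwanese_clip.wav")["text"])
```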
RichardErkhov/gradientai_-_Llama-3-8B-Instruct-262k-gguf
RichardErkhov
2024-05-11T01:10:45Z
18
1
null
[ "gguf", "arxiv:2309.00071", "arxiv:2402.08268", "arxiv:2305.14233", "endpoints_compatible", "region:us", "conversational" ]
null
2024-05-10T22:41:29Z
Quantization made by Richard Erkhov. [Github](https://github.com/RichardErkhov) [Discord](https://discord.gg/pvy7H8DZMG) [Request more models](https://github.com/RichardErkhov/quant_request) Llama-3-8B-Instruct-262k - GGUF - Model creator: https://huggingface.co/gradientai/ - Original model: https://huggingface.co/gradientai/Llama-3-8B-Instruct-262k/ | Name | Quant method | Size | | ---- | ---- | ---- | | [Llama-3-8B-Instruct-262k.Q2_K.gguf](https://huggingface.co/RichardErkhov/gradientai_-_Llama-3-8B-Instruct-262k-gguf/blob/main/Llama-3-8B-Instruct-262k.Q2_K.gguf) | Q2_K | 2.96GB | | [Llama-3-8B-Instruct-262k.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/gradientai_-_Llama-3-8B-Instruct-262k-gguf/blob/main/Llama-3-8B-Instruct-262k.IQ3_XS.gguf) | IQ3_XS | 3.28GB | | [Llama-3-8B-Instruct-262k.IQ3_S.gguf](https://huggingface.co/RichardErkhov/gradientai_-_Llama-3-8B-Instruct-262k-gguf/blob/main/Llama-3-8B-Instruct-262k.IQ3_S.gguf) | IQ3_S | 3.43GB | | [Llama-3-8B-Instruct-262k.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/gradientai_-_Llama-3-8B-Instruct-262k-gguf/blob/main/Llama-3-8B-Instruct-262k.Q3_K_S.gguf) | Q3_K_S | 3.41GB | | [Llama-3-8B-Instruct-262k.IQ3_M.gguf](https://huggingface.co/RichardErkhov/gradientai_-_Llama-3-8B-Instruct-262k-gguf/blob/main/Llama-3-8B-Instruct-262k.IQ3_M.gguf) | IQ3_M | 3.52GB | | [Llama-3-8B-Instruct-262k.Q3_K.gguf](https://huggingface.co/RichardErkhov/gradientai_-_Llama-3-8B-Instruct-262k-gguf/blob/main/Llama-3-8B-Instruct-262k.Q3_K.gguf) | Q3_K | 3.74GB | | [Llama-3-8B-Instruct-262k.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/gradientai_-_Llama-3-8B-Instruct-262k-gguf/blob/main/Llama-3-8B-Instruct-262k.Q3_K_M.gguf) | Q3_K_M | 3.74GB | | [Llama-3-8B-Instruct-262k.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/gradientai_-_Llama-3-8B-Instruct-262k-gguf/blob/main/Llama-3-8B-Instruct-262k.Q3_K_L.gguf) | Q3_K_L | 4.03GB | | [Llama-3-8B-Instruct-262k.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/gradientai_-_Llama-3-8B-Instruct-262k-gguf/blob/main/Llama-3-8B-Instruct-262k.IQ4_XS.gguf) | IQ4_XS | 4.18GB | | [Llama-3-8B-Instruct-262k.Q4_0.gguf](https://huggingface.co/RichardErkhov/gradientai_-_Llama-3-8B-Instruct-262k-gguf/blob/main/Llama-3-8B-Instruct-262k.Q4_0.gguf) | Q4_0 | 4.34GB | | [Llama-3-8B-Instruct-262k.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/gradientai_-_Llama-3-8B-Instruct-262k-gguf/blob/main/Llama-3-8B-Instruct-262k.IQ4_NL.gguf) | IQ4_NL | 4.38GB | | [Llama-3-8B-Instruct-262k.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/gradientai_-_Llama-3-8B-Instruct-262k-gguf/blob/main/Llama-3-8B-Instruct-262k.Q4_K_S.gguf) | Q4_K_S | 4.37GB | | [Llama-3-8B-Instruct-262k.Q4_K.gguf](https://huggingface.co/RichardErkhov/gradientai_-_Llama-3-8B-Instruct-262k-gguf/blob/main/Llama-3-8B-Instruct-262k.Q4_K.gguf) | Q4_K | 4.58GB | | [Llama-3-8B-Instruct-262k.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/gradientai_-_Llama-3-8B-Instruct-262k-gguf/blob/main/Llama-3-8B-Instruct-262k.Q4_K_M.gguf) | Q4_K_M | 4.58GB | | [Llama-3-8B-Instruct-262k.Q4_1.gguf](https://huggingface.co/RichardErkhov/gradientai_-_Llama-3-8B-Instruct-262k-gguf/blob/main/Llama-3-8B-Instruct-262k.Q4_1.gguf) | Q4_1 | 4.78GB | | [Llama-3-8B-Instruct-262k.Q5_0.gguf](https://huggingface.co/RichardErkhov/gradientai_-_Llama-3-8B-Instruct-262k-gguf/blob/main/Llama-3-8B-Instruct-262k.Q5_0.gguf) | Q5_0 | 5.21GB | | 
[Llama-3-8B-Instruct-262k.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/gradientai_-_Llama-3-8B-Instruct-262k-gguf/blob/main/Llama-3-8B-Instruct-262k.Q5_K_S.gguf) | Q5_K_S | 5.21GB | | [Llama-3-8B-Instruct-262k.Q5_K.gguf](https://huggingface.co/RichardErkhov/gradientai_-_Llama-3-8B-Instruct-262k-gguf/blob/main/Llama-3-8B-Instruct-262k.Q5_K.gguf) | Q5_K | 5.34GB | | [Llama-3-8B-Instruct-262k.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/gradientai_-_Llama-3-8B-Instruct-262k-gguf/blob/main/Llama-3-8B-Instruct-262k.Q5_K_M.gguf) | Q5_K_M | 5.34GB | | [Llama-3-8B-Instruct-262k.Q5_1.gguf](https://huggingface.co/RichardErkhov/gradientai_-_Llama-3-8B-Instruct-262k-gguf/blob/main/Llama-3-8B-Instruct-262k.Q5_1.gguf) | Q5_1 | 5.65GB | | [Llama-3-8B-Instruct-262k.Q6_K.gguf](https://huggingface.co/RichardErkhov/gradientai_-_Llama-3-8B-Instruct-262k-gguf/blob/main/Llama-3-8B-Instruct-262k.Q6_K.gguf) | Q6_K | 6.14GB | Original model description: --- language: - en pipeline_tag: text-generation tags: - meta - llama-3 license: llama3 --- <img src="https://cdn-uploads.huggingface.co/production/uploads/655bb613e8a8971e89944f3e/TSa3V8YpoVagnTYgxiLaO.png" width="200"/> # Llama-3 8B Gradient Instruct 262k Join our custom agent and long context (262k-1M+) waitlist: https://forms.gle/L6TDY7dozx8TuoUv7 Gradient incorporates your data to deploy autonomous assistants that power critical operations across your business. To learn more or collaborate on a custom model, drop us a message at contact@gradient.ai. [Join our Discord](https://discord.com/invite/2QVy2qt2mf) This model extends Llama-3 8B's context length from 8k to > 160K, developed by Gradient, sponsored by compute from [Crusoe Energy](https://huggingface.co/crusoeai). It demonstrates that SOTA LLMs can learn to operate on long context with minimal training (< 200M tokens) by appropriately adjusting RoPE theta. **Update (5/3): We further fine-tuned our model to strengthen its assistant-like chat ability as well. The NIAH result is updated.** ![image/png](https://cdn-uploads.huggingface.co/production/uploads/644fac0ce1d7a97f3b653ab1/s9T8L-6Jh5fYH6Q_88r3g.png) **Approach:** - [meta-llama/Meta-Llama-3-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct) as the base - NTK-aware interpolation [1] to initialize an optimal schedule for RoPE theta, followed by a new data-driven RoPE theta optimization technique - Progressive training on increasing context lengths, similar to the [Large World Model](https://huggingface.co/LargeWorldModel) [2] (see details below) **Infra:** We build on top of the EasyContext Blockwise RingAttention library [3] to scalably and efficiently train on contexts up to 262144 tokens on the [Crusoe Energy](https://huggingface.co/crusoeai) high-performance L40S cluster. **Quantized versions and GGUF** GGUF is available on Crusoe's Hugging Face account. Check it out here: [crusoeai/Llama-3-8B-Instruct-262k-GGUF](https://huggingface.co/crusoeai/Llama-3-8B-Instruct-262k-GGUF) **Exl2 quantized versions** Exl2 is available on Bullerwins's Hugging Face account.
Check it out here: [8.0bpw exl2](https://huggingface.co/bullerwins/gradientai_Llama-3-8B-Instruct-262k_exl2_8.0bpw) [6.0bpw exl2](https://huggingface.co/bullerwins/gradientai_Llama-3-8B-Instruct-262k_exl2_6.0bpw) [5.0bpw exl2](https://huggingface.co/bullerwins/gradientai_Llama-3-8B-Instruct-262k_exl2_5.0bpw) **Updated Exl2 quants for 5/3 improved weights** [8.0bpw exl2](https://huggingface.co/bullerwins/gradientai_Llama-3-8B-Instruct-262k_v2_exl2_8.0bpw) [6.0bpw exl2](https://huggingface.co/bullerwins/gradientai_Llama-3-8B-Instruct-262k_v2_exl2_6.0bpw) [5.0bpw exl2](https://huggingface.co/bullerwins/gradientai_Llama-3-8B-Instruct-262k_v2_exl2_5.0bpw) **Data:** For training data, we generate long contexts by augmenting [SlimPajama](https://huggingface.co/datasets/cerebras/SlimPajama-627B). We also fine-tune on a chat dataset based on UltraChat [4], following a similar recipe for data augmentation to [2]. **Progressive Training Details:** | Parameter | 65K | 262K | |-----------------------------|----------------|------------| | Initialize From | LLaMA-3-8B-Inst| 65K | | Sequence Length | 2^16 | 2^18 | | RoPE theta | 15.3 M | 207.1 M | | Batch Size (Tokens / Step) | 2.097 M | 4.192 M | | Steps | 30 | 24 | | Total Tokens | 63 M | 101 M | | Learning Rate | 2.00E-05 | 2.00E-05 | | # GPUs | 8 | 32 | | GPU Type | NVIDIA L40S | NVIDIA L40S| **Evaluation Details:** ``` EVAL_MAX_CONTEXT_LENGTH=320200 EVAL_MIN_CONTEXT_LENGTH=100 EVAL_CONTEXT_INTERVAL=16000 EVAL_DEPTH_INTERVAL=0.2 EVAL_NUM_SAMPLES=2 EVAL_RND_NUMBER_DIGITS=8 HAYSTACK: EVAL_GENERATOR_TOKENS=925000 ``` Haystack is "haystack 3", further detailed in this [blog post](https://gradient.ai/blog/the-haystack-matters-for-niah-evals). ## The Gradient AI Team https://gradient.ai/ Gradient is accelerating AI transformation across industries. Our AI Foundry incorporates your data to deploy autonomous assistants that power critical operations across your business. ## Contact Us Drop an email to [contact@gradient.ai](mailto:contact@gradient.ai) ## References [1] Peng, Bowen, et al. "Yarn: Efficient context window extension of large language models." arXiv preprint arXiv:2309.00071 (2023). [2] Liu, Hao, et al. "World Model on Million-Length Video And Language With RingAttention." arXiv preprint arXiv:2402.08268 (2024). [3] https://github.com/jzhang38/EasyContext [4] Ning Ding, Yulin Chen, Bokai Xu, Yujia Qin, Zhi Zheng, Shengding Hu, Zhiyuan Liu, Maosong Sun, and Bowen Zhou. Enhancing chat language models by scaling high-quality instructional conversations. arXiv preprint arXiv:2305.14233, 2023. ---- # Base Model ## Model Details Meta developed and released the Meta Llama 3 family of large language models (LLMs), a collection of pretrained and instruction tuned generative text models in 8 and 70B sizes. The Llama 3 instruction tuned models are optimized for dialogue use cases and outperform many of the available open source chat models on common industry benchmarks. Further, in developing these models, we took great care to optimize helpfulness and safety. **Model developers** Meta **Variations** Llama 3 comes in two sizes — 8B and 70B parameters — in pre-trained and instruction tuned variants. **Input** Models input text only. **Output** Models generate text and code only. **Model Architecture** Llama 3 is an auto-regressive language model that uses an optimized transformer architecture. The tuned versions use supervised fine-tuning (SFT) and reinforcement learning with human feedback (RLHF) to align with human preferences for helpfulness and safety. 
<table> <tr> <td> </td> <td><strong>Training Data</strong> </td> <td><strong>Params</strong> </td> <td><strong>Context length</strong> </td> <td><strong>GQA</strong> </td> <td><strong>Token count</strong> </td> <td><strong>Knowledge cutoff</strong> </td> </tr> <tr> <td rowspan="2" >Llama 3 </td> <td rowspan="2" >A new mix of publicly available online data. </td> <td>8B </td> <td>8k </td> <td>Yes </td> <td rowspan="2" >15T+ </td> <td>March, 2023 </td> </tr> <tr> <td>70B </td> <td>8k </td> <td>Yes </td> <td>December, 2023 </td> </tr> </table> **Llama 3 family of models**. Token counts refer to pretraining data only. Both the 8 and 70B versions use Grouped-Query Attention (GQA) for improved inference scalability. **Model Release Date** April 18, 2024. **Status** This is a static model trained on an offline dataset. Future versions of the tuned models will be released as we improve model safety with community feedback. **License** A custom commercial license is available at: [https://llama.meta.com/llama3/license](https://llama.meta.com/llama3/license) Where to send questions or comments about the model Instructions on how to provide feedback or comments on the model can be found in the model [README](https://github.com/meta-llama/llama3). For more technical information about generation parameters and recipes for how to use Llama 3 in applications, please go [here](https://github.com/meta-llama/llama-recipes). ## Intended Use **Intended Use Cases** Llama 3 is intended for commercial and research use in English. Instruction tuned models are intended for assistant-like chat, whereas pretrained models can be adapted for a variety of natural language generation tasks. **Out-of-scope** Use in any manner that violates applicable laws or regulations (including trade compliance laws). Use in any other way that is prohibited by the Acceptable Use Policy and Llama 3 Community License. Use in languages other than English**. **Note: Developers may fine-tune Llama 3 models for languages beyond English provided they comply with the Llama 3 Community License and the Acceptable Use Policy. ## How to use This repository contains two versions of Meta-Llama-3-8B-Instruct, for use with transformers and with the original `llama3` codebase. ### Use with transformers You can run conversational inference using the Transformers pipeline abstraction, or by leveraging the Auto classes with the `generate()` function. Let's see examples of both. 
#### Transformers pipeline ```python import transformers import torch model_id = "meta-llama/Meta-Llama-3-8B-Instruct" pipeline = transformers.pipeline( "text-generation", model=model_id, model_kwargs={"torch_dtype": torch.bfloat16}, device_map="auto", ) messages = [ {"role": "system", "content": "You are a pirate chatbot who always responds in pirate speak!"}, {"role": "user", "content": "Who are you?"}, ] prompt = pipeline.tokenizer.apply_chat_template( messages, tokenize=False, add_generation_prompt=True ) terminators = [ pipeline.tokenizer.eos_token_id, pipeline.tokenizer.convert_tokens_to_ids("<|eot_id|>") ] outputs = pipeline( prompt, max_new_tokens=256, eos_token_id=terminators, do_sample=True, temperature=0.6, top_p=0.9, ) print(outputs[0]["generated_text"][len(prompt):]) ``` #### Transformers AutoModelForCausalLM ```python from transformers import AutoTokenizer, AutoModelForCausalLM import torch model_id = "meta-llama/Meta-Llama-3-8B-Instruct" tokenizer = AutoTokenizer.from_pretrained(model_id) model = AutoModelForCausalLM.from_pretrained( model_id, torch_dtype=torch.bfloat16, device_map="auto", ) messages = [ {"role": "system", "content": "You are a pirate chatbot who always responds in pirate speak!"}, {"role": "user", "content": "Who are you?"}, ] input_ids = tokenizer.apply_chat_template( messages, add_generation_prompt=True, return_tensors="pt" ).to(model.device) terminators = [ tokenizer.eos_token_id, tokenizer.convert_tokens_to_ids("<|eot_id|>") ] outputs = model.generate( input_ids, max_new_tokens=256, eos_token_id=terminators, do_sample=True, temperature=0.6, top_p=0.9, ) response = outputs[0][input_ids.shape[-1]:] print(tokenizer.decode(response, skip_special_tokens=True)) ``` ### Use with `llama3` Please, follow the instructions in the [repository](https://github.com/meta-llama/llama3) To download Original checkpoints, see the example command below leveraging `huggingface-cli`: ``` huggingface-cli download meta-llama/Meta-Llama-3-8B-Instruct --include "original/*" --local-dir Meta-Llama-3-8B-Instruct ``` For Hugging Face support, we recommend using transformers or TGI, but a similar command works. ## Hardware and Software **Training Factors** We used custom training libraries, Meta's Research SuperCluster, and production clusters for pretraining. Fine-tuning, annotation, and evaluation were also performed on third-party cloud compute. **Carbon Footprint Pretraining utilized a cumulative** 7.7M GPU hours of computation on hardware of type H100-80GB (TDP of 700W). Estimated total emissions were 2290 tCO2eq, 100% of which were offset by Meta’s sustainability program. <table> <tr> <td> </td> <td><strong>Time (GPU hours)</strong> </td> <td><strong>Power Consumption (W)</strong> </td> <td><strong>Carbon Emitted(tCO2eq)</strong> </td> </tr> <tr> <td>Llama 3 8B </td> <td>1.3M </td> <td>700 </td> <td>390 </td> </tr> <tr> <td>Llama 3 70B </td> <td>6.4M </td> <td>700 </td> <td>1900 </td> </tr> <tr> <td>Total </td> <td>7.7M </td> <td> </td> <td>2290 </td> </tr> </table> **CO2 emissions during pre-training**. Time: total GPU time required for training each model. Power Consumption: peak power capacity per GPU device for the GPUs used adjusted for power usage efficiency. 100% of the emissions are directly offset by Meta's sustainability program, and because we are openly releasing these models, the pretraining costs do not need to be incurred by others. ## Training Data **Overview** Llama 3 was pretrained on over 15 trillion tokens of data from publicly available sources. 
The fine-tuning data includes publicly available instruction datasets, as well as over 10M human-annotated examples. Neither the pretraining nor the fine-tuning datasets include Meta user data. **Data Freshness** The pretraining data has a cutoff of March 2023 for the 7B and December 2023 for the 70B models respectively. ## Benchmarks In this section, we report the results for Llama 3 models on standard automatic benchmarks. For all the evaluations, we use our internal evaluations library. For details on the methodology see [here](https://github.com/meta-llama/llama3/blob/main/eval_methodology.md). ### Base pretrained models <table> <tr> <td><strong>Category</strong> </td> <td><strong>Benchmark</strong> </td> <td><strong>Llama 3 8B</strong> </td> <td><strong>Llama2 7B</strong> </td> <td><strong>Llama2 13B</strong> </td> <td><strong>Llama 3 70B</strong> </td> <td><strong>Llama2 70B</strong> </td> </tr> <tr> <td rowspan="6" >General </td> <td>MMLU (5-shot) </td> <td>66.6 </td> <td>45.7 </td> <td>53.8 </td> <td>79.5 </td> <td>69.7 </td> </tr> <tr> <td>AGIEval English (3-5 shot) </td> <td>45.9 </td> <td>28.8 </td> <td>38.7 </td> <td>63.0 </td> <td>54.8 </td> </tr> <tr> <td>CommonSenseQA (7-shot) </td> <td>72.6 </td> <td>57.6 </td> <td>67.6 </td> <td>83.8 </td> <td>78.7 </td> </tr> <tr> <td>Winogrande (5-shot) </td> <td>76.1 </td> <td>73.3 </td> <td>75.4 </td> <td>83.1 </td> <td>81.8 </td> </tr> <tr> <td>BIG-Bench Hard (3-shot, CoT) </td> <td>61.1 </td> <td>38.1 </td> <td>47.0 </td> <td>81.3 </td> <td>65.7 </td> </tr> <tr> <td>ARC-Challenge (25-shot) </td> <td>78.6 </td> <td>53.7 </td> <td>67.6 </td> <td>93.0 </td> <td>85.3 </td> </tr> <tr> <td>Knowledge reasoning </td> <td>TriviaQA-Wiki (5-shot) </td> <td>78.5 </td> <td>72.1 </td> <td>79.6 </td> <td>89.7 </td> <td>87.5 </td> </tr> <tr> <td rowspan="4" >Reading comprehension </td> <td>SQuAD (1-shot) </td> <td>76.4 </td> <td>72.2 </td> <td>72.1 </td> <td>85.6 </td> <td>82.6 </td> </tr> <tr> <td>QuAC (1-shot, F1) </td> <td>44.4 </td> <td>39.6 </td> <td>44.9 </td> <td>51.1 </td> <td>49.4 </td> </tr> <tr> <td>BoolQ (0-shot) </td> <td>75.7 </td> <td>65.5 </td> <td>66.9 </td> <td>79.0 </td> <td>73.1 </td> </tr> <tr> <td>DROP (3-shot, F1) </td> <td>58.4 </td> <td>37.9 </td> <td>49.8 </td> <td>79.7 </td> <td>70.2 </td> </tr> </table> ### Instruction tuned models <table> <tr> <td><strong>Benchmark</strong> </td> <td><strong>Llama 3 8B</strong> </td> <td><strong>Llama 2 7B</strong> </td> <td><strong>Llama 2 13B</strong> </td> <td><strong>Llama 3 70B</strong> </td> <td><strong>Llama 2 70B</strong> </td> </tr> <tr> <td>MMLU (5-shot) </td> <td>68.4 </td> <td>34.1 </td> <td>47.8 </td> <td>82.0 </td> <td>52.9 </td> </tr> <tr> <td>GPQA (0-shot) </td> <td>34.2 </td> <td>21.7 </td> <td>22.3 </td> <td>39.5 </td> <td>21.0 </td> </tr> <tr> <td>HumanEval (0-shot) </td> <td>62.2 </td> <td>7.9 </td> <td>14.0 </td> <td>81.7 </td> <td>25.6 </td> </tr> <tr> <td>GSM-8K (8-shot, CoT) </td> <td>79.6 </td> <td>25.7 </td> <td>77.4 </td> <td>93.0 </td> <td>57.5 </td> </tr> <tr> <td>MATH (4-shot, CoT) </td> <td>30.0 </td> <td>3.8 </td> <td>6.7 </td> <td>50.4 </td> <td>11.6 </td> </tr> </table> ### Responsibility & Safety We believe that an open approach to AI leads to better, safer products, faster innovation, and a bigger overall market. We are committed to Responsible AI development and took a series of steps to limit misuse and harm and support the open source community. 
Foundation models are widely capable technologies that are built to be used for a diverse range of applications. They are not designed to meet every developer preference on safety levels for all use cases, out-of-the-box, as those by their nature will differ across different applications. Rather, responsible LLM-application deployment is achieved by implementing a series of safety best practices throughout the development of such applications, from the model pre-training, fine-tuning and the deployment of systems composed of safeguards to tailor the safety needs specifically to the use case and audience. As part of the Llama 3 release, we updated our [Responsible Use Guide](https://llama.meta.com/responsible-use-guide/) to outline the steps and best practices for developers to implement model and system level safety for their application. We also provide a set of resources including [Meta Llama Guard 2](https://llama.meta.com/purple-llama/) and [Code Shield](https://llama.meta.com/purple-llama/) safeguards. These tools have proven to drastically reduce residual risks of LLM Systems, while maintaining a high level of helpfulness. We encourage developers to tune and deploy these safeguards according to their needs and we provide a [reference implementation](https://github.com/meta-llama/llama-recipes/tree/main/recipes/responsible_ai) to get you started. #### Llama 3-Instruct As outlined in the Responsible Use Guide, some trade-off between model helpfulness and model alignment is likely unavoidable. Developers should exercise discretion about how to weigh the benefits of alignment and helpfulness for their specific use case and audience. Developers should be mindful of residual risks when using Llama models and leverage additional safety tools as needed to reach the right safety bar for their use case. <span style="text-decoration:underline;">Safety</span> For our instruction tuned model, we conducted extensive red teaming exercises, performed adversarial evaluations and implemented safety mitigations techniques to lower residual risks. As with any Large Language Model, residual risks will likely remain and we recommend that developers assess these risks in the context of their use case. In parallel, we are working with the community to make AI safety benchmark standards transparent, rigorous and interpretable. <span style="text-decoration:underline;">Refusals</span> In addition to residual risks, we put a great emphasis on model refusals to benign prompts. Over-refusing not only can impact the user experience but could even be harmful in certain contexts as well. We’ve heard the feedback from the developer community and improved our fine tuning to ensure that Llama 3 is significantly less likely to falsely refuse to answer prompts than Llama 2. We built internal benchmarks and developed mitigations to limit false refusals making Llama 3 our most helpful model to date. #### Responsible release In addition to responsible use considerations outlined above, we followed a rigorous process that requires us to take extra measures against misuse and critical risks before we make our release decision. Misuse If you access or use Llama 3, you agree to the Acceptable Use Policy. The most recent copy of this policy can be found at [https://llama.meta.com/llama3/use-policy/](https://llama.meta.com/llama3/use-policy/). 
#### Critical risks

<span style="text-decoration:underline;">CBRNE</span> (Chemical, Biological, Radiological, Nuclear, and high-yield Explosives)

We have conducted a twofold assessment of the safety of the model in this area:

* Iterative testing during model training to assess the safety of responses related to CBRNE threats and other adversarial risks.
* Involving external CBRNE experts to conduct an uplift test assessing the ability of the model to accurately provide expert knowledge and reduce barriers to potential CBRNE misuse, by reference to what can be achieved using web search (without the model).

### <span style="text-decoration:underline;">Cyber Security</span>

We have evaluated Llama 3 with CyberSecEval, Meta’s cybersecurity safety eval suite, measuring Llama 3’s propensity to suggest insecure code when used as a coding assistant, and Llama 3’s propensity to comply with requests to help carry out cyber attacks, where attacks are defined by the industry standard MITRE ATT&CK cyber attack ontology. On our insecure coding and cyber attacker helpfulness tests, Llama 3 behaved in the same range or safer than models of [equivalent coding capability](https://huggingface.co/spaces/facebook/CyberSecEval).

### <span style="text-decoration:underline;">Child Safety</span>

Child Safety risk assessments were conducted using a team of experts, to assess the model’s capability to produce outputs that could result in Child Safety risks and inform any necessary and appropriate risk mitigations via fine-tuning. We leveraged those expert red teaming sessions to expand the coverage of our evaluation benchmarks through Llama 3 model development. For Llama 3, we conducted new in-depth sessions using objective-based methodologies to assess the model risks along multiple attack vectors. We also partnered with content specialists to perform red teaming exercises assessing potentially violating content while taking account of market-specific nuances or experiences.

### Community

Generative AI safety requires expertise and tooling, and we believe in the strength of the open community to accelerate its progress. We are active members of open consortiums, including the AI Alliance, Partnership on AI and MLCommons, actively contributing to safety standardization and transparency. We encourage the community to adopt taxonomies like the MLCommons Proof of Concept evaluation to facilitate collaboration and transparency on safety and content evaluations. Our Purple Llama tools are open sourced for the community to use and are widely distributed across ecosystem partners including cloud service providers. We encourage community contributions to our [Github repository](https://github.com/meta-llama/PurpleLlama).

Finally, we put in place a set of resources including an [output reporting mechanism](https://developers.facebook.com/llama_output_feedback) and [bug bounty program](https://www.facebook.com/whitehat) to continuously improve the Llama technology with the help of the community.

## Ethical Considerations and Limitations

The core values of Llama 3 are openness, inclusivity and helpfulness. It is meant to serve everyone, and to work for a wide range of use cases. It is thus designed to be accessible to people across many different backgrounds, experiences and perspectives.
Llama 3 addresses users and their needs as they are, without inserting unnecessary judgment or normativity, while reflecting the understanding that even content that may appear problematic in some cases can serve valuable purposes in others. It respects the dignity and autonomy of all users, especially in terms of the values of free thought and expression that power innovation and progress.

But Llama 3 is a new technology, and like any new technology, there are risks associated with its use. Testing conducted to date has been in English, and has not covered, nor could it cover, all scenarios. For these reasons, as with all LLMs, Llama 3’s potential outputs cannot be predicted in advance, and the model may in some instances produce inaccurate, biased or other objectionable responses to user prompts. Therefore, before deploying any applications of Llama 3 models, developers should perform safety testing and tuning tailored to their specific applications of the model. As outlined in the Responsible Use Guide, we recommend incorporating [Purple Llama](https://github.com/facebookresearch/PurpleLlama) solutions into your workflows and specifically [Llama Guard](https://ai.meta.com/research/publications/llama-guard-llm-based-input-output-safeguard-for-human-ai-conversations/) which provides a base model to filter input and output prompts to layer system-level safety on top of model-level safety.

Please see the Responsible Use Guide available at [http://llama.meta.com/responsible-use-guide](http://llama.meta.com/responsible-use-guide).

## Citation instructions

@article{llama3modelcard,
  title={Llama 3 Model Card},
  author={AI@Meta},
  year={2024},
  url = {https://github.com/meta-llama/llama3/blob/main/MODEL_CARD.md}
}

## Contributors

Aaditya Singh; Aaron Grattafiori; Abhimanyu Dubey; Abhinav Jauhri; Abhinav Pandey; Abhishek Kadian; Adam Kelsey; Adi Gangidi; Ahmad Al-Dahle; Ahuva Goldstand; Aiesha Letman; Ajay Menon; Akhil Mathur; Alan Schelten; Alex Vaughan; Amy Yang; Andrei Lupu; Andres Alvarado; Andrew Gallagher; Andrew Gu; Andrew Ho; Andrew Poulton; Andrew Ryan; Angela Fan; Ankit Ramchandani; Anthony Hartshorn; Archi Mitra; Archie Sravankumar; Artem Korenev; Arun Rao; Ashley Gabriel; Ashwin Bharambe; Assaf Eisenman; Aston Zhang; Aurelien Rodriguez; Austen Gregerson; Ava Spataru; Baptiste Roziere; Ben Maurer; Benjamin Leonhardi; Bernie Huang; Bhargavi Paranjape; Bing Liu; Binh Tang; Bobbie Chern; Brani Stojkovic; Brian Fuller; Catalina Mejia Arenas; Chao Zhou; Charlotte Caucheteux; Chaya Nayak; Ching-Hsiang Chu; Chloe Bi; Chris Cai; Chris Cox; Chris Marra; Chris McConnell; Christian Keller; Christoph Feichtenhofer; Christophe Touret; Chunyang Wu; Corinne Wong; Cristian Canton Ferrer; Damien Allonsius; Daniel Kreymer; Daniel Haziza; Daniel Li; Danielle Pintz; Danny Livshits; Danny Wyatt; David Adkins; David Esiobu; David Xu; Davide Testuggine; Delia David; Devi Parikh; Dhruv Choudhary; Dhruv Mahajan; Diana Liskovich; Diego Garcia-Olano; Diego Perino; Dieuwke Hupkes; Dingkang Wang; Dustin Holland; Egor Lakomkin; Elina Lobanova; Xiaoqing Ellen Tan; Emily Dinan; Eric Smith; Erik Brinkman; Esteban Arcaute; Filip Radenovic; Firat Ozgenel; Francesco Caggioni; Frank Seide; Frank Zhang; Gabriel Synnaeve; Gabriella Schwarz; Gabrielle Lee; Gada Badeer; Georgia Anderson; Graeme Nail; Gregoire Mialon; Guan Pang; Guillem Cucurell; Hailey Nguyen; Hannah Korevaar; Hannah Wang; Haroun Habeeb; Harrison Rudolph; Henry Aspegren; Hu Xu; Hugo Touvron; Iga Kozlowska; Igor Molybog; Igor Tufanov; Iliyan Zarov; Imanol Arrieta
Ibarra; Irina-Elena Veliche; Isabel Kloumann; Ishan Misra; Ivan Evtimov; Jacob Xu; Jade Copet; Jake Weissman; Jan Geffert; Jana Vranes; Japhet Asher; Jason Park; Jay Mahadeokar; Jean-Baptiste Gaya; Jeet Shah; Jelmer van der Linde; Jennifer Chan; Jenny Hong; Jenya Lee; Jeremy Fu; Jeremy Teboul; Jianfeng Chi; Jianyu Huang; Jie Wang; Jiecao Yu; Joanna Bitton; Joe Spisak; Joelle Pineau; Jon Carvill; Jongsoo Park; Joseph Rocca; Joshua Johnstun; Junteng Jia; Kalyan Vasuden Alwala; Kam Hou U; Kate Plawiak; Kartikeya Upasani; Kaushik Veeraraghavan; Ke Li; Kenneth Heafield; Kevin Stone; Khalid El-Arini; Krithika Iyer; Kshitiz Malik; Kuenley Chiu; Kunal Bhalla; Kyle Huang; Lakshya Garg; Lauren Rantala-Yeary; Laurens van der Maaten; Lawrence Chen; Leandro Silva; Lee Bell; Lei Zhang; Liang Tan; Louis Martin; Lovish Madaan; Luca Wehrstedt; Lukas Blecher; Luke de Oliveira; Madeline Muzzi; Madian Khabsa; Manav Avlani; Mannat Singh; Manohar Paluri; Mark Zuckerberg; Marcin Kardas; Martynas Mankus; Mathew Oldham; Mathieu Rita; Matthew Lennie; Maya Pavlova; Meghan Keneally; Melanie Kambadur; Mihir Patel; Mikayel Samvelyan; Mike Clark; Mike Lewis; Min Si; Mitesh Kumar Singh; Mo Metanat; Mona Hassan; Naman Goyal; Narjes Torabi; Nicolas Usunier; Nikolay Bashlykov; Nikolay Bogoychev; Niladri Chatterji; Ning Dong; Oliver Aobo Yang; Olivier Duchenne; Onur Celebi; Parth Parekh; Patrick Alrassy; Paul Saab; Pavan Balaji; Pedro Rittner; Pengchuan Zhang; Pengwei Li; Petar Vasic; Peter Weng; Polina Zvyagina; Prajjwal Bhargava; Pratik Dubal; Praveen Krishnan; Punit Singh Koura; Qing He; Rachel Rodriguez; Ragavan Srinivasan; Rahul Mitra; Ramon Calderer; Raymond Li; Robert Stojnic; Roberta Raileanu; Robin Battey; Rocky Wang; Rohit Girdhar; Rohit Patel; Romain Sauvestre; Ronnie Polidoro; Roshan Sumbaly; Ross Taylor; Ruan Silva; Rui Hou; Rui Wang; Russ Howes; Ruty Rinott; Saghar Hosseini; Sai Jayesh Bondu; Samyak Datta; Sanjay Singh; Sara Chugh; Sargun Dhillon; Satadru Pan; Sean Bell; Sergey Edunov; Shaoliang Nie; Sharan Narang; Sharath Raparthy; Shaun Lindsay; Sheng Feng; Sheng Shen; Shenghao Lin; Shiva Shankar; Shruti Bhosale; Shun Zhang; Simon Vandenhende; Sinong Wang; Seohyun Sonia Kim; Soumya Batra; Sten Sootla; Steve Kehoe; Suchin Gururangan; Sumit Gupta; Sunny Virk; Sydney Borodinsky; Tamar Glaser; Tamar Herman; Tamara Best; Tara Fowler; Thomas Georgiou; Thomas Scialom; Tianhe Li; Todor Mihaylov; Tong Xiao; Ujjwal Karn; Vedanuj Goswami; Vibhor Gupta; Vignesh Ramanathan; Viktor Kerkez; Vinay Satish Kumar; Vincent Gonguet; Vish Vogeti; Vlad Poenaru; Vlad Tiberiu Mihailescu; Vladan Petrovic; Vladimir Ivanov; Wei Li; Weiwei Chu; Wenhan Xiong; Wenyin Fu; Wes Bouaziz; Whitney Meers; Will Constable; Xavier Martinet; Xiaojian Wu; Xinbo Gao; Xinfeng Xie; Xuchao Jia; Yaelle Goldschlag; Yann LeCun; Yashesh Gaur; Yasmine Babaei; Ye Qi; Yenda Li; Yi Wen; Yiwen Song; Youngjin Nam; Yuchen Hao; Yuchen Zhang; Yun Wang; Yuning Mao; Yuzi He; Zacharie Delpierre Coudert; Zachary DeVito; Zahra Hankir; Zhaoduo Wen; Zheng Yan; Zhengxing Chen; Zhenyu Yang; Zoe Papakipos
impossibleexchange/dip2
impossibleexchange
2024-05-11T01:05:38Z
78
0
transformers
[ "transformers", "safetensors", "mixtral", "text-generation", "merge", "mergekit", "lazymergekit", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-05-10T18:19:16Z
---
license: apache-2.0
tags:
- merge
- mergekit
- lazymergekit
---

# TinyLlamaHerd-2x1.1B

TinyLlamaHerd-2x1.1B is a mixture-of-experts merge of two copies of [TinyLlama/TinyLlama-1.1B-intermediate-step-1431k-3T](https://huggingface.co/TinyLlama/TinyLlama-1.1B-intermediate-step-1431k-3T), built with [mergekit](https://github.com/cg123/mergekit) using random gating (see the configuration below).

## 🧩 Configuration

```yaml
base_model: TinyLlama/TinyLlama-1.1B-intermediate-step-1431k-3T
gate_mode: random
dtype: bfloat16
experts:
  - source_model: TinyLlama/TinyLlama-1.1B-intermediate-step-1431k-3T
    positive_prompts: [""]
  - source_model: TinyLlama/TinyLlama-1.1B-intermediate-step-1431k-3T
    positive_prompts: [""]
```
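## 💻 Usage

A minimal usage sketch — this assumes the merged MoE checkpoint loads as a standard `transformers` text-generation model under this repo id:

```python
# Minimal generation sketch for the merged MoE checkpoint.
import torch
from transformers import pipeline

pipe = pipeline(
    "text-generation",
    model="impossibleexchange/dip2",  # repo id hosting TinyLlamaHerd-2x1.1B
    torch_dtype=torch.bfloat16,       # matches the dtype in the merge config
    device_map="auto",
)
out = pipe("The llama is a domesticated South American", max_new_tokens=64, do_sample=True)
print(out[0]["generated_text"])
```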
BEE-spoke-data/mega-ar-350m-L3t-v0.08-ultraTBfw
BEE-spoke-data
2024-05-11T01:01:10Z
12
1
transformers
[ "transformers", "safetensors", "mega", "text-generation", "en", "dataset:BEE-spoke-data/UltraTextbooks-2.1-fw_mix", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2024-05-07T17:27:18Z
--- license: apache-2.0 metrics: - accuracy inference: parameters: max_new_tokens: 64 do_sample: true temperature: 0.7 repetition_penalty: 1.10 no_repeat_ngram_size: 6 eta_cutoff: 0.0008 renormalize_logits: true widget: - text: My name is El Microondas the Wise, and example_title: El Microondas - text: Kennesaw State University is a public example_title: Kennesaw State University - text: >- Bungie Studios is an American video game developer. They are most famous for developing the award winning Halo series of video games. They also made Destiny. The studio was founded example_title: Bungie - text: The Mona Lisa is a world-renowned painting created by example_title: Mona Lisa - text: >- The Harry Potter series, written by J.K. Rowling, begins with the book titled example_title: Harry Potter Series - text: >- Question: I have cities, but no houses. I have mountains, but no trees. I have water, but no fish. What am I? Answer: example_title: Riddle - text: The process of photosynthesis involves the conversion of example_title: Photosynthesis - text: >- Jane went to the store to buy some groceries. She picked up apples, oranges, and a loaf of bread. When she got home, she realized she forgot example_title: Story Continuation - text: >- Problem 2: If a train leaves Station A at 9:00 AM and travels at 60 mph, and another train leaves Station B at 10:00 AM and travels at 80 mph, when will they meet if the distance between the stations is 300 miles? To determine example_title: Math Problem - text: In the context of computer programming, an algorithm is example_title: Algorithm Definition pipeline_tag: text-generation datasets: - BEE-spoke-data/UltraTextbooks-2.1-fw_mix language: - en --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # mega-ar-350m-L3t-v0.08-ultraTBfw ## Model description This is a pretraining experiment most recently trained on the BEE-spoke-data/UltraTextbooks-2.1-fw_mix dataset. 
It achieves the following results on the evaluation set: - Loss: 2.0787 - Accuracy: 0.5746 - Num Input Tokens Seen: 3492282368 ## Quick eval Quick eval for: pszemraj/mega-ar-350m-L3t-v0.08-ultraTBfw hf (pretrained=pszemraj/mega-ar-350m-L3t-v0.08-ultraTBfw,trust_remote_code=True,dtype=float), gen_kwargs: (None), limit: 0.99999, num_fewshot: None, batch_size: 8 | Tasks |Version|Filter|n-shot| Metric | Value | |Stderr| |--------------|------:|------|-----:|----------|------:|---|-----:| |arc_easy | 1|none | 0|acc | 0.4246|± |0.0139| | | |none | 0|acc_norm | 0.4002|± |0.0138| |boolq | 2|none | 0|acc | 0.5762|± |0.0139| |lambada_openai| 1|none | 0|perplexity|76.7162|± |6.3531| | | |none | 0|acc | 0.2605|± |0.0123| |openbookqa | 1|none | 0|acc | 0.1840|± |0.0173| | | |none | 0|acc_norm | 0.2720|± |0.0199| |piqa | 1|none | 0|acc | 0.6377|± |0.0135| | | |none | 0|acc_norm | 0.6172|± |0.0137| |winogrande | 1|none | 0|acc | 0.5020|± |0.0141| ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 4e-05 - train_batch_size: 1 - eval_batch_size: 1 - seed: 80085 - distributed_type: multi-GPU - num_devices: 4 - gradient_accumulation_steps: 32 - total_train_batch_size: 128 - total_eval_batch_size: 4 - optimizer: Adam with betas=(0.9,0.98) and epsilon=1e-08 - lr_scheduler_type: inverse_sqrt - lr_scheduler_warmup_ratio: 0.05 - num_epochs: 1.0 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | Input Tokens Seen | |:-------------:|:------:|:----:|:---------------:|:--------:|:-----------------:| | 2.2572 | 0.0600 | 400 | 2.2462 | 0.5491 | 209715200 | | 2.2173 | 0.1201 | 800 | 2.1939 | 0.5564 | 419430400 | | 2.1992 | 0.1801 | 1200 | 2.1689 | 0.5604 | 629145600 | | 2.1543 | 0.2402 | 1600 | 2.1521 | 0.5632 | 838860800 | | 2.1532 | 0.3002 | 2000 | 2.1401 | 0.5650 | 1048576000 | | 2.1688 | 0.3603 | 2400 | 2.1307 | 0.5663 | 1258291200 | | 2.1443 | 0.4203 | 2800 | 2.1227 | 0.5676 | 1468006400 | | 2.1105 | 0.4804 | 3200 | 2.1158 | 0.5689 | 1677721600 | | 2.1045 | 0.5404 | 3600 | 2.1090 | 0.5700 | 1887436800 | | 2.1181 | 0.6004 | 4000 | 2.1045 | 0.5708 | 2097152000 | | 2.127 | 0.6605 | 4400 | 2.0994 | 0.5716 | 2306867200 | | 2.1265 | 0.7205 | 4800 | 2.0958 | 0.5719 | 2516582400 | | 2.0951 | 0.7806 | 5200 | 2.0909 | 0.5728 | 2726297600 | | 2.0951 | 0.8406 | 5600 | 2.0876 | 0.5733 | 2936012800 | | 2.1335 | 0.9007 | 6000 | 2.0838 | 0.5739 | 3145728000 | | 2.0731 | 0.9607 | 6400 | 2.0802 | 0.5744 | 3355443200 | ### Framework versions - Transformers 4.40.1 - Pytorch 2.3.0+cu121 - Datasets 2.19.0 - Tokenizers 0.19.1
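## Usage (sketch)

A minimal generation sketch using the sampling settings suggested by the inference widget above; `trust_remote_code=True` mirrors the quick-eval invocation:

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

model_id = "BEE-spoke-data/mega-ar-350m-L3t-v0.08-ultraTBfw"
tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(model_id, trust_remote_code=True)

inputs = tokenizer(
    "The process of photosynthesis involves the conversion of", return_tensors="pt"
)
outputs = model.generate(
    **inputs,
    max_new_tokens=64,
    do_sample=True,
    temperature=0.7,
    repetition_penalty=1.10,
    no_repeat_ngram_size=6,
    eta_cutoff=0.0008,
    renormalize_logits=True,
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```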
fine-tuned/jina-embeddings-v2-base-en-5102024-h7o7-webapp
fine-tuned
2024-05-11T00:57:31Z
5
0
sentence-transformers
[ "sentence-transformers", "safetensors", "bert", "feature-extraction", "sentence-similarity", "mteb", "Matchmaking", "Relationships", "Love", "Dating", "Marriage", "custom_code", "en", "dataset:fine-tuned/jina-embeddings-v2-base-en-5102024-h7o7-webapp", "dataset:allenai/c4", "license:apache-2.0", "autotrain_compatible", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
feature-extraction
2024-05-11T00:57:16Z
--- license: apache-2.0 datasets: - fine-tuned/jina-embeddings-v2-base-en-5102024-h7o7-webapp - allenai/c4 language: - en pipeline_tag: feature-extraction tags: - sentence-transformers - feature-extraction - sentence-similarity - mteb - Matchmaking - Relationships - Love - Dating - Marriage --- This model is a fine-tuned version of [**jinaai/jina-embeddings-v2-base-en**](https://huggingface.co/jinaai/jina-embeddings-v2-base-en) designed for the following use case: professional matchmaking services ## How to Use This model can be easily integrated into your NLP pipeline for tasks such as text classification, sentiment analysis, entity recognition, and more. Here's a simple example to get you started: ```python from sentence_transformers import SentenceTransformer from sentence_transformers.util import cos_sim model = SentenceTransformer( 'fine-tuned/jina-embeddings-v2-base-en-5102024-h7o7-webapp', trust_remote_code=True ) embeddings = model.encode([ 'first text to embed', 'second text to embed' ]) print(cos_sim(embeddings[0], embeddings[1])) ```
EthanRhys/Spitz-Current
EthanRhys
2024-05-11T00:53:51Z
0
0
null
[ "license:openrail++", "region:us" ]
null
2024-05-11T00:50:55Z
--- license: openrail++ ---
nbalepur/LLama-2-70b-Mnemonic-Tokenizer
nbalepur
2024-05-11T00:51:13Z
0
0
transformers
[ "transformers", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2024-05-11T00:51:12Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
Holarissun/RM-TLDR_human_loraR64_-1_gemma7b_lr1.41e-05_bs2_g4
Holarissun
2024-05-11T00:43:16Z
0
0
peft
[ "peft", "safetensors", "trl", "reward-trainer", "generated_from_trainer", "base_model:google/gemma-7b", "base_model:adapter:google/gemma-7b", "license:gemma", "region:us" ]
null
2024-05-11T00:43:11Z
--- license: gemma library_name: peft tags: - trl - reward-trainer - generated_from_trainer base_model: google/gemma-7b metrics: - accuracy model-index: - name: RM-TLDR_human_loraR64_-1_gemma7b_lr1.41e-05_bs2_g4 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # RM-TLDR_human_loraR64_-1_gemma7b_lr1.41e-05_bs2_g4 This model is a fine-tuned version of [google/gemma-7b](https://huggingface.co/google/gemma-7b) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.5504 - Accuracy: 0.7435 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1.41e-05 - train_batch_size: 2 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 8 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2.0 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:-----:|:---------------:|:--------:| | 0.4935 | 1.0 | 11168 | 0.5452 | 0.7435 | | 0.4459 | 2.0 | 22336 | 0.5504 | 0.7435 | ### Framework versions - PEFT 0.10.0 - Transformers 4.40.1 - Pytorch 2.1.2+cu121 - Datasets 2.18.0 - Tokenizers 0.19.1
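### Usage (sketch)

A loading sketch; it assumes the LoRA adapter was trained with TRL's `RewardTrainer` on a single-label sequence-classification head over `google/gemma-7b` (the head type is an assumption inferred from the `reward-trainer` tag):

```python
# Load the LoRA reward-model adapter on top of google/gemma-7b.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer
from peft import PeftModel

base = AutoModelForSequenceClassification.from_pretrained(
    "google/gemma-7b", num_labels=1, torch_dtype=torch.bfloat16
)
model = PeftModel.from_pretrained(
    base, "Holarissun/RM-TLDR_human_loraR64_-1_gemma7b_lr1.41e-05_bs2_g4"
)
tokenizer = AutoTokenizer.from_pretrained("google/gemma-7b")

# Score a candidate TL;DR summary; a higher logit indicates a preferred summary.
inputs = tokenizer("POST: ...\n\nTL;DR: ...", return_tensors="pt")
with torch.no_grad():
    print(model(**inputs).logits[0].item())
```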
acrobatlm/e2erag-arctic-m
acrobatlm
2024-05-11T00:39:36Z
84
0
transformers
[ "transformers", "safetensors", "bert", "feature-extraction", "arxiv:1910.09700", "text-embeddings-inference", "endpoints_compatible", "4-bit", "bitsandbytes", "region:us" ]
feature-extraction
2024-05-10T00:06:26Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
realtreetune/rho-interpreter-1b-sft-MATH
realtreetune
2024-05-11T00:34:13Z
140
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-05-11T00:32:21Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
lucyknada/Edgerunners_meta-llama-3-8b-instruct-hf-ortho-baukit-10fail-1000total-GGUF
lucyknada
2024-05-11T00:33:45Z
3
0
null
[ "gguf", "license:cc-by-nc-4.0", "endpoints_compatible", "region:us", "conversational" ]
null
2024-05-10T17:48:22Z
---
license: cc-by-nc-4.0
---

Test results: works great, just like the smaller models. This version had only 10 refusals out of 1000 ortho-tests.

---

An implementation (using wassname's updated baukit code) of the paper https://www.alignmentforum.org/posts/jGuXSZgv6qfdhMCuJ/refusal-in-llms-is-mediated-by-a-single-direction, applied to llama3 8b instruct.

1. The Model is meant purely for alignment research and exploration of alignmentforum theory.
2. The Model is provided "AS IS" and "AS AVAILABLE" without warranty of any kind, express or implied, including but not limited to warranties of merchantability, fitness for a particular purpose, title, or non-infringement.
3. The Provider disclaims all liability for any damages or losses resulting from the use or misuse of the Model, including but not limited to any damages or losses arising from the use of the Model for purposes other than those intended by the Provider.
4. The Provider does not endorse or condone the use of the Model for any purpose that violates applicable laws, regulations, or ethical standards.
5. The Provider does not warrant that the Model will meet your specific requirements or that it will be error-free or that it will function without interruption.
6. You assume all risks associated with the use of the Model, including but not limited to any loss of data, loss of business, or damage to your reputation.
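For local inference, a minimal `llama-cpp-python` sketch — the `.gguf` filename below is an assumption; check the repository's file list for the actual name:

```python
from llama_cpp import Llama

# Filename is hypothetical -- substitute the real .gguf file from this repo.
llm = Llama(model_path="meta-llama-3-8b-instruct-ortho.Q4_K_M.gguf", n_ctx=8192)
out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Summarize the single-direction refusal finding."}],
    max_tokens=256,
)
print(out["choices"][0]["message"]["content"])
```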
ikura31/mistral_docs_sum_p1_full
ikura31
2024-05-11T00:32:16Z
6
0
transformers
[ "transformers", "tensorboard", "safetensors", "mistral", "text-generation", "generated_from_trainer", "conversational", "base_model:mistralai/Mistral-7B-Instruct-v0.1", "base_model:finetune:mistralai/Mistral-7B-Instruct-v0.1", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-05-08T16:01:06Z
--- license: apache-2.0 tags: - generated_from_trainer base_model: mistralai/Mistral-7B-Instruct-v0.1 model-index: - name: mistral_docs_sum_p1_full results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # mistral_docs_sum_p1_full This model is a fine-tuned version of [mistralai/Mistral-7B-Instruct-v0.1](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.1) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.5829 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 3.6e-05 - train_batch_size: 4 - eval_batch_size: 4 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 1 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:------:|:----:|:---------------:| | 2.1167 | 0.0277 | 200 | 2.1333 | | 2.3428 | 0.0553 | 400 | 1.6966 | | 1.3784 | 0.0830 | 600 | 1.4972 | | 1.456 | 0.1107 | 800 | 1.3942 | | 1.3227 | 0.1383 | 1000 | 1.3084 | | 1.2535 | 0.1660 | 1200 | 1.2001 | | 1.0612 | 0.1937 | 1400 | 1.0451 | | 0.8815 | 0.2213 | 1600 | 0.9632 | | 0.8971 | 0.2490 | 1800 | 0.9132 | | 0.7908 | 0.2767 | 2000 | 0.8712 | | 0.7549 | 0.3043 | 2200 | 0.8309 | | 0.8099 | 0.3320 | 2400 | 0.8058 | | 0.6891 | 0.3597 | 2600 | 0.7879 | | 0.5204 | 0.3873 | 2800 | 0.7684 | | 0.6249 | 0.4150 | 3000 | 0.7515 | | 0.6764 | 0.4427 | 3200 | 0.7342 | | 0.6996 | 0.4703 | 3400 | 0.7214 | | 0.6371 | 0.4980 | 3600 | 0.7084 | | 0.6694 | 0.5257 | 3800 | 0.6951 | | 0.7048 | 0.5533 | 4000 | 0.6845 | | 0.7265 | 0.5810 | 4200 | 0.6778 | | 0.5663 | 0.6087 | 4400 | 0.6657 | | 0.6222 | 0.6363 | 4600 | 0.6595 | | 0.6463 | 0.6640 | 4800 | 0.6488 | | 0.5754 | 0.6917 | 5000 | 0.6410 | | 0.6208 | 0.7193 | 5200 | 0.6363 | | 0.5613 | 0.7470 | 5400 | 0.6275 | | 0.6316 | 0.7747 | 5600 | 0.6227 | | 0.6564 | 0.8023 | 5800 | 0.6159 | | 0.633 | 0.8300 | 6000 | 0.6077 | | 0.5268 | 0.8577 | 6200 | 0.6022 | | 0.4166 | 0.8853 | 6400 | 0.5978 | | 0.6539 | 0.9130 | 6600 | 0.5926 | | 0.5695 | 0.9407 | 6800 | 0.5875 | | 0.6358 | 0.9683 | 7000 | 0.5845 | | 0.5318 | 0.9960 | 7200 | 0.5829 | ### Framework versions - Transformers 4.40.1 - Pytorch 2.2.1+cu121 - Datasets 2.19.0 - Tokenizers 0.19.1
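### Usage (sketch)

A minimal inference sketch; it assumes the checkpoint keeps the base Mistral-Instruct chat template:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "ikura31/mistral_docs_sum_p1_full"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.float16, device_map="auto"
)

messages = [{"role": "user", "content": "Summarize the following document:\n\n..."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
output = model.generate(input_ids, max_new_tokens=256)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```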
Holarissun/dpo_harmlessharmless_contrast_subset20000_modelgpt2_maxsteps5000_bz8_lr1e-05
Holarissun
2024-05-11T00:22:59Z
1
0
peft
[ "peft", "safetensors", "trl", "dpo", "generated_from_trainer", "base_model:openai-community/gpt2", "base_model:adapter:openai-community/gpt2", "license:mit", "region:us" ]
null
2024-05-11T00:22:57Z
--- license: mit library_name: peft tags: - trl - dpo - generated_from_trainer base_model: gpt2 model-index: - name: dpo_harmlessharmless_contrast_subset20000_modelgpt2_maxsteps5000_bz8_lr1e-05 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # dpo_harmlessharmless_contrast_subset20000_modelgpt2_maxsteps5000_bz8_lr1e-05 This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 1 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 8 - total_train_batch_size: 8 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 15 - training_steps: 5000 ### Training results ### Framework versions - PEFT 0.9.0 - Transformers 4.38.2 - Pytorch 2.1.2 - Datasets 2.18.0 - Tokenizers 0.15.2
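### Usage (sketch)

A loading sketch; `AutoPeftModelForCausalLM` pulls the `gpt2` base weights and attaches the DPO-trained LoRA adapter (the Human/Assistant prompt format below is an assumption based on the harmless-dialogue task name):

```python
from transformers import AutoTokenizer
from peft import AutoPeftModelForCausalLM

adapter_id = "Holarissun/dpo_harmlessharmless_contrast_subset20000_modelgpt2_maxsteps5000_bz8_lr1e-05"
model = AutoPeftModelForCausalLM.from_pretrained(adapter_id)  # loads gpt2 + adapter
tokenizer = AutoTokenizer.from_pretrained("gpt2")

prompt = "Human: How do I stay safe online?\n\nAssistant:"
inputs = tokenizer(prompt, return_tensors="pt")
output = model.generate(**inputs, max_new_tokens=64, pad_token_id=tokenizer.eos_token_id)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```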
overloud/celinho
overloud
2024-05-11T00:21:33Z
0
0
null
[ "license:cc-by-nc-2.0", "region:us" ]
null
2024-05-11T00:20:12Z
--- license: cc-by-nc-2.0 ---
HarshilPatel1905/flant5_peft_model_emotion_detection
HarshilPatel1905
2024-05-11T00:17:30Z
0
0
transformers
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2024-05-11T00:17:17Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
RichardErkhov/beomi_-_Llama-3-Open-Ko-8B-gguf
RichardErkhov
2024-05-11T00:17:18Z
37
0
null
[ "gguf", "arxiv:2310.04799", "endpoints_compatible", "region:us", "conversational" ]
null
2024-05-10T21:41:58Z
Quantization made by Richard Erkhov.

[Github](https://github.com/RichardErkhov)

[Discord](https://discord.gg/pvy7H8DZMG)

[Request more models](https://github.com/RichardErkhov/quant_request)

Llama-3-Open-Ko-8B - GGUF
- Model creator: https://huggingface.co/beomi/
- Original model: https://huggingface.co/beomi/Llama-3-Open-Ko-8B/

| Name | Quant method | Size |
| ---- | ---- | ---- |
| [Llama-3-Open-Ko-8B.Q2_K.gguf](https://huggingface.co/RichardErkhov/beomi_-_Llama-3-Open-Ko-8B-gguf/blob/main/Llama-3-Open-Ko-8B.Q2_K.gguf) | Q2_K | 2.96GB |
| [Llama-3-Open-Ko-8B.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/beomi_-_Llama-3-Open-Ko-8B-gguf/blob/main/Llama-3-Open-Ko-8B.IQ3_XS.gguf) | IQ3_XS | 3.28GB |
| [Llama-3-Open-Ko-8B.IQ3_S.gguf](https://huggingface.co/RichardErkhov/beomi_-_Llama-3-Open-Ko-8B-gguf/blob/main/Llama-3-Open-Ko-8B.IQ3_S.gguf) | IQ3_S | 3.43GB |
| [Llama-3-Open-Ko-8B.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/beomi_-_Llama-3-Open-Ko-8B-gguf/blob/main/Llama-3-Open-Ko-8B.Q3_K_S.gguf) | Q3_K_S | 3.41GB |
| [Llama-3-Open-Ko-8B.IQ3_M.gguf](https://huggingface.co/RichardErkhov/beomi_-_Llama-3-Open-Ko-8B-gguf/blob/main/Llama-3-Open-Ko-8B.IQ3_M.gguf) | IQ3_M | 3.52GB |
| [Llama-3-Open-Ko-8B.Q3_K.gguf](https://huggingface.co/RichardErkhov/beomi_-_Llama-3-Open-Ko-8B-gguf/blob/main/Llama-3-Open-Ko-8B.Q3_K.gguf) | Q3_K | 3.74GB |
| [Llama-3-Open-Ko-8B.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/beomi_-_Llama-3-Open-Ko-8B-gguf/blob/main/Llama-3-Open-Ko-8B.Q3_K_M.gguf) | Q3_K_M | 3.74GB |
| [Llama-3-Open-Ko-8B.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/beomi_-_Llama-3-Open-Ko-8B-gguf/blob/main/Llama-3-Open-Ko-8B.Q3_K_L.gguf) | Q3_K_L | 4.03GB |
| [Llama-3-Open-Ko-8B.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/beomi_-_Llama-3-Open-Ko-8B-gguf/blob/main/Llama-3-Open-Ko-8B.IQ4_XS.gguf) | IQ4_XS | 4.18GB |
| [Llama-3-Open-Ko-8B.Q4_0.gguf](https://huggingface.co/RichardErkhov/beomi_-_Llama-3-Open-Ko-8B-gguf/blob/main/Llama-3-Open-Ko-8B.Q4_0.gguf) | Q4_0 | 4.34GB |
| [Llama-3-Open-Ko-8B.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/beomi_-_Llama-3-Open-Ko-8B-gguf/blob/main/Llama-3-Open-Ko-8B.IQ4_NL.gguf) | IQ4_NL | 4.38GB |
| [Llama-3-Open-Ko-8B.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/beomi_-_Llama-3-Open-Ko-8B-gguf/blob/main/Llama-3-Open-Ko-8B.Q4_K_S.gguf) | Q4_K_S | 4.37GB |
| [Llama-3-Open-Ko-8B.Q4_K.gguf](https://huggingface.co/RichardErkhov/beomi_-_Llama-3-Open-Ko-8B-gguf/blob/main/Llama-3-Open-Ko-8B.Q4_K.gguf) | Q4_K | 4.58GB |
| [Llama-3-Open-Ko-8B.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/beomi_-_Llama-3-Open-Ko-8B-gguf/blob/main/Llama-3-Open-Ko-8B.Q4_K_M.gguf) | Q4_K_M | 4.58GB |
| [Llama-3-Open-Ko-8B.Q4_1.gguf](https://huggingface.co/RichardErkhov/beomi_-_Llama-3-Open-Ko-8B-gguf/blob/main/Llama-3-Open-Ko-8B.Q4_1.gguf) | Q4_1 | 4.78GB |
| [Llama-3-Open-Ko-8B.Q5_0.gguf](https://huggingface.co/RichardErkhov/beomi_-_Llama-3-Open-Ko-8B-gguf/blob/main/Llama-3-Open-Ko-8B.Q5_0.gguf) | Q5_0 | 5.21GB |
| [Llama-3-Open-Ko-8B.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/beomi_-_Llama-3-Open-Ko-8B-gguf/blob/main/Llama-3-Open-Ko-8B.Q5_K_S.gguf) | Q5_K_S | 5.21GB |
| [Llama-3-Open-Ko-8B.Q5_K.gguf](https://huggingface.co/RichardErkhov/beomi_-_Llama-3-Open-Ko-8B-gguf/blob/main/Llama-3-Open-Ko-8B.Q5_K.gguf) | Q5_K | 5.34GB |
| [Llama-3-Open-Ko-8B.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/beomi_-_Llama-3-Open-Ko-8B-gguf/blob/main/Llama-3-Open-Ko-8B.Q5_K_M.gguf) | Q5_K_M | 5.34GB |
| [Llama-3-Open-Ko-8B.Q5_1.gguf](https://huggingface.co/RichardErkhov/beomi_-_Llama-3-Open-Ko-8B-gguf/blob/main/Llama-3-Open-Ko-8B.Q5_1.gguf) | Q5_1 | 5.65GB |
| [Llama-3-Open-Ko-8B.Q6_K.gguf](https://huggingface.co/RichardErkhov/beomi_-_Llama-3-Open-Ko-8B-gguf/blob/main/Llama-3-Open-Ko-8B.Q6_K.gguf) | Q6_K | 6.14GB |

Original model description:

---
language:
- en
- ko
license: other
tags:
- facebook
- meta
- pytorch
- llama
- llama-3
- llama-3-ko
pipeline_tag: text-generation
license_name: llama3
license_link: LICENSE
---

> Update @ 2024.05.01: Pre-Release [Llama-3-KoEn-8B](https://huggingface.co/beomi/Llama-3-KoEn-8B-preview) model & [Llama-3-KoEn-8B-Instruct-preview](https://huggingface.co/beomi/Llama-3-KoEn-8B-Instruct-preview)

> Update @ 2024.04.24: Release Llama-3-Open-Ko-8B model & [Llama-3-Open-Ko-8B-Instruct-preview](https://huggingface.co/beomi/Llama-3-Open-Ko-8B-Instruct-preview)

## Model Details

**Llama-3-Open-Ko-8B**

Llama-3-Open-Ko-8B is a continued-pretraining language model based on Llama-3-8B. The model was trained entirely on publicly available resources, with 60GB+ of deduplicated text. With the new Llama-3 tokenizer, pretraining used 17.7B+ tokens, slightly more than with the Korean tokenizer (the Llama-2-Ko tokenizer). Training was done on a TPU v5e-256, with the warm support of Google's TRC program.

**Note for [Llama-3-Open-Ko-8B-Instruct-preview](https://huggingface.co/beomi/Llama-3-Open-Ko-8B-Instruct-preview)**

Applying the idea from the [Chat Vector paper](https://arxiv.org/abs/2310.04799), I released an instruction model named [Llama-3-Open-Ko-8B-Instruct-preview](https://huggingface.co/beomi/Llama-3-Open-Ko-8B-Instruct-preview). It is NOT finetuned on any Korean instruction set (hence `preview`), but it should be a great starting point for creating new Chat/Instruct models.

**Meta Llama-3**

Meta developed and released the Meta Llama 3 family of large language models (LLMs), a collection of pretrained and instruction tuned generative text models in 8 and 70B sizes. The Llama 3 instruction tuned models are optimized for dialogue use cases and outperform many of the available open source chat models on common industry benchmarks. Further, in developing these models, we took great care to optimize helpfulness and safety.

**Model developers** Junbum Lee (Beomi)

**Variations** Llama-3-Open-Ko comes in one size — 8B.

**Input** Models input text only.

**Output** Models generate text and code only.

**Model Architecture** Llama 3 is an auto-regressive language model that uses an optimized transformer architecture.

<table>
<tr> <td></td> <td><strong>Training Data</strong></td> <td><strong>Params</strong></td> <td><strong>Context length</strong></td> <td><strong>GQA</strong></td> <td><strong>Token count</strong></td> <td><strong>Knowledge cutoff</strong></td> </tr>
<tr> <td rowspan="2">Llama-3-Open-Ko</td> <td rowspan="2">Same as *Open-Solar-Ko Dataset</td> <td>8B</td> <td>8k</td> <td>Yes</td> <td rowspan="2">17.7B+</td> <td>Jun, 2023</td> </tr>
</table>

*You can find the dataset list here: https://huggingface.co/beomi/OPEN-SOLAR-KO-10.7B/tree/main/corpus

**Model Release Date** 2024.04.24.

**Status** This is a static model trained on an offline dataset.

**License** Llama3 License: [https://llama.meta.com/llama3/license](https://llama.meta.com/llama3/license)

## Intended Use

**Intended Use Cases** Llama 3 is intended for commercial and research use in English.
Instruction tuned models are intended for assistant-like chat, whereas pretrained models can be adapted for a variety of natural language generation tasks.

**Out-of-scope** Use in any manner that violates applicable laws or regulations (including trade compliance laws). Use in any other way that is prohibited by the Acceptable Use Policy and Llama 3 Community License. Use in languages other than English**.

**Note: Developers may fine-tune Llama 3 models for languages beyond English provided they comply with the Llama 3 Community License and the Acceptable Use Policy.

## How to use

TBD

### Responsibility & Safety

We believe that an open approach to AI leads to better, safer products, faster innovation, and a bigger overall market. We are committed to Responsible AI development and took a series of steps to limit misuse and harm and support the open source community.

Foundation models are widely capable technologies that are built to be used for a diverse range of applications. They are not designed to meet every developer preference on safety levels for all use cases out of the box, as those will by their nature differ across applications.

Rather, responsible LLM-application deployment is achieved by implementing a series of safety best practices throughout the development of such applications, from model pre-training and fine-tuning to the deployment of systems composed of safeguards that tailor safety to the specific use case and audience.

As part of the Llama 3 release, we updated our [Responsible Use Guide](https://llama.meta.com/responsible-use-guide/) to outline the steps and best practices for developers to implement model and system level safety for their application. We also provide a set of resources including [Meta Llama Guard 2](https://llama.meta.com/purple-llama/) and [Code Shield](https://llama.meta.com/purple-llama/) safeguards. These tools have proven to drastically reduce residual risks of LLM systems, while maintaining a high level of helpfulness. We encourage developers to tune and deploy these safeguards according to their needs and we provide a [reference implementation](https://github.com/meta-llama/llama-recipes/tree/main/recipes/responsible_ai) to get you started.

#### Responsible release

In addition to the responsible use considerations outlined above, we followed a rigorous process that requires us to take extra measures against misuse and critical risks before we make our release decision.

Misuse

If you access or use Llama 3, you agree to the Acceptable Use Policy. The most recent copy of this policy can be found at [https://llama.meta.com/llama3/use-policy/](https://llama.meta.com/llama3/use-policy/).

## Ethical Considerations and Limitations

The core values of Llama 3 are openness, inclusivity and helpfulness. It is meant to serve everyone, and to work for a wide range of use cases. It is thus designed to be accessible to people across many different backgrounds, experiences and perspectives. Llama 3 addresses users and their needs as they are, without inserting unnecessary judgment or normativity, while reflecting the understanding that even content that may appear problematic in some cases can serve valuable purposes in others. It respects the dignity and autonomy of all users, especially in terms of the values of free thought and expression that power innovation and progress.

But Llama 3 is a new technology, and like any new technology, there are risks associated with its use.
Testing conducted to date has been in English, and has not covered, nor could it cover, all scenarios. For these reasons, as with all LLMs, Llama 3’s potential outputs cannot be predicted in advance, and the model may in some instances produce inaccurate, biased or other objectionable responses to user prompts. Therefore, before deploying any applications of Llama 3 models, developers should perform safety testing and tuning tailored to their specific applications of the model. As outlined in the Responsible Use Guide, we recommend incorporating [Purple Llama](https://github.com/facebookresearch/PurpleLlama) solutions into your workflows and specifically [Llama Guard](https://ai.meta.com/research/publications/llama-guard-llm-based-input-output-safeguard-for-human-ai-conversations/) which provides a base model to filter input and output prompts to layer system-level safety on top of model-level safety. Please see the Responsible Use Guide available at [http://llama.meta.com/responsible-use-guide](http://llama.meta.com/responsible-use-guide) ## Citation instructions **Llama-3-Open-Ko** ``` @article{llama3openko, title={Llama-3-Open-Ko}, author={L, Junbum}, year={2024}, url={https://huggingface.co/beomi/Llama-3-Open-Ko-8B} } ``` **Original Llama-3** ``` @article{llama3modelcard, title={Llama 3 Model Card}, author={AI@Meta}, year={2024}, url = {https://github.com/meta-llama/llama3/blob/main/MODEL_CARD.md} } ```
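The Chat Vector recipe mentioned in the Llama-3-Open-Ko card above can be sketched in a few lines. This is a hedged illustration of the general technique from the paper, not the author's exact procedure; it assumes all three checkpoints share the same architecture and tokenizer (which the card states), and the output name is hypothetical.

```python
# Hedged sketch of the Chat Vector idea (arXiv:2310.04799), not beomi's exact recipe:
# new_instruct = ko_base + (en_instruct - en_base), applied parameter by parameter.
# Requires enough CPU RAM to hold three 8B models at once.
import torch
from transformers import AutoModelForCausalLM

en_base = AutoModelForCausalLM.from_pretrained("meta-llama/Meta-Llama-3-8B", torch_dtype=torch.bfloat16)
en_inst = AutoModelForCausalLM.from_pretrained("meta-llama/Meta-Llama-3-8B-Instruct", torch_dtype=torch.bfloat16)
ko_base = AutoModelForCausalLM.from_pretrained("beomi/Llama-3-Open-Ko-8B", torch_dtype=torch.bfloat16)

with torch.no_grad():
    for p_ko, p_base, p_inst in zip(ko_base.parameters(), en_base.parameters(), en_inst.parameters()):
        p_ko.add_(p_inst - p_base)  # add the "chat vector" delta to the Korean model

ko_base.save_pretrained("Llama-3-Ko-chat-vector-preview")  # hypothetical output name
```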
SLDGroup/GreedyViG
SLDGroup
2024-05-11T00:12:04Z
0
1
null
[ "license:apache-2.0", "region:us" ]
null
2024-05-11T00:03:02Z
--- license: apache-2.0 ---
adarshheg/merged-adapter-test
adarshheg
2024-05-11T00:05:17Z
139
0
transformers
[ "transformers", "safetensors", "gpt2", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-05-11T00:04:48Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
Acopa/sdxl_controlnet_deep_fash_test
Acopa
2024-05-11T00:04:22Z
1
0
diffusers
[ "diffusers", "safetensors", "stable-diffusion-xl", "stable-diffusion-xl-diffusers", "text-to-image", "controlnet", "diffusers-training", "base_model:stabilityai/stable-diffusion-xl-base-1.0", "base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0", "license:openrail++", "region:us" ]
text-to-image
2024-05-10T22:26:32Z
--- license: openrail++ library_name: diffusers tags: - stable-diffusion-xl - stable-diffusion-xl-diffusers - text-to-image - diffusers - controlnet - diffusers-training base_model: stabilityai/stable-diffusion-xl-base-1.0 inference: true --- <!-- This model card has been generated automatically according to the information the training script had access to. You should probably proofread and complete it, then remove this comment. --> # controlnet-Acopa/sdxl_controlnet These are ControlNet weights trained on stabilityai/stable-diffusion-xl-base-1.0 with a new type of conditioning. ## Intended uses & limitations #### How to use ```python # TODO: add an example code snippet for running this diffusion pipeline ``` #### Limitations and bias [TODO: provide examples of latent issues and potential remediations] ## Training details [TODO: describe the data used to train the model]
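Since the card's How-to section is still a TODO, here is a minimal, hedged sketch of how SDXL ControlNet weights like these are typically loaded with diffusers. The conditioning type for this checkpoint is undocumented, so the conditioning image path and the prompt below are placeholders, not values from the card.

```python
# Minimal sketch (assumed usage; the conditioning type for this checkpoint is undocumented).
import torch
from diffusers import ControlNetModel, StableDiffusionXLControlNetPipeline
from diffusers.utils import load_image

controlnet = ControlNetModel.from_pretrained(
    "Acopa/sdxl_controlnet_deep_fash_test", torch_dtype=torch.float16
)
pipe = StableDiffusionXLControlNetPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", controlnet=controlnet, torch_dtype=torch.float16
).to("cuda")

cond = load_image("conditioning.png")  # placeholder conditioning image
image = pipe("a fashion photo", image=cond, num_inference_steps=30).images[0]
image.save("out.png")
```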
ayaaa123/my_awesome_clm_model
ayaaa123
2024-05-10T23:58:13Z
62
0
transformers
[ "transformers", "tf", "gpt2", "text-generation", "generated_from_keras_callback", "base_model:distilbert/distilgpt2", "base_model:finetune:distilbert/distilgpt2", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2024-05-10T23:57:10Z
--- license: apache-2.0 base_model: distilgpt2 tags: - generated_from_keras_callback model-index: - name: ayaaa123/my_awesome_clm_model results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # ayaaa123/my_awesome_clm_model This model is a fine-tuned version of [distilgpt2](https://huggingface.co/distilgpt2) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 4.6000 - Validation Loss: 4.0725 - Epoch: 0 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 2e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0.01} - training_precision: float32 ### Training results | Train Loss | Validation Loss | Epoch | |:----------:|:---------------:|:-----:| | 4.6000 | 4.0725 | 0 | ### Framework versions - Transformers 4.40.2 - TensorFlow 2.15.0 - Datasets 2.19.1 - Tokenizers 0.19.1
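For reference, generation with this checkpoint should follow the standard TF causal-LM pattern. A hedged sketch; it assumes the tokenizer was pushed alongside the model, otherwise fall back to the distilgpt2 tokenizer.

```python
# Hedged sketch: sampling from the fine-tuned TF checkpoint described above.
from transformers import AutoTokenizer, TFAutoModelForCausalLM

tok = AutoTokenizer.from_pretrained("ayaaa123/my_awesome_clm_model")
model = TFAutoModelForCausalLM.from_pretrained("ayaaa123/my_awesome_clm_model")

inputs = tok("Once upon a time", return_tensors="tf")
out = model.generate(**inputs, max_new_tokens=40, do_sample=True)
print(tok.decode(out[0], skip_special_tokens=True))
```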
Acopa/sdxl_turbo_lora_test
Acopa
2024-05-10T23:52:26Z
3
0
diffusers
[ "diffusers", "stable-diffusion-xl", "stable-diffusion-xl-diffusers", "text-to-image", "diffusers-training", "lora", "base_model:stabilityai/sdxl-turbo", "base_model:adapter:stabilityai/sdxl-turbo", "license:creativeml-openrail-m", "region:us" ]
text-to-image
2024-05-03T15:06:21Z
--- license: creativeml-openrail-m library_name: diffusers tags: - stable-diffusion-xl - stable-diffusion-xl-diffusers - text-to-image - diffusers - diffusers-training - lora base_model: stabilityai/sdxl-turbo inference: true --- <!-- This model card has been generated automatically according to the information the training script had access to. You should probably proofread and complete it, then remove this comment. --> # LoRA text2image fine-tuning - Acopa/sdxl_turbo_lora_test These are LoRA adaptation weights for stabilityai/sdxl-turbo. The weights were fine-tuned on the lirus18/deepfashion_with_captions dataset. You can find some example images below. LoRA for the text encoder was enabled: False. Special VAE used for training: None. ## Intended uses & limitations #### How to use ```python # TODO: add an example code snippet for running this diffusion pipeline ``` #### Limitations and bias [TODO: provide examples of latent issues and potential remediations] ## Training details [TODO: describe the data used to train the model]
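As the How-to section is a TODO, here is a hedged sketch of the usual way diffusers LoRA weights like these are applied on top of SDXL-Turbo. It assumes the repo contains the standard `pytorch_lora_weights.safetensors` produced by the diffusers training script; the prompt is a placeholder.

```python
# Hedged sketch: applying these LoRA weights on top of SDXL-Turbo.
import torch
from diffusers import AutoPipelineForText2Image

pipe = AutoPipelineForText2Image.from_pretrained(
    "stabilityai/sdxl-turbo", torch_dtype=torch.float16
).to("cuda")
pipe.load_lora_weights("Acopa/sdxl_turbo_lora_test")

# SDXL-Turbo is normally run with very few steps and guidance disabled.
image = pipe("a model wearing a red dress", num_inference_steps=4, guidance_scale=0.0).images[0]
image.save("out.png")
```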
Passion1987/corgy_car_LoRA
Passion1987
2024-05-10T23:51:44Z
1
0
diffusers
[ "diffusers", "tensorboard", "text-to-image", "diffusers-training", "dora", "template:sd-lora", "stable-diffusion-xl", "stable-diffusion-xl-diffusers", "base_model:stabilityai/stable-diffusion-xl-base-1.0", "base_model:finetune:stabilityai/stable-diffusion-xl-base-1.0", "license:openrail++", "region:us" ]
text-to-image
2024-05-10T19:14:35Z
--- license: openrail++ library_name: diffusers tags: - text-to-image - diffusers-training - diffusers - dora - template:sd-lora - stable-diffusion-xl - stable-diffusion-xl-diffusers base_model: stabilityai/stable-diffusion-xl-base-1.0 instance_prompt: a photo of car widget: [] --- <!-- This model card has been generated automatically according to the information the training script had access to. You should probably proofread and complete it, then remove this comment. --> # SDXL LoRA DreamBooth - Passion1987/corgy_car_LoRA <Gallery /> ## Model description These are Passion1987/corgy_car_LoRA LoRA adaptation weights for stabilityai/stable-diffusion-xl-base-1.0. The weights were trained using [DreamBooth](https://dreambooth.github.io/). LoRA for the text encoder was enabled: False. Special VAE used for training: madebyollin/sdxl-vae-fp16-fix. ## Trigger words You should use `a photo of car` to trigger the image generation. ## Download model Weights for this model are available in Safetensors format. [Download](Passion1987/corgy_car_LoRA/tree/main) them in the Files & versions tab. ## Intended uses & limitations #### How to use ```python # TODO: add an example code snippet for running this diffusion pipeline ``` #### Limitations and bias [TODO: provide examples of latent issues and potential remediations] ## Training details [TODO: describe the data used to train the model]
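The How-to section is likewise a TODO; here is a hedged sketch using the trigger phrase from the card (it assumes standard diffusers LoRA weights in the repo, and the rest of the prompt is a placeholder).

```python
# Hedged sketch: DreamBooth LoRA inference with the card's trigger phrase.
import torch
from diffusers import AutoPipelineForText2Image

pipe = AutoPipelineForText2Image.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")
pipe.load_lora_weights("Passion1987/corgy_car_LoRA")

image = pipe("a photo of car parked on a mountain road").images[0]  # trigger: "a photo of car"
image.save("car.png")
```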
upendrab/llama-3-sqlcoder-8b-Q4_K_M-GGUF
upendrab
2024-05-10T23:48:44Z
11
3
null
[ "gguf", "code", "llama-cpp", "gguf-my-repo", "text-generation", "license:cc-by-sa-4.0", "endpoints_compatible", "region:us", "conversational" ]
text-generation
2024-05-10T23:48:27Z
--- license: cc-by-sa-4.0 tags: - code - llama-cpp - gguf-my-repo metrics: - accuracy pipeline_tag: text-generation --- # upendrab/llama-3-sqlcoder-8b-Q4_K_M-GGUF This model was converted to GGUF format from [`defog/llama-3-sqlcoder-8b`](https://huggingface.co/defog/llama-3-sqlcoder-8b) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space. Refer to the [original model card](https://huggingface.co/defog/llama-3-sqlcoder-8b) for more details on the model. ## Use with llama.cpp Install llama.cpp through brew. ```bash brew install ggerganov/ggerganov/llama.cpp ``` Invoke the llama.cpp server or the CLI. CLI: ```bash llama-cli --hf-repo upendrab/llama-3-sqlcoder-8b-Q4_K_M-GGUF --model llama-3-sqlcoder-8b.Q4_K_M.gguf -p "The meaning to life and the universe is" ``` Server: ```bash llama-server --hf-repo upendrab/llama-3-sqlcoder-8b-Q4_K_M-GGUF --model llama-3-sqlcoder-8b.Q4_K_M.gguf -c 2048 ``` Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo. ``` git clone https://github.com/ggerganov/llama.cpp && cd llama.cpp && make && ./main -m llama-3-sqlcoder-8b.Q4_K_M.gguf -n 128 ```
FireAndBlood09/annalisa
FireAndBlood09
2024-05-10T23:45:36Z
0
0
adapter-transformers
[ "adapter-transformers", "music", "it", "dataset:HuggingFaceFW/fineweb", "license:openrail", "region:us" ]
null
2024-05-10T23:41:58Z
--- license: openrail datasets: - HuggingFaceFW/fineweb language: - it metrics: - accuracy library_name: adapter-transformers tags: - music ---
strilian/vit-base-oxford-iiit-pets
strilian
2024-05-10T23:31:32Z
216
0
transformers
[ "transformers", "tensorboard", "safetensors", "vit", "image-classification", "generated_from_trainer", "base_model:google/vit-base-patch16-224", "base_model:finetune:google/vit-base-patch16-224", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-classification
2024-05-10T22:45:24Z
--- license: apache-2.0 base_model: google/vit-base-patch16-224 tags: - image-classification - generated_from_trainer metrics: - accuracy model-index: - name: vit-base-oxford-iiit-pets results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # vit-base-oxford-iiit-pets This model is a fine-tuned version of [google/vit-base-patch16-224](https://huggingface.co/google/vit-base-patch16-224) on the pcuenq/oxford-pets dataset. It achieves the following results on the evaluation set: - Loss: 0.4458 - Accuracy: 0.9039 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.609 | 1.0 | 370 | 0.5258 | 0.8336 | | 0.2586 | 2.0 | 740 | 0.3890 | 0.8796 | | 0.1162 | 3.0 | 1110 | 0.5177 | 0.8647 | | 0.0193 | 4.0 | 1480 | 0.3459 | 0.9147 | | 0.0028 | 5.0 | 1850 | 0.3286 | 0.9188 | ### Framework versions - Transformers 4.40.2 - Pytorch 2.3.0+cu121 - Datasets 2.19.1 - Tokenizers 0.19.1
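A hedged inference example for this classifier: any image URL or PIL image works; the URL below is the standard transformers docs test image, not one from the training set.

```python
# Hedged sketch: classifying a pet image with the fine-tuned ViT above.
from transformers import pipeline

clf = pipeline("image-classification", model="strilian/vit-base-oxford-iiit-pets")
preds = clf("http://images.cocodataset.org/val2017/000000039769.jpg")
print(preds)  # top predicted breeds with scores
```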
saaduddinM/Llama7B_mul_small
saaduddinM
2024-05-10T23:30:25Z
0
0
transformers
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2024-05-10T23:30:17Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
Noxy31/IreliaV2
Noxy31
2024-05-10T23:26:07Z
0
0
transformers
[ "transformers", "safetensors", "unsloth", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2024-05-03T19:58:20Z
--- library_name: transformers tags: - unsloth --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
mlx-community/bert-base-multilingual-uncased
mlx-community
2024-05-10T23:14:31Z
0
0
mlx
[ "mlx", "multilingual", "af", "sq", "ar", "an", "hy", "ast", "az", "ba", "eu", "bar", "be", "bn", "inc", "bs", "br", "bg", "my", "ca", "ceb", "ce", "zh", "cv", "hr", "cs", "da", "nl", "en", "et", "fi", "fr", "gl", "ka", "de", "el", "gu", "ht", "he", "hi", "hu", "is", "io", "id", "ga", "it", "ja", "jv", "kn", "kk", "ky", "ko", "la", "lv", "lt", "roa", "nds", "lm", "mk", "mg", "ms", "ml", "mr", "min", "ne", "new", "nb", "nn", "oc", "fa", "pms", "pl", "pt", "pa", "ro", "ru", "sco", "sr", "scn", "sk", "sl", "aze", "es", "su", "sw", "sv", "tl", "tg", "ta", "tt", "te", "tr", "uk", "ud", "uz", "vi", "vo", "war", "cy", "fry", "pnb", "yo", "dataset:wikipedia", "license:apache-2.0", "region:us" ]
null
2024-05-10T23:12:09Z
--- language: - multilingual - af - sq - ar - an - hy - ast - az - ba - eu - bar - be - bn - inc - bs - br - bg - my - ca - ceb - ce - zh - cv - hr - cs - da - nl - en - et - fi - fr - gl - ka - de - el - gu - ht - he - hi - hu - is - io - id - ga - it - ja - jv - kn - kk - ky - ko - la - lv - lt - roa - nds - lm - mk - mg - ms - ml - mr - min - ne - new - nb - nn - oc - fa - pms - pl - pt - pa - ro - ru - sco - sr - hr - scn - sk - sl - aze - es - su - sw - sv - tl - tg - ta - tt - te - tr - uk - ud - uz - vi - vo - war - cy - fry - pnb - yo license: apache-2.0 datasets: - wikipedia library_name: mlx --- [Multilingual BERT](https://huggingface.co/google-bert/bert-base-multilingual-uncased), converted for use with MLX.
pk18354n/opt-6.7b-lora
pk18354n
2024-05-10T23:11:22Z
0
0
transformers
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2024-05-10T23:11:14Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
itayl/Hebrew-Mistral-7B-200K-HeQ600-Q5_K_M-GGUF
itayl
2024-05-10T23:08:43Z
1
0
transformers
[ "transformers", "gguf", "text-generation-inference", "unsloth", "mistral", "trl", "sft", "llama-cpp", "gguf-my-repo", "en", "base_model:yam-peleg/Hebrew-Mistral-7B-200K", "base_model:quantized:yam-peleg/Hebrew-Mistral-7B-200K", "license:apache-2.0", "endpoints_compatible", "region:us", "conversational" ]
null
2024-05-10T23:08:28Z
--- language: - en license: apache-2.0 tags: - text-generation-inference - transformers - unsloth - mistral - trl - sft - llama-cpp - gguf-my-repo base_model: yam-peleg/Hebrew-Mistral-7B-200K --- # itayl/Hebrew-Mistral-7B-200K-HeQ600-Q5_K_M-GGUF This model was converted to GGUF format from [`itayl/Hebrew-Mistral-7B-200K-HeQ600`](https://huggingface.co/itayl/Hebrew-Mistral-7B-200K-HeQ600) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space. Refer to the [original model card](https://huggingface.co/itayl/Hebrew-Mistral-7B-200K-HeQ600) for more details on the model. ## Use with llama.cpp Install llama.cpp through brew. ```bash brew install ggerganov/ggerganov/llama.cpp ``` Invoke the llama.cpp server or the CLI. CLI: ```bash llama-cli --hf-repo itayl/Hebrew-Mistral-7B-200K-HeQ600-Q5_K_M-GGUF --model hebrew-mistral-7b-200k-heq600.Q5_K_M.gguf -p "The meaning to life and the universe is" ``` Server: ```bash llama-server --hf-repo itayl/Hebrew-Mistral-7B-200K-HeQ600-Q5_K_M-GGUF --model hebrew-mistral-7b-200k-heq600.Q5_K_M.gguf -c 2048 ``` Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo. ``` git clone https://github.com/ggerganov/llama.cpp && cd llama.cpp && make && ./main -m hebrew-mistral-7b-200k-heq600.Q5_K_M.gguf -n 128 ```
BlackRose104/Undress.ckpt
BlackRose104
2024-05-10T23:06:26Z
0
0
null
[ "license:creativeml-openrail-m", "region:us" ]
null
2024-05-10T22:50:34Z
--- license: creativeml-openrail-m --- I'm not sure who the creator is, but it's an awesome model. If anyone can turn this into a .safetensors file and upload it here, that would be awesome.
Xerror/Mimi-Chatbot-0.1
Xerror
2024-05-10T22:57:08Z
0
0
transformers
[ "transformers", "safetensors", "text-generation-inference", "unsloth", "llama", "trl", "en", "base_model:meta-llama/Meta-Llama-3-8B", "base_model:finetune:meta-llama/Meta-Llama-3-8B", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2024-05-10T22:56:25Z
--- language: - en license: apache-2.0 tags: - text-generation-inference - transformers - unsloth - llama - trl base_model: meta-llama/Meta-Llama-3-8B --- # Uploaded model - **Developed by:** Xerror - **License:** apache-2.0 - **Finetuned from model:** meta-llama/Meta-Llama-3-8B This Llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
alelov/test-model-label2-MiniLMVERSION2
alelov
2024-05-10T22:56:15Z
10
0
setfit
[ "setfit", "safetensors", "bert", "sentence-transformers", "text-classification", "generated_from_setfit_trainer", "arxiv:2209.11055", "base_model:sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2", "base_model:finetune:sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2", "model-index", "region:us" ]
text-classification
2024-05-10T22:56:01Z
--- library_name: setfit tags: - setfit - sentence-transformers - text-classification - generated_from_setfit_trainer base_model: sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2 metrics: - accuracy widget: - text: Não apenas isso. A bola de neve do endividamento - text: ' Bueno, yo lo que espero es que se traten con respeto, que se quieran. ' - text: ' Sí, pues pedirle a María Luisa que le dé seguimiento y que siga atendiendo las demandas de los ciudadanos de Vallarta, si te parece. Ya ella seguramente nos está viendo y está tomando nota para darle continuidad a las demandas de ambientalistas de Vallarta. ' - text: A confiança na economia despertou o apetite pelo risco, criando instrumentos financeiros indispensáveis à captação de novos recursos para a expansão produtiva. - text: " A ver, pon la carta de Elba Esther. Es que luego la borró. Fue en mayo\ \ del 23, 2 de mayo: ‘Ahí le espero con el Ejército —supuestamente esto\ \ es lo que le dijo Calderón a la maestra Elba Esther, ahí la espero con el\ \ Ejército— esa fue la respuesta del entonces presidente de México, Felipe\ \ Calderón, cuando le dije —según la maestra— que las y los maestros de\ \ México nos oponíamos a que Miguel Ángel Yunes continuara como titular\ \ del Issste, dadas las malversaciones de fondos financieros que con tanto trabajo\ \ las los trabajadores al servicio del Estado logramos con la reforma a dicha\ \ institución. ‘Cuando me comentó que Yunes estaba haciendo bien su trabajo,\ \ no me dejó más alternativa —dice la maestra— que advertirle que tomaríamos\ \ las instalaciones del Issste y justo esa fue su respuesta: Ahí la espero con\ \ el Ejército. Esto sucedió en el marco de un evento público en una escuela\ \ secundaria técnica de la ahora Ciudad de México. Ante su respuesta, me levanté\ \ y me retiré. ‘Recordemos que la elección y remoción del director del Issste\ \ compete única y exclusivamente al titular del Ejecutivo federal y no a una\ \ servidora.’ Aquí me está contestando a mí, porque yo dije que a ella le\ \ habían entregado por ayudar en el fraude, que no me diría la maestra que no\ \ ayudó en el fraude del 2006, y a cambio yo sostengo que le entregaron el Issste,\ \ la Subsecretaría de Educación Pública y la Lotería Nacional. ‘Por ello,\ \ en relación a las declaraciones hechas por el presidente Andrés Manuel López\ \ Obrador el pasado 29 de abril del presente año, sobre mi persona y la gestión\ \ del señor Miguel Ángel Yunes al frente del Issste, le digo categóricamente\ \ que no participé el acto ilícito alguno, como me acusa desde su tribuna’.\ \ Yo no estoy acusando más que de haberse aliado con Calderón y ayudarle en\ \ el fraude electoral. ‘Siempre me he conducido conforme a derecho, de respeto\ \ a las instituciones de este país y, desde luego, a la investidura presidencial.\ \ Por ello, señor presidente, basta de falsas acusaciones a mi persona’. No\ \ es nada personal, maestra, es que estamos viviendo un momento importantísimo\ \ de transformación. Entonces, como el compañero que viene a hacernos preguntas\ \ sobre salud, ayuda a recordar, porque es como si padecieran amnesia, ya se olvidó\ \ cómo era. Y antes esto no lo tocaban, era silencio, como vasallos, obedecer\ \ y callar, siempre y cuando hubiese dinero de por medio, porque lo que no suena\ \ lógico suena metálico. Entonces, hay que ir aclarando todo, seguir purificando\ \ la vida pública del país y por eso son muy buenas estas mañaneras. Pero,\ \ bueno, eso es lo que queríamos decir. ¿Qué se está haciendo? 
Procurar, ya\ \ es un compromiso, garantizar el derecho a la salud. Y vaya que ha costado, por\ \ estos intereses. Imagínense, no se podían comprar medicinas en el extranjero\ \ porque la ley lo prohibía, lo impedía; tuvimos que reformar la ley. ¿Y quiénes\ \ votaron en contra de que se pudiera comprar la medicina en el extranjero? El\ \ bloque conservador. ¿Qué son entonces? Representantes de minorías, no representantes\ \ del pueblo, esa es nuestra diferencia de fondo. No es nada personal, pero sí\ \ es importante el darle su sitio que le corresponde a lo público. República\ \ es, res publica, cosa pública. Si vivimos en una república, tenemos que pensar\ \ en eso, en lo público. Eso ya se había olvidado. Entonces, vamos a continuar\ \ con lo mismo y va adelante todo el plan de transformación. El viernes vamos\ \ a informar sobre salud y luego vamos a informar en específico sobre el Issste,\ \ porque ya llevamos… ¿Cuánto tiempo llevamos? " pipeline_tag: text-classification inference: true model-index: - name: SetFit with sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2 results: - task: type: text-classification name: Text Classification dataset: name: Unknown type: unknown split: test metrics: - type: accuracy value: 0.7889908256880734 name: Accuracy --- # SetFit with sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2 This is a [SetFit](https://github.com/huggingface/setfit) model that can be used for Text Classification. This SetFit model uses [sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2](https://huggingface.co/sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2) as the Sentence Transformer embedding model. A [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance is used for classification. The model has been trained using an efficient few-shot learning technique that involves: 1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning. 2. Training a classification head with features from the fine-tuned Sentence Transformer. 
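A hedged usage sketch for this checkpoint, following the standard SetFit API; the returned label ids correspond to the classes tabulated under Model Labels below, and the input string is one of the card's own widget examples.

```python
# Hedged sketch: few-shot text classification with this SetFit checkpoint.
from setfit import SetFitModel

model = SetFitModel.from_pretrained("alelov/test-model-label2-MiniLMVERSION2")
preds = model.predict([
    "Sí, pues pedirle a María Luisa que le dé seguimiento.",
])
print(preds)  # predicted label ids, e.g. [0]
```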
## Model Details ### Model Description - **Model Type:** SetFit - **Sentence Transformer body:** [sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2](https://huggingface.co/sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2) - **Classification head:** a [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance - **Maximum Sequence Length:** 128 tokens - **Number of Classes:** 4 classes <!-- - **Training Dataset:** [Unknown](https://huggingface.co/datasets/unknown) --> <!-- - **Language:** Unknown --> <!-- - **License:** Unknown --> ### Model Sources - **Repository:** [SetFit on GitHub](https://github.com/huggingface/setfit) - **Paper:** [Efficient Few-Shot Learning Without Prompts](https://arxiv.org/abs/2209.11055) - **Blogpost:** [SetFit: Efficient Few-Shot Learning Without Prompts](https://huggingface.co/blog/setfit) ### Model Labels | Label | Examples | |:------|:---------|
| 0 | <ul><li>' No, si te estamos diciendo todo, nada más pregúntanos; ahora sí que como diría el clásico, tú pregúntame. '</li><li>' Pues nos vamos a poner de acuerdo, pero sí vamos a acatar ese resolutivo. '</li><li>' Lo más pronto posible, mañana. '</li></ul> | | 1 | <ul><li>' Mujer y luego el compañero. '</li><li>' Bueno, que ayer fue un día muy importante, histórico, porque se ofreció perdón a los pueblos yaquis, que han sido víctimas de la represión durante mucho tiempo. Sin duda, es la etnia, es la cultura más reprimida en la historia, los yaquis. Y durante el porfiriato padecieron como una segunda conquista, fue una represión brutal. Se habla, ayer lo mencionaba, lo declara un gobernador de ese entonces de Sonora, que en la guerra contra los yaquis asesinaron a 15 mil yaquis, más los deportados; separaban a hombres y mujeres, los mandaban al sureste a trabajar como esclavos a Valle Nacional a la producción del tabaco, a las haciendas azucareras, al henequén en Yucatán. Una situación bárbara que avergüenza. Como a los mayas y a los mayos, y a los seris y a los totonacos, y a todos los pueblos originarios que padecieron de la represión durante la Colonia, y en el México independiente y en especial durante el porfiriato para quitarles sus tierras, sus bosques, sus aguas, era muy común en ese entonces que en el discurso oficial se hablara de que eran unos salvajes, un poco lo que todavía se escucha, de que los invasores europeos vinieron a civilizarnos. Por eso este año de conmemoraciones el título de todos los eventos, el distintivo es ‘La grandeza cultural de México’, porque esta es una gran nación, donde florecieron civilizaciones extraordinarias, espléndidas: los teotihuacanos, los toltecas, los olmecas, la cultura madre. Estamos hablando de que México se fundó hace miles de años. 
Entonces, ese argumento, que tiene que ver mucho con la opresión y con quien invade tierras o despoja, se usa para cometer todas esas injusticias, hay que descalificar al vencido, hay que justificar la represión. Entonces, eran bárbaros, salvajes. Todos los generales del porfiriato más sanguinarios se formaron reprimiendo a indígenas. Huerta estuvo allá en Quintana Roo reprimiendo pueblos indígenas, Bernardo Reyes estuvo reprimiendo a los yaquis, y todos así se formaron. En ese entonces el Ejército adquirió armamento de lo más sofisticado, fíjense, armas usadas por un ejército formado también por indígenas para reprimir a sus hermanos indígenas. Entonces, por eso fue el acto de ayer con los yaquis. Y no sólo fue ir a ofrecerles perdón y reconocerles por su resistencia, sino fue un acto para reafirmar nuestros compromisos de justicia, porque se está llevando a cabo un plan de justicia a los pueblos yaquis, que tiene tres vertientes: Primero, es el que tengan la tierra que les dotó el general Lázaro Cárdenas. Hubo resoluciones, acuerdos desde la época del general Cárdenas, pero no se cumplieron o empezaron a revertir esos acuerdos y se les quitaron las tierras, no todas, pero no tienen las tierras que se entregaron por resolución presidencial. Entonces, ayer ya entregamos dos mil 900 hectáreas y tenemos el compromiso de entregarles 20 mil. Lo segundo es el agua, porque también en la resolución del general Cárdenas se establecía que las aguas del río Yaqui tenían que ser para los yaquis en un 50 por ciento. Está establecido y esto no sucede; al contrario, los pueblos yaquis no tienen agua ni siquiera para su uso doméstico. Entonces, ahora el compromiso es darles el agua que requieren los pueblos y se está creando con infraestructura moderna, canales, un nuevo distrito de riego que van a manejar ellos, esto es muy importante, por primera vez los pueblos originarios van a manejar un distrito de riego en el país, y van a ser los yaquis, y estamos contemplando una inversión total de cerca de 12 mil millones de pesos. Y tenemos trabajando servidores públicos, a los que les agradezco mucho, porque se han ido allá, Aarón Mastache, que está a cargo del plan hidráulico, porque tenemos 27 meses nada más, tenemos que terminar a finales del 23 para hacer los canales y poder regar hasta 60 mil hectáreas. Lo que el ingeniero Cárdenas planteó ayer es que se hizo en el gobierno anterior pues un acueducto hacia Hermosillo y esto generó una oposición de los pueblos, por lo mismo. Pero nosotros consideramos que con esta decisión que estamos tomando de entregar agua suficiente a los pueblos se puede resolver esta situación, porque también ahora cancelar eso nos va a resultar problemático, ya hay una inversión; además, es el agua para Hermosillo, que también se requiere. Entonces, pensamos que nos va a alcanzar, porque también hay mucho desperdicio de agua, porque como no se invierte en infraestructura hidráulica, no se están recubriendo los canales para que el agua no se desperdicie. 
Y ahora la inversión que vamos a hacer va en ese sentido, o sea, no sólo de extraer más agua, de obtener más agua, que haya más abasto de agua, sino también vamos a rehabilitar todos los sistemas de riego, esto nos va a ayudar a resolver el problema. Ah, y lo tercero es un plan integral para el bienestar, que tiene que ver con vivienda, que tiene que ver con atención médica, escuelas, se está creando la universidad de los pueblos yaquis, una nueva universidad pública, también estamos llegando a acuerdos. Todo esto consultado, aprobado por los gobernadores yaquis, no se impone nada, todo lo que hemos avanzado ha sido con el acuerdo de ellos. Se han llevado a cabo más de 100 asambleas, todo se les presenta, ellos son los que autorizan. Y ayer pues nos reunimos para firmar nuevos acuerdos y fue un hecho histórico muy importante. Y le agradecemos al ingeniero Cárdenas que nos acompañó, y fueron también representantes de otras etnias y de otras culturas del país. '</li><li>' Bueno, en cuanto al documental, no lo he visto, pero si existe una denuncia estoy seguro que la Fiscalía de la Ciudad de México le va a dar seguimiento, y lo he dicho aquí en otras ocasiones, le tengo confianza a la fiscal, a la procuradora Ernestina Godoy. Es una mujer íntegra, honesta, incapaz de encubrir o de ser cómplice de una injusticia. Entonces, es cosa de recurrir a ella. Acerca de los medios, sin enfadarnos, tenemos posturas distintas. Ellos formaron parte de un régimen que, yo sostengo, se dedicó a saquear a México. El distintivo de lo que llamaron neoliberalismo, que yo llamo neoporfirismo, fue el saqueo, el transferir los bienes públicos a particulares, los bienes del pueblo, de la nación a un grupo que le llamaron en su momento ‘grupo compacto’. Es una nueva oligarquía que se creó son Salinas de Gortari. Entonces, a partir de entonces, para poder saquear a sus anchas, pues se creó una red de componendas y complicidades en donde no sólo eran los traficantes de influencia los beneficiados, sino eran dueños de medios de comunicación, periodistas. No todos, desde luego, ni todos los medios de comunicación, pero pues es de dominio público qué medios de comunicación, qué periodistas guardaron silencio cuando estaba saqueando a México como nunca en la historia. Nunca hablaron de corrupción. Además, encubrieron delitos que se cometían como lo que significó la guerra contra el narcotráfico. Es también sabido que Calderón da a conocer que va a iniciar esa guerra y se atreve a decir que va a costar vidas, pero que no importa, y habla incluso de vidas de civiles, de inocentes. Y como fue una guerra con altos niveles de letalidad donde, si había un enfrentamiento, lo hemos visto muchas veces, entre el Ejército, las Fuerzas Armadas y la delincuencia, a los delincuentes, integrantes de bandas que quedaban heridos los remataban, eran masacres. Y estos medios de información, los que están ahora en contra de nosotros, hablando de que no se permite la libertad de expresión, fueron convocados por Calderón y les pidió que guardaran silencio sobre estas atrocidades. Ahora que se está llevando a cabo un cambio, una transformación, pues no sólo están ofendidos los que gozaban de privilegios en lo económico, los traficantes de influencia, sino también los dueños de medios de comunicación y hasta periodistas, desde luego intelectuales orgánicos y hasta gente de la academia, porque así es un proceso de transformación. 
Entonces, no hay que asombrarse, son dos proyectos distintos, contrapuestos de nación. Antes era saquear y repartirse el botín, y a unos les tocaba más, a otros menos, a quien no le tocaba nada era al pueblo raso, porque esta política consiste en que todos los intereses cuentan, menos el interés del pueblo. Entonces, ¿qué significa la transformación?, hacer a un lado a los privilegiados, acabar con la corrupción y todo lo ahorrado, todo lo que se robaban entregarlo a los más necesitados, a la mayoría del pueblo, que es lo que estamos haciendo. Porque lo he dicho en otras ocasiones, no se conformaban con quedarse con una empresa, con un banco, con una mina, además saqueaban también el presupuesto público. Como dicen los abogados, aceptando sin conceder, de que se quedaran con los bienes, con el sofisma, la mentira de que es mejor el sector privado que el sector público y que no hace falta el Estado, que basta con el mercado y que hay que diluir al Estado, desaparecerlo; que tampoco es del todo cierto porque es muy hipócrita ese discurso o esa pedacería de discurso, porque cuando necesitan al Estado lo usan, ¿para qué?, para rescatar a los bancos, para beneficiarse como lo hicieron con el Fobaproa. Si no se necesitara al Estado, ¿para que se rescató a empresarios y a banqueros?, ¿por qué no se dejó al mercado? Si en el libre mercado un comerciante, como sucedió y como sucede, pequeño le va mal, ni modo que el Estado lo rescate. Pero aquí sí lo consideraron válido rescatar a los de arriba y convertir las deudas privadas de unos cuantos en deuda pública con el famoso Fobaproa, que todavía está ahí como una deuda onerosa. Entonces, son dos proyectos distintos, contrapuestos de nación. Y baste… El otro día estaba yo viendo, yo creo que te mandé una foto de cuando Calderón llama… ¿O fue Peña? Creo que Calderón, porque fue cuando estaba la guerra. A ver, busquen eso, es una foto en donde convoca a todos los medios de información para guardar silencio. Creo que fue en el 11, que fue el año de más enfrentamientos y del mayor índice de letalidad, 2011, van a haber ahorita la foto. Son exactamente los que ahora están en contra de nosotros. Es normal y no hay ningún problema. Tú me preguntas: ‘¿Cómo enfrentar eso?’ Pues contextualizando. Repito, no hay texto sin contexto, nada más dando los antecedentes, informando y ya la gente va a tener una idea de las cosas. Antes era muy difícil porque pues eran todos ellos y nosotros no teníamos manera. Ah sí, es Calderón: ‘Celebra Calderón el acuerdo para la cobertura informativa de la violencia’. Pero ¿no puedes abrir la foto? Sí. Y mira, aunque no les vaya, no les guste, tú que los conoces más, ve diciendo quiénes son los que están ahí. O sea, porque, ¿quiénes estaban?, ¿quiénes asistieron?, es un pacto de silencio. A lo mejor sí se distinguen. '</li></ul> | | 2 | <ul><li>' Muy bien. Vamos a pedirle a la maestra Delfina Gómez Álvarez, que ella, que es la secretaria de Educación Pública y que además es muy buena servidora pública, muy buena persona, de muy buenos sentimientos, que ella atienda lo que estás planteando. '</li><li>' Esto no es un asunto de género. Y todavía, cuando votan en contra, invitan a la secretaria a desayunar y le dicen: ‘Debería de estar alegre, si usted va a manejar la Guardia’. 
Fíjense la perversidad: ‘¿Cómo se va usted a inconformar, si usted va a ser la que va a manejar la Guardia?’ Como si fuese Rosa Icela una ambiciosa vulgar, que estuviese pensando en el mando y en el cargo, y ser la jefa de 136 elementos de la Guardia Nacional. Pero la perversidad de los ministros. Y: ‘A ver —como se dieron cuenta de lo que habían hecho—, a ver, vamos a ponernos de acuerdo para que no entre en vigor ahora en enero, que entre en vigor después’. Ni un arreglo en lo oscurito, nada; ya aprobaron que no, no, y vamos de nuevo con una iniciativa. Pero les pongo este ejemplo. Ahora, ¿qué es lo que quieren entonces los conservadores y sus lacayos? ¿Que la Guardia Nacional quede en manos de alguien parecido a…? ¿Cómo se llama el que está detenido? '</li><li>' Es que hay muchas denuncias y hay que demostrar que tengan sustento, no se trata de denunciar por denunciar. Y yo apoyo a Ana Guevara, la considero una buena servidora pública, promotora del deporte. Y eso es lo que puedo comentarle, aunque respeto a los críticos y a quienes no opinan lo mismo. '</li></ul> | | 3 | <ul><li>' Sí, también las mujeres, mujeres, y hombres también, los que no quieran asistir ese día, que quieran participar, también. Es una cuestión de voluntad de cada quien. La única cosa que nosotros recomendamos respetuosamente es que se manifiesten todas las expresiones, todas las corrientes del pensamiento, libra manifestación de ideas, garantizar plenamente o ejercer el derecho a disentir procurando que no haya violencia, eso es lo único y es una recomendación muy respetuosa, que no haya violencia, que se opte por la no violencia, la resistencia civil pacífica siempre, no a la violencia, es lo único. Y cuando digo no violencia no estoy pensando, la verdad, en que el Estado… porque eso ya se terminó, no hay un Estado represor, aquí no hay problema. No, estoy pensando en que haya grupos antagónicos en la calle y pueda haber enfrentamientos, es lo único, porque nosotros no vamos a utilizar la fuerza en nada. Vamos a procurar estar nada más pendiente para auxiliar, para ayudar en todo lo que podamos, para proteger en todo lo que podamos, pero libertades plenas. '</li><li>' Azucena Uréstegui, transmitiendo en vivo. Bueno, cuando se trata de una movilización así transmiten en vivo todos los medios, es como cadena nacional. Pero hasta chistoso porque se quejaba de que se estaba amurallando la ciudad —¿no tienes ahí su texto?— y luego le contesta Pedro Miguel, que no se estaba dando cuenta que ella estaba transmitiendo desde Milenio, que estaba amurallado, porque pidieron protección. Pero es claro que la señora no nos quiere, entonces cualquier cosa la utiliza. Por ejemplo, miren, esto es lo que padecemos diario y es general. A ver si le pones. Pero es como López-Dóriga, como Ciro, como Loret de Mola, el Reforma, El Universal, etcétera, etcétera, más los periódicos de Estados Unidos, el Washington Post, el New York Times, el Wall Street Journal, el Financial Times, El País. De veras, es una de rigor periodístico, están mostrando que son medios que defienden a grupos de intereses creados y que están en contra de gobiernos que buscan combatir la corrupción y ayudar a los pobres. '</li><li>' Llevamos años defendiendo a las mujeres, años, más que muchos, muchísimo tiempo en defensa de las mujeres, luchando por la igualdad, y hay constancia de eso y lo hacemos todos los días, tan es así que yo voy a los estados y estoy hablando con hombres y con mujeres, y no hay esta actitud. 
Últimamente los de Frenaaa, que es así -guardadas las proporciones- un movimiento sectario, los de Frenaa, pero tampoco tuvieron éxito. '</li></ul> | ## Evaluation ### Metrics | Label | Accuracy | |:--------|:---------| | **all** | 0.7890 | ## Uses ### Direct Use for Inference First install the SetFit library: ```bash pip install setfit ``` Then you can load this model and run inference. ```python from setfit import SetFitModel # Download from the 🤗 Hub model = SetFitModel.from_pretrained("alelov/test-model-label2-MiniLMVERSION2") # Run inference preds = model("Não apenas isso. A bola de neve do endividamento") ``` <!-- ### Downstream Use *List how someone could finetune this model on their own dataset.* --> <!-- ### Out-of-Scope Use *List how the model may foreseeably be misused and address what users ought not to do with the model.* --> <!-- ## Bias, Risks and Limitations *What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.* --> <!-- ### Recommendations *What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.* --> ## Training Details ### Training Set Metrics | Training set | Min | Median | Max | |:-------------|:----|:---------|:-----| | Word count | 1 | 103.4095 | 2340 | | Label | Training Sample Count | |:------|:----------------------| | 0 | 315 | | 1 | 18 | | 2 | 12 | | 3 | 14 | ### Training Hyperparameters - batch_size: (16, 16) - num_epochs: (4, 4) - max_steps: -1 - sampling_strategy: oversampling - body_learning_rate: (2e-05, 1e-05) - head_learning_rate: 0.01 - loss: CosineSimilarityLoss - distance_metric: cosine_distance - margin: 0.25 - end_to_end: False - use_amp: False - warmup_proportion: 0.1 - seed: 42 - eval_max_steps: -1 - load_best_model_at_end: True ### Training Results | Epoch | Step | Training Loss | Validation Loss | |:-------:|:--------:|:-------------:|:---------------:| | 0.0002 | 1 | 0.3053 | - | | 0.0080 | 50 | 0.3476 | - | | 0.0160 | 100 | 0.3158 | - | | 0.0239 | 150 | 0.3616 | - | | 0.0319 | 200 | 0.2441 | - | | 0.0399 | 250 | 0.265 | - | | 0.0479 | 300 | 0.2206 | - | | 0.0559 | 350 | 0.1637 | - | | 0.0638 | 400 | 0.1088 | - | | 0.0718 | 450 | 0.0766 | - | | 0.0798 | 500 | 0.0297 | - | | 0.0878 | 550 | 0.0709 | - | | 0.0958 | 600 | 0.018 | - | | 0.1037 | 650 | 0.0359 | - | | 0.1117 | 700 | 0.0111 | - | | 0.1197 | 750 | 0.0512 | - | | 0.1277 | 800 | 0.0022 | - | | 0.1357 | 850 | 0.0011 | - | | 0.1436 | 900 | 0.0036 | - | | 0.1516 | 950 | 0.0021 | - | | 0.1596 | 1000 | 0.0515 | - | | 0.1676 | 1050 | 0.0013 | - | | 0.1756 | 1100 | 0.0193 | - | | 0.1835 | 1150 | 0.0007 | - | | 0.1915 | 1200 | 0.0072 | - | | 0.1995 | 1250 | 0.0004 | - | | 0.2075 | 1300 | 0.0005 | - | | 0.2154 | 1350 | 0.0006 | - | | 0.2234 | 1400 | 0.0014 | - | | 0.2314 | 1450 | 0.0043 | - | | 0.2394 | 1500 | 0.0009 | - | | 0.2474 | 1550 | 0.0005 | - | | 0.2553 | 1600 | 0.0003 | - | | 0.2633 | 1650 | 0.0022 | - | | 0.2713 | 1700 | 0.0037 | - | | 0.2793 | 1750 | 0.0002 | - | | 0.2873 | 1800 | 0.0009 | - | | 0.2952 | 1850 | 0.0089 | - | | 0.3032 | 1900 | 0.0003 | - | | 0.3112 | 1950 | 0.001 | - | | 0.3192 | 2000 | 0.0006 | - | | 0.3272 | 2050 | 0.0005 | - | | 0.3351 | 2100 | 0.0003 | - | | 0.3431 | 2150 | 0.0414 | - | | 0.3511 | 2200 | 0.0136 | - | | 0.3591 | 2250 | 0.0003 | - | | 0.3671 | 2300 | 0.0023 | - | | 0.3750 | 2350 | 0.0002 | - | | 0.3830 | 2400 | 0.0002 | - | | 0.3910 | 2450 | 0.0047 | - | | 0.0002 | 1 | 0.0094 | - | | 0.0080 | 50 | 0.0002 | - | | 
0.0160 | 100 | 0.001 | - | | 0.0239 | 150 | 0.0001 | - | | 0.0319 | 200 | 0.0001 | - | | 0.0399 | 250 | 0.0003 | - | | 0.0479 | 300 | 0.0001 | - | | 0.0559 | 350 | 0.0001 | - | | 0.0638 | 400 | 0.0001 | - | | 0.0718 | 450 | 0.0001 | - | | 0.0798 | 500 | 0.0521 | - | | 0.0878 | 550 | 0.0 | - | | 0.0958 | 600 | 0.0003 | - | | 0.1037 | 650 | 0.0011 | - | | 0.1117 | 700 | 0.0001 | - | | 0.1197 | 750 | 0.0006 | - | | 0.1277 | 800 | 0.0006 | - | | 0.1357 | 850 | 0.0 | - | | 0.1436 | 900 | 0.0001 | - | | 0.1516 | 950 | 0.0001 | - | | 0.1596 | 1000 | 0.0016 | - | | 0.1676 | 1050 | 0.0001 | - | | 0.1756 | 1100 | 0.004 | - | | 0.1835 | 1150 | 0.0 | - | | 0.1915 | 1200 | 0.0001 | - | | 0.1995 | 1250 | 0.002 | - | | 0.2075 | 1300 | 0.0004 | - | | 0.2154 | 1350 | 0.0002 | - | | 0.2234 | 1400 | 0.0001 | - | | 0.2314 | 1450 | 0.008 | - | | 0.2394 | 1500 | 0.0001 | - | | 0.2474 | 1550 | 0.0008 | - | | 0.2553 | 1600 | 0.0001 | - | | 0.2633 | 1650 | 0.0002 | - | | 0.2713 | 1700 | 0.0005 | - | | 0.2793 | 1750 | 0.0 | - | | 0.2873 | 1800 | 0.0 | - | | 0.2952 | 1850 | 0.0001 | - | | 0.3032 | 1900 | 0.0 | - | | 0.3112 | 1950 | 0.0 | - | | 0.3192 | 2000 | 0.0002 | - | | 0.3272 | 2050 | 0.0 | - | | 0.3351 | 2100 | 0.0 | - | | 0.3431 | 2150 | 0.0005 | - | | 0.3511 | 2200 | 0.0008 | - | | 0.3591 | 2250 | 0.0001 | - | | 0.3671 | 2300 | 0.0004 | - | | 0.3750 | 2350 | 0.0 | - | | 0.3830 | 2400 | 0.0 | - | | 0.3910 | 2450 | 0.0002 | - | | 0.3990 | 2500 | 0.0 | - | | 0.4070 | 2550 | 0.0 | - | | 0.4149 | 2600 | 0.0001 | - | | 0.4229 | 2650 | 0.0005 | - | | 0.4309 | 2700 | 0.0 | - | | 0.4389 | 2750 | 0.0002 | - | | 0.4469 | 2800 | 0.0032 | - | | 0.4548 | 2850 | 0.0008 | - | | 0.4628 | 2900 | 0.0001 | - | | 0.4708 | 2950 | 0.0001 | - | | 0.4788 | 3000 | 0.0 | - | | 0.4868 | 3050 | 0.0005 | - | | 0.4947 | 3100 | 0.0 | - | | 0.5027 | 3150 | 0.0001 | - | | 0.5107 | 3200 | 0.0 | - | | 0.5187 | 3250 | 0.0 | - | | 0.5267 | 3300 | 0.0 | - | | 0.5346 | 3350 | 0.0 | - | | 0.5426 | 3400 | 0.0 | - | | 0.5506 | 3450 | 0.0004 | - | | 0.5586 | 3500 | 0.0 | - | | 0.5665 | 3550 | 0.0001 | - | | 0.5745 | 3600 | 0.0 | - | | 0.5825 | 3650 | 0.0 | - | | 0.5905 | 3700 | 0.0003 | - | | 0.5985 | 3750 | 0.0 | - | | 0.6064 | 3800 | 0.0001 | - | | 0.6144 | 3850 | 0.0 | - | | 0.6224 | 3900 | 0.0 | - | | 0.6304 | 3950 | 0.0 | - | | 0.6384 | 4000 | 0.0002 | - | | 0.6463 | 4050 | 0.0001 | - | | 0.6543 | 4100 | 0.0 | - | | 0.6623 | 4150 | 0.0 | - | | 0.6703 | 4200 | 0.0005 | - | | 0.6783 | 4250 | 0.0 | - | | 0.6862 | 4300 | 0.0 | - | | 0.6942 | 4350 | 0.0002 | - | | 0.7022 | 4400 | 0.0 | - | | 0.7102 | 4450 | 0.0 | - | | 0.7182 | 4500 | 0.0 | - | | 0.7261 | 4550 | 0.0 | - | | 0.7341 | 4600 | 0.0001 | - | | 0.7421 | 4650 | 0.0 | - | | 0.7501 | 4700 | 0.0 | - | | 0.7581 | 4750 | 0.0 | - | | 0.7660 | 4800 | 0.0 | - | | 0.7740 | 4850 | 0.0675 | - | | 0.7820 | 4900 | 0.0 | - | | 0.7900 | 4950 | 0.0001 | - | | 0.7980 | 5000 | 0.0 | - | | 0.8059 | 5050 | 0.0 | - | | 0.8139 | 5100 | 0.002 | - | | 0.8219 | 5150 | 0.0003 | - | | 0.8299 | 5200 | 0.0001 | - | | 0.8379 | 5250 | 0.0003 | - | | 0.8458 | 5300 | 0.0001 | - | | 0.8538 | 5350 | 0.0 | - | | 0.8618 | 5400 | 0.0 | - | | 0.8698 | 5450 | 0.0 | - | | 0.8778 | 5500 | 0.0 | - | | 0.8857 | 5550 | 0.0 | - | | 0.8937 | 5600 | 0.0 | - | | 0.9017 | 5650 | 0.0 | - | | 0.9097 | 5700 | 0.0001 | - | | 0.9177 | 5750 | 0.0 | - | | 0.9256 | 5800 | 0.0 | - | | 0.9336 | 5850 | 0.0 | - | | 0.9416 | 5900 | 0.0 | - | | 0.9496 | 5950 | 0.0 | - | | 0.9575 | 6000 | 0.0 | - | | 0.9655 | 6050 | 0.0 | - | | 0.9735 | 6100 | 0.0 | - | | 
0.9815 | 6150 | 0.0003 | - | | 0.9895 | 6200 | 0.0 | - | | 0.9974 | 6250 | 0.0 | - | | **1.0** | **6266** | **-** | **0.2644** | | 1.0054 | 6300 | 0.0 | - | | 1.0134 | 6350 | 0.0 | - | | 1.0214 | 6400 | 0.0 | - | | 1.0294 | 6450 | 0.0 | - | | 1.0373 | 6500 | 0.0 | - | | 1.0453 | 6550 | 0.0004 | - | | 1.0533 | 6600 | 0.0 | - | | 1.0613 | 6650 | 0.0 | - | | 1.0693 | 6700 | 0.0 | - | | 1.0772 | 6750 | 0.0 | - | | 1.0852 | 6800 | 0.0002 | - | | 1.0932 | 6850 | 0.0 | - | | 1.1012 | 6900 | 0.0 | - | | 1.1092 | 6950 | 0.0 | - | | 1.1171 | 7000 | 0.0 | - | | 1.1251 | 7050 | 0.0 | - | | 1.1331 | 7100 | 0.0 | - | | 1.1411 | 7150 | 0.0 | - | | 1.1491 | 7200 | 0.0 | - | | 1.1570 | 7250 | 0.0 | - | | 1.1650 | 7300 | 0.0 | - | | 1.1730 | 7350 | 0.0 | - | | 1.1810 | 7400 | 0.0 | - | | 1.1890 | 7450 | 0.0 | - | | 1.1969 | 7500 | 0.0423 | - | | 1.2049 | 7550 | 0.0 | - | | 1.2129 | 7600 | 0.0 | - | | 1.2209 | 7650 | 0.0 | - | | 1.2289 | 7700 | 0.0007 | - | | 1.2368 | 7750 | 0.0 | - | | 1.2448 | 7800 | 0.0 | - | | 1.2528 | 7850 | 0.0001 | - | | 1.2608 | 7900 | 0.0 | - | | 1.2688 | 7950 | 0.0001 | - | | 1.2767 | 8000 | 0.0 | - | | 1.2847 | 8050 | 0.0 | - | | 1.2927 | 8100 | 0.0 | - | | 1.3007 | 8150 | 0.0001 | - | | 1.3086 | 8200 | 0.0 | - | | 1.3166 | 8250 | 0.0001 | - | | 1.3246 | 8300 | 0.0 | - | | 1.3326 | 8350 | 0.0 | - | | 1.3406 | 8400 | 0.0 | - | | 1.3485 | 8450 | 0.0 | - | | 1.3565 | 8500 | 0.0 | - | | 1.3645 | 8550 | 0.0 | - | | 1.3725 | 8600 | 0.0 | - | | 1.3805 | 8650 | 0.0 | - | | 1.3884 | 8700 | 0.0 | - | | 1.3964 | 8750 | 0.0 | - | | 1.4044 | 8800 | 0.0 | - | | 1.4124 | 8850 | 0.0 | - | | 1.4204 | 8900 | 0.0 | - | | 1.4283 | 8950 | 0.0 | - | | 1.4363 | 9000 | 0.0 | - | | 1.4443 | 9050 | 0.0 | - | | 1.4523 | 9100 | 0.0 | - | | 1.4603 | 9150 | 0.0 | - | | 1.4682 | 9200 | 0.0 | - | | 1.4762 | 9250 | 0.0 | - | | 1.4842 | 9300 | 0.0242 | - | | 1.4922 | 9350 | 0.0 | - | | 1.5002 | 9400 | 0.0001 | - | | 1.5081 | 9450 | 0.0 | - | | 1.5161 | 9500 | 0.0 | - | | 1.5241 | 9550 | 0.0 | - | | 1.5321 | 9600 | 0.0 | - | | 1.5401 | 9650 | 0.0 | - | | 1.5480 | 9700 | 0.0 | - | | 1.5560 | 9750 | 0.0 | - | | 1.5640 | 9800 | 0.0 | - | | 1.5720 | 9850 | 0.0 | - | | 1.5800 | 9900 | 0.0 | - | | 1.5879 | 9950 | 0.0 | - | | 1.5959 | 10000 | 0.0 | - | | 1.6039 | 10050 | 0.0 | - | | 1.6119 | 10100 | 0.0 | - | | 1.6199 | 10150 | 0.0 | - | | 1.6278 | 10200 | 0.0002 | - | | 1.6358 | 10250 | 0.0001 | - | | 1.6438 | 10300 | 0.0 | - | | 1.6518 | 10350 | 0.0 | - | | 1.6598 | 10400 | 0.0 | - | | 1.6677 | 10450 | 0.0 | - | | 1.6757 | 10500 | 0.0 | - | | 1.6837 | 10550 | 0.0 | - | | 1.6917 | 10600 | 0.0 | - | | 1.6996 | 10650 | 0.0 | - | | 1.7076 | 10700 | 0.0 | - | | 1.7156 | 10750 | 0.0 | - | | 1.7236 | 10800 | 0.0 | - | | 1.7316 | 10850 | 0.0 | - | | 1.7395 | 10900 | 0.0 | - | | 1.7475 | 10950 | 0.0 | - | | 1.7555 | 11000 | 0.0 | - | | 1.7635 | 11050 | 0.0 | - | | 1.7715 | 11100 | 0.0 | - | | 1.7794 | 11150 | 0.0 | - | | 1.7874 | 11200 | 0.0002 | - | | 1.7954 | 11250 | 0.0228 | - | | 1.8034 | 11300 | 0.0 | - | | 1.8114 | 11350 | 0.0 | - | | 1.8193 | 11400 | 0.0 | - | | 1.8273 | 11450 | 0.0 | - | | 1.8353 | 11500 | 0.0 | - | | 1.8433 | 11550 | 0.0 | - | | 1.8513 | 11600 | 0.0 | - | | 1.8592 | 11650 | 0.0 | - | | 1.8672 | 11700 | 0.0 | - | | 1.8752 | 11750 | 0.0 | - | | 1.8832 | 11800 | 0.0 | - | | 1.8912 | 11850 | 0.0 | - | | 1.8991 | 11900 | 0.0 | - | | 1.9071 | 11950 | 0.0 | - | | 1.9151 | 12000 | 0.0 | - | | 1.9231 | 12050 | 0.0 | - | | 1.9311 | 12100 | 0.0 | - | | 1.9390 | 12150 | 0.0 | - | | 1.9470 | 12200 | 0.0 | - | | 
1.9550 | 12250 | 0.0 | - | | 1.9630 | 12300 | 0.0 | - | | 1.9710 | 12350 | 0.0 | - | | 1.9789 | 12400 | 0.0 | - | | 1.9869 | 12450 | 0.0 | - | | 1.9949 | 12500 | 0.0 | - | | 2.0 | 12532 | - | 0.2568 | | 2.0029 | 12550 | 0.0 | - | | 2.0109 | 12600 | 0.0 | - | | 2.0188 | 12650 | 0.0 | - | | 2.0268 | 12700 | 0.0 | - | | 2.0348 | 12750 | 0.0 | - | | 2.0428 | 12800 | 0.0 | - | | 2.0508 | 12850 | 0.0 | - | | 2.0587 | 12900 | 0.0 | - | | 2.0667 | 12950 | 0.0 | - | | 2.0747 | 13000 | 0.0 | - | | 2.0827 | 13050 | 0.0 | - | | 2.0906 | 13100 | 0.0 | - | | 2.0986 | 13150 | 0.0 | - | | 2.1066 | 13200 | 0.0 | - | | 2.1146 | 13250 | 0.0 | - | | 2.1226 | 13300 | 0.0 | - | | 2.1305 | 13350 | 0.0 | - | | 2.1385 | 13400 | 0.0 | - | | 2.1465 | 13450 | 0.0 | - | | 2.1545 | 13500 | 0.0 | - | | 2.1625 | 13550 | 0.005 | - | | 2.1704 | 13600 | 0.0 | - | | 2.1784 | 13650 | 0.0 | - | | 2.1864 | 13700 | 0.0 | - | | 2.1944 | 13750 | 0.0 | - | | 2.2024 | 13800 | 0.0 | - | | 2.2103 | 13850 | 0.0 | - | | 2.2183 | 13900 | 0.0 | - | | 2.2263 | 13950 | 0.0 | - | | 2.2343 | 14000 | 0.0 | - | | 2.2423 | 14050 | 0.0 | - | | 2.2502 | 14100 | 0.0 | - | | 2.2582 | 14150 | 0.0 | - | | 2.2662 | 14200 | 0.0 | - | | 2.2742 | 14250 | 0.0 | - | | 2.2822 | 14300 | 0.0 | - | | 2.2901 | 14350 | 0.0005 | - | | 2.2981 | 14400 | 0.0 | - | | 2.3061 | 14450 | 0.0001 | - | | 2.3141 | 14500 | 0.0 | - | | 2.3221 | 14550 | 0.0 | - | | 2.3300 | 14600 | 0.0 | - | | 2.3380 | 14650 | 0.0012 | - | | 2.3460 | 14700 | 0.0 | - | | 2.3540 | 14750 | 0.0 | - | | 2.3620 | 14800 | 0.0 | - | | 2.3699 | 14850 | 0.0 | - | | 2.3779 | 14900 | 0.0 | - | | 2.3859 | 14950 | 0.0 | - | | 2.3939 | 15000 | 0.0 | - | | 2.4019 | 15050 | 0.0 | - | | 2.4098 | 15100 | 0.0 | - | | 2.4178 | 15150 | 0.0 | - | | 2.4258 | 15200 | 0.0 | - | | 2.4338 | 15250 | 0.0017 | - | | 2.4417 | 15300 | 0.0 | - | | 2.4497 | 15350 | 0.0 | - | | 2.4577 | 15400 | 0.0 | - | | 2.4657 | 15450 | 0.0 | - | | 2.4737 | 15500 | 0.0 | - | | 2.4816 | 15550 | 0.0 | - | | 2.4896 | 15600 | 0.0 | - | | 2.4976 | 15650 | 0.0 | - | | 2.5056 | 15700 | 0.0 | - | | 2.5136 | 15750 | 0.0 | - | | 2.5215 | 15800 | 0.0002 | - | | 2.5295 | 15850 | 0.0 | - | | 2.5375 | 15900 | 0.0 | - | | 2.5455 | 15950 | 0.0 | - | | 2.5535 | 16000 | 0.0 | - | | 2.5614 | 16050 | 0.0 | - | | 2.5694 | 16100 | 0.0 | - | | 2.5774 | 16150 | 0.0 | - | | 2.5854 | 16200 | 0.0 | - | | 2.5934 | 16250 | 0.0 | - | | 2.6013 | 16300 | 0.0 | - | | 2.6093 | 16350 | 0.0 | - | | 2.6173 | 16400 | 0.0 | - | | 2.6253 | 16450 | 0.0 | - | | 2.6333 | 16500 | 0.0 | - | | 2.6412 | 16550 | 0.0 | - | | 2.6492 | 16600 | 0.0 | - | | 2.6572 | 16650 | 0.0 | - | | 2.6652 | 16700 | 0.0 | - | | 2.6732 | 16750 | 0.0 | - | | 2.6811 | 16800 | 0.0 | - | | 2.6891 | 16850 | 0.0 | - | | 2.6971 | 16900 | 0.0 | - | | 2.7051 | 16950 | 0.0 | - | | 2.7131 | 17000 | 0.0 | - | | 2.7210 | 17050 | 0.0 | - | | 2.7290 | 17100 | 0.0 | - | | 2.7370 | 17150 | 0.0 | - | | 2.7450 | 17200 | 0.0 | - | | 2.7530 | 17250 | 0.0 | - | | 2.7609 | 17300 | 0.0 | - | | 2.7689 | 17350 | 0.0 | - | | 2.7769 | 17400 | 0.0 | - | | 2.7849 | 17450 | 0.0 | - | | 2.7929 | 17500 | 0.0 | - | | 2.8008 | 17550 | 0.0 | - | | 2.8088 | 17600 | 0.0 | - | | 2.8168 | 17650 | 0.0 | - | | 2.8248 | 17700 | 0.0 | - | | 2.8327 | 17750 | 0.0001 | - | | 2.8407 | 17800 | 0.0 | - | | 2.8487 | 17850 | 0.0 | - | | 2.8567 | 17900 | 0.0 | - | | 2.8647 | 17950 | 0.0 | - | | 2.8726 | 18000 | 0.0623 | - | | 2.8806 | 18050 | 0.0 | - | | 2.8886 | 18100 | 0.0 | - | | 2.8966 | 18150 | 0.0 | - | | 2.9046 | 18200 | 0.0 | - | | 2.9125 | 18250 | 0.0 | 
- | | 2.9205 | 18300 | 0.0 | - | | 2.9285 | 18350 | 0.0 | - | | 2.9365 | 18400 | 0.0 | - | | 2.9445 | 18450 | 0.0 | - | | 2.9524 | 18500 | 0.0 | - | | 2.9604 | 18550 | 0.0 | - | | 2.9684 | 18600 | 0.0 | - | | 2.9764 | 18650 | 0.0 | - | | 2.9844 | 18700 | 0.0 | - | | 2.9923 | 18750 | 0.0 | - | | 3.0 | 18798 | - | 0.2418 | | 3.0003 | 18800 | 0.0 | - | | 3.0083 | 18850 | 0.0 | - | | 3.0163 | 18900 | 0.0 | - | | 3.0243 | 18950 | 0.0 | - | | 3.0322 | 19000 | 0.0 | - | | 3.0402 | 19050 | 0.0 | - | | 3.0482 | 19100 | 0.0 | - | | 3.0562 | 19150 | 0.0 | - | | 3.0642 | 19200 | 0.0 | - | | 3.0721 | 19250 | 0.0 | - | | 3.0801 | 19300 | 0.0 | - | | 3.0881 | 19350 | 0.0 | - | | 3.0961 | 19400 | 0.0 | - | | 3.1041 | 19450 | 0.0 | - | | 3.1120 | 19500 | 0.0 | - | | 3.1200 | 19550 | 0.0 | - | | 3.1280 | 19600 | 0.0 | - | | 3.1360 | 19650 | 0.0 | - | | 3.1440 | 19700 | 0.0 | - | | 3.1519 | 19750 | 0.0 | - | | 3.1599 | 19800 | 0.0 | - | | 3.1679 | 19850 | 0.0 | - | | 3.1759 | 19900 | 0.0 | - | | 3.1838 | 19950 | 0.0 | - | | 3.1918 | 20000 | 0.0 | - | | 3.1998 | 20050 | 0.0 | - | | 3.2078 | 20100 | 0.0 | - | | 3.2158 | 20150 | 0.0 | - | | 3.2237 | 20200 | 0.0 | - | | 3.2317 | 20250 | 0.0 | - | | 3.2397 | 20300 | 0.0448 | - | | 3.2477 | 20350 | 0.0 | - | | 3.2557 | 20400 | 0.0 | - | | 3.2636 | 20450 | 0.0 | - | | 3.2716 | 20500 | 0.0001 | - | | 3.2796 | 20550 | 0.0102 | - | | 3.2876 | 20600 | 0.0 | - | | 3.2956 | 20650 | 0.0 | - | | 3.3035 | 20700 | 0.0 | - | | 3.3115 | 20750 | 0.0 | - | | 3.3195 | 20800 | 0.0 | - | | 3.3275 | 20850 | 0.0 | - | | 3.3355 | 20900 | 0.0 | - | | 3.3434 | 20950 | 0.0 | - | | 3.3514 | 21000 | 0.0 | - | | 3.3594 | 21050 | 0.0 | - | | 3.3674 | 21100 | 0.0 | - | | 3.3754 | 21150 | 0.0 | - | | 3.3833 | 21200 | 0.0 | - | | 3.3913 | 21250 | 0.0 | - | | 3.3993 | 21300 | 0.0 | - | | 3.4073 | 21350 | 0.0 | - | | 3.4153 | 21400 | 0.0 | - | | 3.4232 | 21450 | 0.0 | - | | 3.4312 | 21500 | 0.0 | - | | 3.4392 | 21550 | 0.0 | - | | 3.4472 | 21600 | 0.0 | - | | 3.4552 | 21650 | 0.0 | - | | 3.4631 | 21700 | 0.0 | - | | 3.4711 | 21750 | 0.0 | - | | 3.4791 | 21800 | 0.0 | - | | 3.4871 | 21850 | 0.0 | - | | 3.4951 | 21900 | 0.0 | - | | 3.5030 | 21950 | 0.0 | - | | 3.5110 | 22000 | 0.0 | - | | 3.5190 | 22050 | 0.0 | - | | 3.5270 | 22100 | 0.0 | - | | 3.5350 | 22150 | 0.0 | - | | 3.5429 | 22200 | 0.0 | - | | 3.5509 | 22250 | 0.0 | - | | 3.5589 | 22300 | 0.0 | - | | 3.5669 | 22350 | 0.0 | - | | 3.5748 | 22400 | 0.0 | - | | 3.5828 | 22450 | 0.0 | - | | 3.5908 | 22500 | 0.0 | - | | 3.5988 | 22550 | 0.0 | - | | 3.6068 | 22600 | 0.0 | - | | 3.6147 | 22650 | 0.0 | - | | 3.6227 | 22700 | 0.0 | - | | 3.6307 | 22750 | 0.0 | - | | 3.6387 | 22800 | 0.0 | - | | 3.6467 | 22850 | 0.0 | - | | 3.6546 | 22900 | 0.0 | - | | 3.6626 | 22950 | 0.0 | - | | 3.6706 | 23000 | 0.0 | - | | 3.6786 | 23050 | 0.0 | - | | 3.6866 | 23100 | 0.0 | - | | 3.6945 | 23150 | 0.0 | - | | 3.7025 | 23200 | 0.0 | - | | 3.7105 | 23250 | 0.0 | - | | 3.7185 | 23300 | 0.0 | - | | 3.7265 | 23350 | 0.0 | - | | 3.7344 | 23400 | 0.0 | - | | 3.7424 | 23450 | 0.0 | - | | 3.7504 | 23500 | 0.0 | - | | 3.7584 | 23550 | 0.0 | - | | 3.7664 | 23600 | 0.0 | - | | 3.7743 | 23650 | 0.0 | - | | 3.7823 | 23700 | 0.0 | - | | 3.7903 | 23750 | 0.0 | - | | 3.7983 | 23800 | 0.0 | - | | 3.8063 | 23850 | 0.0 | - | | 3.8142 | 23900 | 0.0 | - | | 3.8222 | 23950 | 0.0 | - | | 3.8302 | 24000 | 0.0 | - | | 3.8382 | 24050 | 0.0 | - | | 3.8462 | 24100 | 0.0 | - | | 3.8541 | 24150 | 0.0 | - | | 3.8621 | 24200 | 0.0 | - | | 3.8701 | 24250 | 0.0 | - | | 3.8781 | 24300 | 0.0 | - | | 
3.8861 | 24350 | 0.0 | - | | 3.8940 | 24400 | 0.0 | - | | 3.9020 | 24450 | 0.0 | - | | 3.9100 | 24500 | 0.0 | - | | 3.9180 | 24550 | 0.0 | - | | 3.9259 | 24600 | 0.0 | - | | 3.9339 | 24650 | 0.0 | - | | 3.9419 | 24700 | 0.0 | - | | 3.9499 | 24750 | 0.0 | - | | 3.9579 | 24800 | 0.0 | - | | 3.9658 | 24850 | 0.0 | - | | 3.9738 | 24900 | 0.0 | - | | 3.9818 | 24950 | 0.0 | - | | 3.9898 | 25000 | 0.0 | - | | 3.9978 | 25050 | 0.0 | - | | 4.0 | 25064 | - | 0.2438 | | 0.0002 | 1 | 0.0 | - | | 0.0080 | 50 | 0.0 | - | | 0.0160 | 100 | 0.0 | - | | 0.0239 | 150 | 0.0 | - | | 0.0319 | 200 | 0.0 | - | | 0.0399 | 250 | 0.0 | - | | 0.0479 | 300 | 0.0 | - | | 0.0559 | 350 | 0.0 | - | | 0.0638 | 400 | 0.0 | - | | 0.0718 | 450 | 0.0 | - | | 0.0798 | 500 | 0.0 | - | | 0.0878 | 550 | 0.0 | - | | 0.0958 | 600 | 0.0 | - | | 0.1037 | 650 | 0.0 | - | | 0.1117 | 700 | 0.0 | - | | 0.1197 | 750 | 0.0 | - | | 0.1277 | 800 | 0.0 | - | | 0.1357 | 850 | 0.0 | - | | 0.1436 | 900 | 0.0 | - | | 0.1516 | 950 | 0.0 | - | | 0.1596 | 1000 | 0.0 | - | | 0.1676 | 1050 | 0.0 | - | | 0.1756 | 1100 | 0.0 | - | | 0.1835 | 1150 | 0.0 | - | | 0.1915 | 1200 | 0.0 | - | | 0.1995 | 1250 | 0.0 | - | | 0.2075 | 1300 | 0.0 | - | | 0.2154 | 1350 | 0.0 | - | | 0.2234 | 1400 | 0.0 | - | | 0.2314 | 1450 | 0.0019 | - | | 0.2394 | 1500 | 0.0 | - | | 0.2474 | 1550 | 0.0 | - | | 0.2553 | 1600 | 0.0 | - | | 0.2633 | 1650 | 0.0 | - | | 0.2713 | 1700 | 0.0 | - | | 0.2793 | 1750 | 0.0 | - | | 0.2873 | 1800 | 0.0 | - | | 0.2952 | 1850 | 0.0 | - | | 0.3032 | 1900 | 0.0 | - | | 0.3112 | 1950 | 0.0 | - | | 0.3192 | 2000 | 0.0 | - | | 0.3272 | 2050 | 0.0 | - | | 0.3351 | 2100 | 0.0 | - | | 0.3431 | 2150 | 0.0001 | - | | 0.3511 | 2200 | 0.0319 | - | | 0.3591 | 2250 | 0.0 | - | | 0.3671 | 2300 | 0.0 | - | | 0.3750 | 2350 | 0.0 | - | | 0.3830 | 2400 | 0.0 | - | | 0.3910 | 2450 | 0.0002 | - | | 0.3990 | 2500 | 0.0 | - | | 0.4070 | 2550 | 0.0 | - | | 0.4149 | 2600 | 0.0 | - | | 0.4229 | 2650 | 0.0 | - | | 0.4309 | 2700 | 0.0 | - | | 0.4389 | 2750 | 0.0001 | - | | 0.4469 | 2800 | 0.0 | - | | 0.4548 | 2850 | 0.0 | - | | 0.4628 | 2900 | 0.0 | - | | 0.4708 | 2950 | 0.0 | - | | 0.4788 | 3000 | 0.0 | - | | 0.4868 | 3050 | 0.0 | - | | 0.4947 | 3100 | 0.0 | - | | 0.5027 | 3150 | 0.0 | - | | 0.5107 | 3200 | 0.0 | - | | 0.5187 | 3250 | 0.0 | - | | 0.5267 | 3300 | 0.0 | - | | 0.5346 | 3350 | 0.0 | - | | 0.5426 | 3400 | 0.0 | - | | 0.5506 | 3450 | 0.0 | - | | 0.5586 | 3500 | 0.0 | - | | 0.5665 | 3550 | 0.0003 | - | | 0.5745 | 3600 | 0.0 | - | | 0.5825 | 3650 | 0.0 | - | | 0.5905 | 3700 | 0.0 | - | | 0.5985 | 3750 | 0.0 | - | | 0.6064 | 3800 | 0.0 | - | | 0.6144 | 3850 | 0.0 | - | | 0.6224 | 3900 | 0.0 | - | | 0.6304 | 3950 | 0.0 | - | | 0.6384 | 4000 | 0.0 | - | | 0.6463 | 4050 | 0.0 | - | | 0.6543 | 4100 | 0.0 | - | | 0.6623 | 4150 | 0.0 | - | | 0.6703 | 4200 | 0.0 | - | | 0.6783 | 4250 | 0.0 | - | | 0.6862 | 4300 | 0.0 | - | | 0.6942 | 4350 | 0.0 | - | | 0.7022 | 4400 | 0.0 | - | | 0.7102 | 4450 | 0.0 | - | | 0.7182 | 4500 | 0.0 | - | | 0.7261 | 4550 | 0.0 | - | | 0.7341 | 4600 | 0.0 | - | | 0.7421 | 4650 | 0.0 | - | | 0.7501 | 4700 | 0.0 | - | | 0.7581 | 4750 | 0.0 | - | | 0.7660 | 4800 | 0.0 | - | | 0.7740 | 4850 | 0.0602 | - | | 0.7820 | 4900 | 0.0 | - | | 0.7900 | 4950 | 0.0 | - | | 0.7980 | 5000 | 0.0 | - | | 0.8059 | 5050 | 0.0 | - | | 0.8139 | 5100 | 0.0002 | - | | 0.8219 | 5150 | 0.0 | - | | 0.8299 | 5200 | 0.0001 | - | | 0.8379 | 5250 | 0.0 | - | | 0.8458 | 5300 | 0.0 | - | | 0.8538 | 5350 | 0.0 | - | | 0.8618 | 5400 | 0.0 | - | | 0.8698 | 5450 | 0.0 | - | | 0.8778 
| 5500 | 0.0 | - | | 0.8857 | 5550 | 0.0 | - | | 0.8937 | 5600 | 0.0 | - | | 0.9017 | 5650 | 0.0 | - | | 0.9097 | 5700 | 0.0 | - | | 0.9177 | 5750 | 0.0 | - | | 0.9256 | 5800 | 0.0 | - | | 0.9336 | 5850 | 0.0 | - | | 0.9416 | 5900 | 0.0 | - | | 0.9496 | 5950 | 0.0 | - | | 0.9575 | 6000 | 0.0 | - | | 0.9655 | 6050 | 0.0 | - | | 0.9735 | 6100 | 0.0 | - | | 0.9815 | 6150 | 0.0 | - | | 0.9895 | 6200 | 0.0 | - | | 0.9974 | 6250 | 0.0 | - | | **1.0** | **6266** | **-** | **0.2299** | | 1.0054 | 6300 | 0.0 | - | | 1.0134 | 6350 | 0.0 | - | | 1.0214 | 6400 | 0.0 | - | | 1.0294 | 6450 | 0.0 | - | | 1.0373 | 6500 | 0.0 | - | | 1.0453 | 6550 | 0.0 | - | | 1.0533 | 6600 | 0.0 | - | | 1.0613 | 6650 | 0.0 | - | | 1.0693 | 6700 | 0.0 | - | | 1.0772 | 6750 | 0.0 | - | | 1.0852 | 6800 | 0.0 | - | | 1.0932 | 6850 | 0.0 | - | | 1.1012 | 6900 | 0.0 | - | | 1.1092 | 6950 | 0.0 | - | | 1.1171 | 7000 | 0.0 | - | | 1.1251 | 7050 | 0.0 | - | | 1.1331 | 7100 | 0.0604 | - | | 1.1411 | 7150 | 0.0007 | - | | 1.1491 | 7200 | 0.0002 | - | | 1.1570 | 7250 | 0.0 | - | | 1.1650 | 7300 | 0.0 | - | | 1.1730 | 7350 | 0.0 | - | | 1.1810 | 7400 | 0.0 | - | | 1.1890 | 7450 | 0.0 | - | | 1.1969 | 7500 | 0.0395 | - | | 1.2049 | 7550 | 0.0 | - | | 1.2129 | 7600 | 0.0 | - | | 1.2209 | 7650 | 0.0 | - | | 1.2289 | 7700 | 0.0 | - | | 1.2368 | 7750 | 0.0 | - | | 1.2448 | 7800 | 0.0 | - | | 1.2528 | 7850 | 0.0 | - | | 1.2608 | 7900 | 0.0 | - | | 1.2688 | 7950 | 0.0 | - | | 1.2767 | 8000 | 0.0 | - | | 1.2847 | 8050 | 0.0 | - | | 1.2927 | 8100 | 0.0 | - | | 1.3007 | 8150 | 0.0002 | - | | 1.3086 | 8200 | 0.0 | - | | 1.3166 | 8250 | 0.0 | - | | 1.3246 | 8300 | 0.0 | - | | 1.3326 | 8350 | 0.0 | - | | 1.3406 | 8400 | 0.0 | - | | 1.3485 | 8450 | 0.0 | - | | 1.3565 | 8500 | 0.0 | - | | 1.3645 | 8550 | 0.0 | - | | 1.3725 | 8600 | 0.0 | - | | 1.3805 | 8650 | 0.0 | - | | 1.3884 | 8700 | 0.0 | - | | 1.3964 | 8750 | 0.0 | - | | 1.4044 | 8800 | 0.0 | - | | 1.4124 | 8850 | 0.0 | - | | 1.4204 | 8900 | 0.0 | - | | 1.4283 | 8950 | 0.0 | - | | 1.4363 | 9000 | 0.0 | - | | 1.4443 | 9050 | 0.0 | - | | 1.4523 | 9100 | 0.0 | - | | 1.4603 | 9150 | 0.0 | - | | 1.4682 | 9200 | 0.0 | - | | 1.4762 | 9250 | 0.0 | - | | 1.4842 | 9300 | 0.0093 | - | | 1.4922 | 9350 | 0.0 | - | | 1.5002 | 9400 | 0.0 | - | | 1.5081 | 9450 | 0.0 | - | | 1.5161 | 9500 | 0.0 | - | | 1.5241 | 9550 | 0.0 | - | | 1.5321 | 9600 | 0.0 | - | | 1.5401 | 9650 | 0.0 | - | | 1.5480 | 9700 | 0.0 | - | | 1.5560 | 9750 | 0.0 | - | | 1.5640 | 9800 | 0.0 | - | | 1.5720 | 9850 | 0.0 | - | | 1.5800 | 9900 | 0.0 | - | | 1.5879 | 9950 | 0.0 | - | | 1.5959 | 10000 | 0.0 | - | | 1.6039 | 10050 | 0.0 | - | | 1.6119 | 10100 | 0.0 | - | | 1.6199 | 10150 | 0.0 | - | | 1.6278 | 10200 | 0.0001 | - | | 1.6358 | 10250 | 0.0 | - | | 1.6438 | 10300 | 0.0 | - | | 1.6518 | 10350 | 0.0 | - | | 1.6598 | 10400 | 0.0 | - | | 1.6677 | 10450 | 0.0 | - | | 1.6757 | 10500 | 0.0 | - | | 1.6837 | 10550 | 0.0 | - | | 1.6917 | 10600 | 0.0 | - | | 1.6996 | 10650 | 0.0 | - | | 1.7076 | 10700 | 0.0 | - | | 1.7156 | 10750 | 0.0 | - | | 1.7236 | 10800 | 0.0 | - | | 1.7316 | 10850 | 0.0 | - | | 1.7395 | 10900 | 0.0 | - | | 1.7475 | 10950 | 0.0 | - | | 1.7555 | 11000 | 0.0 | - | | 1.7635 | 11050 | 0.0 | - | | 1.7715 | 11100 | 0.0 | - | | 1.7794 | 11150 | 0.0 | - | | 1.7874 | 11200 | 0.0 | - | | 1.7954 | 11250 | 0.0289 | - | | 1.8034 | 11300 | 0.0 | - | | 1.8114 | 11350 | 0.0 | - | | 1.8193 | 11400 | 0.0 | - | | 1.8273 | 11450 | 0.0 | - | | 1.8353 | 11500 | 0.0 | - | | 1.8433 | 11550 | 0.0 | - | | 1.8513 | 11600 | 0.0 | - | | 1.8592 | 11650 | 
0.0 | - | | 1.8672 | 11700 | 0.0 | - | | 1.8752 | 11750 | 0.0 | - | | 1.8832 | 11800 | 0.0 | - | | 1.8912 | 11850 | 0.0 | - | | 1.8991 | 11900 | 0.0 | - | | 1.9071 | 11950 | 0.0 | - | | 1.9151 | 12000 | 0.0 | - | | 1.9231 | 12050 | 0.0 | - | | 1.9311 | 12100 | 0.0 | - | | 1.9390 | 12150 | 0.0 | - | | 1.9470 | 12200 | 0.0 | - | | 1.9550 | 12250 | 0.0 | - | | 1.9630 | 12300 | 0.0 | - | | 1.9710 | 12350 | 0.0 | - | | 1.9789 | 12400 | 0.0 | - | | 1.9869 | 12450 | 0.0 | - | | 1.9949 | 12500 | 0.0 | - | | 2.0 | 12532 | - | 0.2718 | | 2.0029 | 12550 | 0.0 | - | | 2.0109 | 12600 | 0.0 | - | | 2.0188 | 12650 | 0.0 | - | | 2.0268 | 12700 | 0.0 | - | | 2.0348 | 12750 | 0.0 | - | | 2.0428 | 12800 | 0.0 | - | | 2.0508 | 12850 | 0.0 | - | | 2.0587 | 12900 | 0.0 | - | | 2.0667 | 12950 | 0.0 | - | | 2.0747 | 13000 | 0.0 | - | | 2.0827 | 13050 | 0.0 | - | | 2.0906 | 13100 | 0.0 | - | | 2.0986 | 13150 | 0.0 | - | | 2.1066 | 13200 | 0.0 | - | | 2.1146 | 13250 | 0.0 | - | | 2.1226 | 13300 | 0.0 | - | | 2.1305 | 13350 | 0.0 | - | | 2.1385 | 13400 | 0.0 | - | | 2.1465 | 13450 | 0.0 | - | | 2.1545 | 13500 | 0.0 | - | | 2.1625 | 13550 | 0.0037 | - | | 2.1704 | 13600 | 0.0 | - | | 2.1784 | 13650 | 0.0 | - | | 2.1864 | 13700 | 0.0 | - | | 2.1944 | 13750 | 0.0 | - | | 2.2024 | 13800 | 0.0 | - | | 2.2103 | 13850 | 0.0 | - | | 2.2183 | 13900 | 0.0 | - | | 2.2263 | 13950 | 0.0 | - | | 2.2343 | 14000 | 0.0 | - | | 2.2423 | 14050 | 0.0 | - | | 2.2502 | 14100 | 0.0 | - | | 2.2582 | 14150 | 0.0 | - | | 2.2662 | 14200 | 0.0 | - | | 2.2742 | 14250 | 0.0 | - | | 2.2822 | 14300 | 0.0 | - | | 2.2901 | 14350 | 0.0009 | - | | 2.2981 | 14400 | 0.0 | - | | 2.3061 | 14450 | 0.0 | - | | 2.3141 | 14500 | 0.0 | - | | 2.3221 | 14550 | 0.0 | - | | 2.3300 | 14600 | 0.0 | - | | 2.3380 | 14650 | 0.0028 | - | | 2.3460 | 14700 | 0.0 | - | | 2.3540 | 14750 | 0.0 | - | | 2.3620 | 14800 | 0.0 | - | | 2.3699 | 14850 | 0.0 | - | | 2.3779 | 14900 | 0.0 | - | | 2.3859 | 14950 | 0.0 | - | | 2.3939 | 15000 | 0.0 | - | | 2.4019 | 15050 | 0.0 | - | | 2.4098 | 15100 | 0.0 | - | | 2.4178 | 15150 | 0.0 | - | | 2.4258 | 15200 | 0.0 | - | | 2.4338 | 15250 | 0.0022 | - | | 2.4417 | 15300 | 0.0 | - | | 2.4497 | 15350 | 0.0 | - | | 2.4577 | 15400 | 0.0 | - | | 2.4657 | 15450 | 0.0 | - | | 2.4737 | 15500 | 0.0 | - | | 2.4816 | 15550 | 0.0 | - | | 2.4896 | 15600 | 0.0 | - | | 2.4976 | 15650 | 0.0 | - | | 2.5056 | 15700 | 0.0 | - | | 2.5136 | 15750 | 0.0 | - | | 2.5215 | 15800 | 0.0001 | - | | 2.5295 | 15850 | 0.0 | - | | 2.5375 | 15900 | 0.0 | - | | 2.5455 | 15950 | 0.0 | - | | 2.5535 | 16000 | 0.0 | - | | 2.5614 | 16050 | 0.0 | - | | 2.5694 | 16100 | 0.0 | - | | 2.5774 | 16150 | 0.0 | - | | 2.5854 | 16200 | 0.0 | - | | 2.5934 | 16250 | 0.0 | - | | 2.6013 | 16300 | 0.0 | - | | 2.6093 | 16350 | 0.0 | - | | 2.6173 | 16400 | 0.0 | - | | 2.6253 | 16450 | 0.0 | - | | 2.6333 | 16500 | 0.0 | - | | 2.6412 | 16550 | 0.0 | - | | 2.6492 | 16600 | 0.0 | - | | 2.6572 | 16650 | 0.0 | - | | 2.6652 | 16700 | 0.0 | - | | 2.6732 | 16750 | 0.0 | - | | 2.6811 | 16800 | 0.0 | - | | 2.6891 | 16850 | 0.0 | - | | 2.6971 | 16900 | 0.0 | - | | 2.7051 | 16950 | 0.0 | - | | 2.7131 | 17000 | 0.0 | - | | 2.7210 | 17050 | 0.0 | - | | 2.7290 | 17100 | 0.0 | - | | 2.7370 | 17150 | 0.0 | - | | 2.7450 | 17200 | 0.0 | - | | 2.7530 | 17250 | 0.0 | - | | 2.7609 | 17300 | 0.0 | - | | 2.7689 | 17350 | 0.0 | - | | 2.7769 | 17400 | 0.0 | - | | 2.7849 | 17450 | 0.0 | - | | 2.7929 | 17500 | 0.0 | - | | 2.8008 | 17550 | 0.0 | - | | 2.8088 | 17600 | 0.0 | - | | 2.8168 | 17650 | 0.0 | - | | 2.8248 | 17700 | 
0.0 | - | | 2.8327 | 17750 | 0.0 | - | | 2.8407 | 17800 | 0.0 | - | | 2.8487 | 17850 | 0.0 | - | | 2.8567 | 17900 | 0.0 | - | | 2.8647 | 17950 | 0.0 | - | | 2.8726 | 18000 | 0.0624 | - | | 2.8806 | 18050 | 0.0 | - | | 2.8886 | 18100 | 0.0 | - | | 2.8966 | 18150 | 0.0 | - | | 2.9046 | 18200 | 0.0 | - | | 2.9125 | 18250 | 0.0 | - | | 2.9205 | 18300 | 0.0 | - | | 2.9285 | 18350 | 0.0 | - | | 2.9365 | 18400 | 0.0 | - | | 2.9445 | 18450 | 0.0 | - | | 2.9524 | 18500 | 0.0 | - | | 2.9604 | 18550 | 0.0 | - | | 2.9684 | 18600 | 0.0 | - | | 2.9764 | 18650 | 0.0 | - | | 2.9844 | 18700 | 0.0 | - | | 2.9923 | 18750 | 0.0 | - | | 3.0 | 18798 | - | 0.2642 | | 3.0003 | 18800 | 0.0 | - | | 3.0083 | 18850 | 0.0 | - | | 3.0163 | 18900 | 0.0 | - | | 3.0243 | 18950 | 0.0 | - | | 3.0322 | 19000 | 0.0 | - | | 3.0402 | 19050 | 0.0 | - | | 3.0482 | 19100 | 0.0 | - | | 3.0562 | 19150 | 0.0 | - | | 3.0642 | 19200 | 0.0 | - | | 3.0721 | 19250 | 0.0 | - | | 3.0801 | 19300 | 0.0 | - | | 3.0881 | 19350 | 0.0 | - | | 3.0961 | 19400 | 0.0 | - | | 3.1041 | 19450 | 0.0 | - | | 3.1120 | 19500 | 0.0 | - | | 3.1200 | 19550 | 0.0 | - | | 3.1280 | 19600 | 0.0 | - | | 3.1360 | 19650 | 0.0 | - | | 3.1440 | 19700 | 0.0 | - | | 3.1519 | 19750 | 0.0 | - | | 3.1599 | 19800 | 0.0 | - | | 3.1679 | 19850 | 0.0 | - | | 3.1759 | 19900 | 0.0 | - | | 3.1838 | 19950 | 0.0 | - | | 3.1918 | 20000 | 0.0 | - | | 3.1998 | 20050 | 0.0 | - | | 3.2078 | 20100 | 0.0 | - | | 3.2158 | 20150 | 0.0 | - | | 3.2237 | 20200 | 0.0 | - | | 3.2317 | 20250 | 0.0 | - | | 3.2397 | 20300 | 0.0418 | - | | 3.2477 | 20350 | 0.0 | - | | 3.2557 | 20400 | 0.0 | - | | 3.2636 | 20450 | 0.0 | - | | 3.2716 | 20500 | 0.0 | - | | 3.2796 | 20550 | 0.0077 | - | | 3.2876 | 20600 | 0.0 | - | | 3.2956 | 20650 | 0.0 | - | | 3.3035 | 20700 | 0.0 | - | | 3.3115 | 20750 | 0.0 | - | | 3.3195 | 20800 | 0.0 | - | | 3.3275 | 20850 | 0.0 | - | | 3.3355 | 20900 | 0.0 | - | | 3.3434 | 20950 | 0.0 | - | | 3.3514 | 21000 | 0.0 | - | | 3.3594 | 21050 | 0.0 | - | | 3.3674 | 21100 | 0.0 | - | | 3.3754 | 21150 | 0.0 | - | | 3.3833 | 21200 | 0.0 | - | | 3.3913 | 21250 | 0.0 | - | | 3.3993 | 21300 | 0.0 | - | | 3.4073 | 21350 | 0.0 | - | | 3.4153 | 21400 | 0.0 | - | | 3.4232 | 21450 | 0.0 | - | | 3.4312 | 21500 | 0.0 | - | | 3.4392 | 21550 | 0.0 | - | | 3.4472 | 21600 | 0.0 | - | | 3.4552 | 21650 | 0.0 | - | | 3.4631 | 21700 | 0.0 | - | | 3.4711 | 21750 | 0.0 | - | | 3.4791 | 21800 | 0.0 | - | | 3.4871 | 21850 | 0.0 | - | | 3.4951 | 21900 | 0.0 | - | | 3.5030 | 21950 | 0.0 | - | | 3.5110 | 22000 | 0.0 | - | | 3.5190 | 22050 | 0.0 | - | | 3.5270 | 22100 | 0.0 | - | | 3.5350 | 22150 | 0.0 | - | | 3.5429 | 22200 | 0.0 | - | | 3.5509 | 22250 | 0.0 | - | | 3.5589 | 22300 | 0.0 | - | | 3.5669 | 22350 | 0.0 | - | | 3.5748 | 22400 | 0.0 | - | | 3.5828 | 22450 | 0.0 | - | | 3.5908 | 22500 | 0.0 | - | | 3.5988 | 22550 | 0.0 | - | | 3.6068 | 22600 | 0.0 | - | | 3.6147 | 22650 | 0.0 | - | | 3.6227 | 22700 | 0.0 | - | | 3.6307 | 22750 | 0.0 | - | | 3.6387 | 22800 | 0.0 | - | | 3.6467 | 22850 | 0.0 | - | | 3.6546 | 22900 | 0.0 | - | | 3.6626 | 22950 | 0.0 | - | | 3.6706 | 23000 | 0.0 | - | | 3.6786 | 23050 | 0.0 | - | | 3.6866 | 23100 | 0.0 | - | | 3.6945 | 23150 | 0.0 | - | | 3.7025 | 23200 | 0.0 | - | | 3.7105 | 23250 | 0.0 | - | | 3.7185 | 23300 | 0.0 | - | | 3.7265 | 23350 | 0.0 | - | | 3.7344 | 23400 | 0.0 | - | | 3.7424 | 23450 | 0.0 | - | | 3.7504 | 23500 | 0.0 | - | | 3.7584 | 23550 | 0.0 | - | | 3.7664 | 23600 | 0.0 | - | | 3.7743 | 23650 | 0.0 | - | | 3.7823 | 23700 | 0.0 | - | | 3.7903 | 23750 | 0.0 | - 
| | 3.7983 | 23800 | 0.0 | - | | 3.8063 | 23850 | 0.0 | - | | 3.8142 | 23900 | 0.0 | - | | 3.8222 | 23950 | 0.0 | - | | 3.8302 | 24000 | 0.0 | - | | 3.8382 | 24050 | 0.0 | - | | 3.8462 | 24100 | 0.0 | - | | 3.8541 | 24150 | 0.0 | - | | 3.8621 | 24200 | 0.0 | - | | 3.8701 | 24250 | 0.0 | - | | 3.8781 | 24300 | 0.0 | - | | 3.8861 | 24350 | 0.0 | - | | 3.8940 | 24400 | 0.0 | - | | 3.9020 | 24450 | 0.0 | - | | 3.9100 | 24500 | 0.0 | - | | 3.9180 | 24550 | 0.0 | - | | 3.9259 | 24600 | 0.0 | - | | 3.9339 | 24650 | 0.0 | - | | 3.9419 | 24700 | 0.0 | - | | 3.9499 | 24750 | 0.0 | - | | 3.9579 | 24800 | 0.0 | - | | 3.9658 | 24850 | 0.0 | - | | 3.9738 | 24900 | 0.0 | - | | 3.9818 | 24950 | 0.0 | - | | 3.9898 | 25000 | 0.0 | - | | 3.9978 | 25050 | 0.0 | - | | 4.0 | 25064 | - | 0.2557 | * The bold row denotes the saved checkpoint. ### Framework Versions - Python: 3.10.12 - SetFit: 1.0.3 - Sentence Transformers: 2.7.0 - Transformers: 4.40.2 - PyTorch: 2.2.1+cu121 - Datasets: 2.19.1 - Tokenizers: 0.19.1 ## Citation ### BibTeX ```bibtex @article{https://doi.org/10.48550/arxiv.2209.11055, doi = {10.48550/ARXIV.2209.11055}, url = {https://arxiv.org/abs/2209.11055}, author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren}, keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences}, title = {Efficient Few-Shot Learning Without Prompts}, publisher = {arXiv}, year = {2022}, copyright = {Creative Commons Attribution 4.0 International} } ``` <!-- ## Glossary *Clearly define terms in order to be accessible across audiences.* --> <!-- ## Model Card Authors *Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.* --> <!-- ## Model Card Contact *Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.* -->
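For anyone trying to reproduce a comparable run, the hyperparameters listed above map one-to-one onto SetFit 1.0-style `TrainingArguments`. The following is a minimal sketch, not the original training script: the base sentence-transformer checkpoint and the toy dataset are placeholder assumptions, since the card names neither.

```python
from datasets import Dataset
from setfit import SetFitModel, Trainer, TrainingArguments

# Placeholder data standing in for the unpublished train/eval splits.
train_ds = Dataset.from_dict({"text": ["ejemplo uno", "ejemplo dos"], "label": [0, 1]})
eval_ds = Dataset.from_dict({"text": ["ejemplo tres"], "label": [0]})

# Assumed multilingual MiniLM body; the actual base model is not stated in the card.
model = SetFitModel.from_pretrained("sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2")

# These values mirror the "Training Hyperparameters" section above.
args = TrainingArguments(
    batch_size=(16, 16),
    num_epochs=(4, 4),
    sampling_strategy="oversampling",
    body_learning_rate=(2e-05, 1e-05),
    head_learning_rate=0.01,
    warmup_proportion=0.1,
    end_to_end=False,
    use_amp=False,
    seed=42,
    load_best_model_at_end=True,
)

trainer = Trainer(
    model=model,
    args=args,
    train_dataset=train_ds,
    eval_dataset=eval_ds,
    metric="accuracy",  # the card reports accuracy as its evaluation metric
)
trainer.train()
print(trainer.evaluate())
```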
AneeshJD/Aneesh
AneeshJD
2024-05-10T22:52:11Z
0
0
null
[ "license:apache-2.0", "region:us" ]
null
2024-05-10T22:52:11Z
--- license: apache-2.0 ---
Nielzac/GPT2_Pirate
Nielzac
2024-05-10T22:49:47Z
3
0
peft
[ "peft", "safetensors", "generated_from_trainer", "base_model:openai-community/gpt2", "base_model:adapter:openai-community/gpt2", "license:mit", "region:us" ]
null
2024-05-10T22:49:43Z
--- license: mit library_name: peft tags: - generated_from_trainer base_model: openai-community/gpt2 model-index: - name: GPT2_Pirate_2024_05_10_20_25_24_lora_weightTrue_loraR32_optim_adamw_torch_epoch2_lr3e-05 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # GPT2_Pirate_2024_05_10_20_25_24_lora_weightTrue_loraR32_optim_adamw_torch_epoch2_lr3e-05 This model is a fine-tuned version of [openai-community/gpt2](https://huggingface.co/openai-community/gpt2) on the None dataset. It achieves the following results on the evaluation set: - Loss: 1.7723 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 3e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:------:|:-----:|:---------------:| | 2.3874 | 0.0673 | 1000 | 1.9785 | | 1.7367 | 0.1346 | 2000 | 1.8709 | | 1.6208 | 0.2020 | 3000 | 1.8520 | | 1.5521 | 0.2693 | 4000 | 1.8361 | | 1.5165 | 0.3366 | 5000 | 1.8298 | | 1.484 | 0.4039 | 6000 | 1.8227 | | 1.4262 | 0.4712 | 7000 | 1.8078 | | 1.4701 | 0.5385 | 8000 | 1.8000 | | 1.4188 | 0.6059 | 9000 | 1.7938 | | 1.403 | 0.6732 | 10000 | 1.7908 | | 1.4115 | 0.7405 | 11000 | 1.7940 | | 1.4153 | 0.8078 | 12000 | 1.7888 | | 1.3903 | 0.8751 | 13000 | 1.7844 | | 1.3918 | 0.9424 | 14000 | 1.7830 | | 1.4018 | 1.0098 | 15000 | 1.7843 | | 1.3579 | 1.0771 | 16000 | 1.7777 | | 1.3803 | 1.1444 | 17000 | 1.7776 | | 1.3545 | 1.2117 | 18000 | 1.7778 | | 1.3557 | 1.2790 | 19000 | 1.7742 | | 1.3739 | 1.3463 | 20000 | 1.7769 | | 1.3538 | 1.4137 | 21000 | 1.7778 | | 1.3761 | 1.4810 | 22000 | 1.7763 | | 1.347 | 1.5483 | 23000 | 1.7735 | | 1.3579 | 1.6156 | 24000 | 1.7729 | | 1.3581 | 1.6829 | 25000 | 1.7734 | | 1.3472 | 1.7503 | 26000 | 1.7726 | | 1.3377 | 1.8176 | 27000 | 1.7752 | | 1.3243 | 1.8849 | 28000 | 1.7732 | | 1.369 | 1.9522 | 29000 | 1.7723 | ### Framework versions - PEFT 0.10.0 - Transformers 4.40.2 - Pytorch 2.3.0+cu121 - Datasets 2.19.1 - Tokenizers 0.19.1
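Because this repository contains a PEFT LoRA adapter rather than full model weights, inference requires attaching the adapter to the GPT-2 base model. A minimal sketch using the standard PEFT loading path (the prompt is illustrative only):

```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the frozen base model, then apply the LoRA adapter from this repo on top.
tokenizer = AutoTokenizer.from_pretrained("openai-community/gpt2")
base = AutoModelForCausalLM.from_pretrained("openai-community/gpt2")
model = PeftModel.from_pretrained(base, "Nielzac/GPT2_Pirate")
model.eval()

inputs = tokenizer("Ahoy matey, tell me about the sea:", return_tensors="pt")
with torch.no_grad():
    out = model.generate(**inputs, max_new_tokens=40, do_sample=True, top_p=0.9)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```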
paulo037/stable-code-instruct-3b-spider-9000-steps
paulo037
2024-05-10T22:46:49Z
127
0
transformers
[ "transformers", "safetensors", "stablelm", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2024-05-10T22:43:20Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
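The card's "How to Get Started" section is empty, but the metadata (stablelm architecture, text-generation pipeline, conversational tag) suggests the standard transformers loading path. A sketch under those assumptions; the chat-template call and the Spider text-to-SQL framing are inferred from the tags and the repo name, not documented by the author:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "paulo037/stable-code-instruct-3b-spider-9000-steps"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(repo)

# Assumed chat-style prompt; adjust if the repo defines its own format.
messages = [{"role": "user", "content": "Schema: users(id, name). Write a SQL query that counts all users."}]
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)

inputs = tokenizer(prompt, return_tensors="pt")
out = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(out[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```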
mudogruer/Gemma-7b-MedMCQA
mudogruer
2024-05-10T22:41:14Z
3
0
peft
[ "peft", "safetensors", "arxiv:1910.09700", "base_model:google/gemma-7b-it", "base_model:adapter:google/gemma-7b-it", "region:us" ]
null
2024-05-10T22:40:07Z
--- library_name: peft base_model: google/gemma-7b-it --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ### Framework versions - PEFT 0.8.2
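A minimal quick-start sketch for this adapter, assuming the standard PEFT loading pattern: the base model is `google/gemma-7b-it` (from the card metadata) and the adapter is this repository; the dtype, device placement, and prompt are illustrative, not documented by the author.

```python
# Sketch: load the PEFT adapter on top of its base model (assumed pattern).
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "google/gemma-7b-it"             # base model named in the card metadata
adapter_id = "mudogruer/Gemma-7b-MedMCQA"  # this repository

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(
    base_id, torch_dtype=torch.bfloat16, device_map="auto"
)
model = PeftModel.from_pretrained(base, adapter_id)

prompt = "Which vitamin deficiency causes scurvy?"  # placeholder MedMCQA-style question
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```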
mudogruer/Mixtral-7x8B-MedMCQA
mudogruer
2024-05-10T22:39:29Z
2
0
peft
[ "peft", "safetensors", "arxiv:1910.09700", "base_model:mistralai/Mixtral-8x7B-Instruct-v0.1", "base_model:adapter:mistralai/Mixtral-8x7B-Instruct-v0.1", "region:us" ]
null
2024-05-10T22:39:00Z
--- library_name: peft base_model: mistralai/Mixtral-8x7B-Instruct-v0.1 --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ### Framework versions - PEFT 0.8.2
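The same adapter-loading pattern applies here, but the Mixtral-8x7B base is far larger, so a quantized load is the practical route. A minimal sketch, assuming 4-bit loading via bitsandbytes (the quantization settings are illustrative, not part of the card):

```python
# Sketch: attach the adapter to a 4-bit quantized Mixtral base (assumed settings).
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

base_id = "mistralai/Mixtral-8x7B-Instruct-v0.1"  # base model named in the card metadata
adapter_id = "mudogruer/Mixtral-7x8B-MedMCQA"     # this repository

bnb = BitsAndBytesConfig(load_in_4bit=True, bnb_4bit_compute_dtype=torch.bfloat16)
tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(
    base_id, quantization_config=bnb, device_map="auto"
)
model = PeftModel.from_pretrained(base, adapter_id)
```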
Xerror/Mimi-Chatbot
Xerror
2024-05-10T22:38:38Z
0
0
transformers
[ "transformers", "safetensors", "text-generation-inference", "unsloth", "llama", "trl", "en", "base_model:meta-llama/Meta-Llama-3-8B", "base_model:finetune:meta-llama/Meta-Llama-3-8B", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2024-05-10T22:37:57Z
--- language: - en license: apache-2.0 tags: - text-generation-inference - transformers - unsloth - llama - trl base_model: meta-llama/Meta-Llama-3-8B --- # Uploaded model - **Developed by:** Xerror - **License:** apache-2.0 - **Finetuned from model:** meta-llama/Meta-Llama-3-8B This Llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
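A minimal inference sketch for the uploaded weights, assuming the usual Unsloth loading pattern; the sequence length, 4-bit loading, and prompt are illustrative assumptions:

```python
# Sketch: load the model with Unsloth for fast inference (assumed settings).
from unsloth import FastLanguageModel

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="Xerror/Mimi-Chatbot",
    max_seq_length=2048,  # assumption: match the context length used in training
    load_in_4bit=True,    # assumption: fits on smaller GPUs
)
FastLanguageModel.for_inference(model)  # switch to Unsloth's faster generation path

inputs = tokenizer("Hello, who are you?", return_tensors="pt").to("cuda")
out = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```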
RichardErkhov/gradientai_-_Llama-3-8B-Instruct-262k-4bits
RichardErkhov
2024-05-10T22:36:30Z
80
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "arxiv:2309.00071", "arxiv:2402.08268", "arxiv:2305.14233", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "4-bit", "bitsandbytes", "region:us" ]
text-generation
2024-05-10T22:30:27Z
Quantization made by Richard Erkhov. [Github](https://github.com/RichardErkhov) [Discord](https://discord.gg/pvy7H8DZMG) [Request more models](https://github.com/RichardErkhov/quant_request) Llama-3-8B-Instruct-262k - bnb 4bits - Model creator: https://huggingface.co/gradientai/ - Original model: https://huggingface.co/gradientai/Llama-3-8B-Instruct-262k/ Original model description: --- language: - en pipeline_tag: text-generation tags: - meta - llama-3 license: llama3 --- <img src="https://cdn-uploads.huggingface.co/production/uploads/655bb613e8a8971e89944f3e/TSa3V8YpoVagnTYgxiLaO.png" width="200"/> # Llama-3 8B Gradient Instruct 262k Join our custom agent and long context (262k-1M+) waitlist: https://forms.gle/L6TDY7dozx8TuoUv7 Gradient incorporates your data to deploy autonomous assistants that power critical operations across your business. To learn more or collaborate on a custom model, drop us a message at contact@gradient.ai. [Join our Discord](https://discord.com/invite/2QVy2qt2mf) This model extends Llama-3 8B's context length from 8k to > 160K, developed by Gradient, sponsored by compute from [Crusoe Energy](https://huggingface.co/crusoeai). It demonstrates that SOTA LLMs can learn to operate on long context with minimal training (< 200M tokens) by appropriately adjusting RoPE theta. **Update (5/3): We further fine-tuned our model to strengthen its assistant-like chat ability as well. The NIAH result is updated.** ![image/png](https://cdn-uploads.huggingface.co/production/uploads/644fac0ce1d7a97f3b653ab1/s9T8L-6Jh5fYH6Q_88r3g.png) **Approach:** - [meta-llama/Meta-Llama-3-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct) as the base - NTK-aware interpolation [1] to initialize an optimal schedule for RoPE theta, followed by a new data-driven RoPE theta optimization technique - Progressive training on increasing context lengths, similar to the [Large World Model](https://huggingface.co/LargeWorldModel) [2] (see details below) **Infra:** We build on top of the EasyContext Blockwise RingAttention library [3] to scalably and efficiently train on contexts up to 262144 tokens on [Crusoe Energy](https://huggingface.co/crusoeai)'s high-performance L40S cluster. **Quantized versions and GGUF** GGUF is available on Crusoe's huggingface account. Check it out here: [crusoeai/Llama-3-8B-Instruct-262k-GGUF](https://huggingface.co/crusoeai/Llama-3-8B-Instruct-262k-GGUF) **Exl2 quantized versions** Exl2 is available on Bullerwins's huggingface account. Check it out here: [8.0bpw exl2](https://huggingface.co/bullerwins/gradientai_Llama-3-8B-Instruct-262k_exl2_8.0bpw) [6.0bpw exl2](https://huggingface.co/bullerwins/gradientai_Llama-3-8B-Instruct-262k_exl2_6.0bpw) [5.0bpw exl2](https://huggingface.co/bullerwins/gradientai_Llama-3-8B-Instruct-262k_exl2_5.0bpw) **Updated Exl2 quants for 5/3 improved weights** [8.0bpw exl2](https://huggingface.co/bullerwins/gradientai_Llama-3-8B-Instruct-262k_v2_exl2_8.0bpw) [6.0bpw exl2](https://huggingface.co/bullerwins/gradientai_Llama-3-8B-Instruct-262k_v2_exl2_6.0bpw) [5.0bpw exl2](https://huggingface.co/bullerwins/gradientai_Llama-3-8B-Instruct-262k_v2_exl2_5.0bpw) **Data:** For training data, we generate long contexts by augmenting [SlimPajama](https://huggingface.co/datasets/cerebras/SlimPajama-627B). We also fine-tune on a chat dataset based on UltraChat [4], following a similar recipe for data augmentation to [2].
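Since the long-context behavior comes from the RoPE theta adjustment described above, one quick sanity check is to read it straight out of the model config. A minimal sketch (the config keys follow standard Llama configs in transformers; the printed theta should match the progressive training table below):

```python
# Sketch: inspect the RoPE theta and context window of the extended model.
from transformers import AutoConfig

cfg = AutoConfig.from_pretrained("gradientai/Llama-3-8B-Instruct-262k")
print(cfg.rope_theta)               # raised well above Llama-3's default of 500000.0
print(cfg.max_position_embeddings)  # the extended 262k context window
```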
**Progressive Training Details:** | Parameter | 65K | 262K | |-----------------------------|----------------|------------| | Initialize From | LLaMA-3-8B-Inst| 65K | | Sequence Length | 2^16 | 2^18 | | RoPE theta | 15.3 M | 207.1 M | | Batch Size (Tokens / Step) | 2.097 M | 4.192 M | | Steps | 30 | 24 | | Total Tokens | 63 M | 101 M | | Learning Rate | 2.00E-05 | 2.00E-05 | | # GPUs | 8 | 32 | | GPU Type | NVIDIA L40S | NVIDIA L40S| **Evaluation Details:** ``` EVAL_MAX_CONTEXT_LENGTH=320200 EVAL_MIN_CONTEXT_LENGTH=100 EVAL_CONTEXT_INTERVAL=16000 EVAL_DEPTH_INTERVAL=0.2 EVAL_NUM_SAMPLES=2 EVAL_RND_NUMBER_DIGITS=8 HAYSTACK: EVAL_GENERATOR_TOKENS=925000 ``` Haystack is "haystack 3", further detailed in this [blog post](https://gradient.ai/blog/the-haystack-matters-for-niah-evals). ## The Gradient AI Team https://gradient.ai/ Gradient is accelerating AI transformation across industries. Our AI Foundry incorporates your data to deploy autonomous assistants that power critical operations across your business. ## Contact Us Drop an email to [contact@gradient.ai](mailto:contact@gradient.ai) ## References [1] Peng, Bowen, et al. "Yarn: Efficient context window extension of large language models." arXiv preprint arXiv:2309.00071 (2023). [2] Liu, Hao, et al. "World Model on Million-Length Video And Language With RingAttention." arXiv preprint arXiv:2402.08268 (2024). [3] https://github.com/jzhang38/EasyContext [4] Ning Ding, Yulin Chen, Bokai Xu, Yujia Qin, Zhi Zheng, Shengding Hu, Zhiyuan Liu, Maosong Sun, and Bowen Zhou. Enhancing chat language models by scaling high-quality instructional conversations. arXiv preprint arXiv:2305.14233, 2023. ---- # Base Model ## Model Details Meta developed and released the Meta Llama 3 family of large language models (LLMs), a collection of pretrained and instruction tuned generative text models in 8 and 70B sizes. The Llama 3 instruction tuned models are optimized for dialogue use cases and outperform many of the available open source chat models on common industry benchmarks. Further, in developing these models, we took great care to optimize helpfulness and safety. **Model developers** Meta **Variations** Llama 3 comes in two sizes — 8B and 70B parameters — in pre-trained and instruction tuned variants. **Input** Models input text only. **Output** Models generate text and code only. **Model Architecture** Llama 3 is an auto-regressive language model that uses an optimized transformer architecture. The tuned versions use supervised fine-tuning (SFT) and reinforcement learning with human feedback (RLHF) to align with human preferences for helpfulness and safety. <table> <tr> <td> </td> <td><strong>Training Data</strong> </td> <td><strong>Params</strong> </td> <td><strong>Context length</strong> </td> <td><strong>GQA</strong> </td> <td><strong>Token count</strong> </td> <td><strong>Knowledge cutoff</strong> </td> </tr> <tr> <td rowspan="2" >Llama 3 </td> <td rowspan="2" >A new mix of publicly available online data. </td> <td>8B </td> <td>8k </td> <td>Yes </td> <td rowspan="2" >15T+ </td> <td>March, 2023 </td> </tr> <tr> <td>70B </td> <td>8k </td> <td>Yes </td> <td>December, 2023 </td> </tr> </table> **Llama 3 family of models**. Token counts refer to pretraining data only. Both the 8 and 70B versions use Grouped-Query Attention (GQA) for improved inference scalability. **Model Release Date** April 18, 2024. **Status** This is a static model trained on an offline dataset. 
Future versions of the tuned models will be released as we improve model safety with community feedback. **License** A custom commercial license is available at: [https://llama.meta.com/llama3/license](https://llama.meta.com/llama3/license) **Where to send questions or comments about the model** Instructions on how to provide feedback or comments on the model can be found in the model [README](https://github.com/meta-llama/llama3). For more technical information about generation parameters and recipes for how to use Llama 3 in applications, please go [here](https://github.com/meta-llama/llama-recipes). ## Intended Use **Intended Use Cases** Llama 3 is intended for commercial and research use in English. Instruction tuned models are intended for assistant-like chat, whereas pretrained models can be adapted for a variety of natural language generation tasks. **Out-of-scope** Use in any manner that violates applicable laws or regulations (including trade compliance laws). Use in any other way that is prohibited by the Acceptable Use Policy and Llama 3 Community License. Use in languages other than English**. **Note: Developers may fine-tune Llama 3 models for languages beyond English provided they comply with the Llama 3 Community License and the Acceptable Use Policy. ## How to use This repository contains two versions of Meta-Llama-3-8B-Instruct, for use with transformers and with the original `llama3` codebase. ### Use with transformers You can run conversational inference using the Transformers pipeline abstraction, or by leveraging the Auto classes with the `generate()` function. Let's see examples of both. #### Transformers pipeline ```python import transformers import torch model_id = "meta-llama/Meta-Llama-3-8B-Instruct" pipeline = transformers.pipeline( "text-generation", model=model_id, model_kwargs={"torch_dtype": torch.bfloat16}, device_map="auto", ) messages = [ {"role": "system", "content": "You are a pirate chatbot who always responds in pirate speak!"}, {"role": "user", "content": "Who are you?"}, ] prompt = pipeline.tokenizer.apply_chat_template( messages, tokenize=False, add_generation_prompt=True ) terminators = [ pipeline.tokenizer.eos_token_id, pipeline.tokenizer.convert_tokens_to_ids("<|eot_id|>") ] outputs = pipeline( prompt, max_new_tokens=256, eos_token_id=terminators, do_sample=True, temperature=0.6, top_p=0.9, ) print(outputs[0]["generated_text"][len(prompt):]) ``` #### Transformers AutoModelForCausalLM ```python from transformers import AutoTokenizer, AutoModelForCausalLM import torch model_id = "meta-llama/Meta-Llama-3-8B-Instruct" tokenizer = AutoTokenizer.from_pretrained(model_id) model = AutoModelForCausalLM.from_pretrained( model_id, torch_dtype=torch.bfloat16, device_map="auto", ) messages = [ {"role": "system", "content": "You are a pirate chatbot who always responds in pirate speak!"}, {"role": "user", "content": "Who are you?"}, ] input_ids = tokenizer.apply_chat_template( messages, add_generation_prompt=True, return_tensors="pt" ).to(model.device) terminators = [ tokenizer.eos_token_id, tokenizer.convert_tokens_to_ids("<|eot_id|>") ] outputs = model.generate( input_ids, max_new_tokens=256, eos_token_id=terminators, do_sample=True, temperature=0.6, top_p=0.9, ) response = outputs[0][input_ids.shape[-1]:] print(tokenizer.decode(response, skip_special_tokens=True)) ``` ### Use with `llama3` Please follow the instructions in the [repository](https://github.com/meta-llama/llama3). To download Original checkpoints, see the example command below leveraging
`huggingface-cli`: ``` huggingface-cli download meta-llama/Meta-Llama-3-8B-Instruct --include "original/*" --local-dir Meta-Llama-3-8B-Instruct ``` For Hugging Face support, we recommend using transformers or TGI, but a similar command works. ## Hardware and Software **Training Factors** We used custom training libraries, Meta's Research SuperCluster, and production clusters for pretraining. Fine-tuning, annotation, and evaluation were also performed on third-party cloud compute. **Carbon Footprint** Pretraining utilized a cumulative 7.7M GPU hours of computation on hardware of type H100-80GB (TDP of 700W). Estimated total emissions were 2290 tCO2eq, 100% of which were offset by Meta’s sustainability program. <table> <tr> <td> </td> <td><strong>Time (GPU hours)</strong> </td> <td><strong>Power Consumption (W)</strong> </td> <td><strong>Carbon Emitted(tCO2eq)</strong> </td> </tr> <tr> <td>Llama 3 8B </td> <td>1.3M </td> <td>700 </td> <td>390 </td> </tr> <tr> <td>Llama 3 70B </td> <td>6.4M </td> <td>700 </td> <td>1900 </td> </tr> <tr> <td>Total </td> <td>7.7M </td> <td> </td> <td>2290 </td> </tr> </table> **CO2 emissions during pre-training**. Time: total GPU time required for training each model. Power Consumption: peak power capacity per GPU device for the GPUs used adjusted for power usage efficiency. 100% of the emissions are directly offset by Meta's sustainability program, and because we are openly releasing these models, the pretraining costs do not need to be incurred by others. ## Training Data **Overview** Llama 3 was pretrained on over 15 trillion tokens of data from publicly available sources. The fine-tuning data includes publicly available instruction datasets, as well as over 10M human-annotated examples. Neither the pretraining nor the fine-tuning datasets include Meta user data. **Data Freshness** The pretraining data has a cutoff of March 2023 for the 8B and December 2023 for the 70B models, respectively. ## Benchmarks In this section, we report the results for Llama 3 models on standard automatic benchmarks. For all the evaluations, we use our internal evaluations library. For details on the methodology see [here](https://github.com/meta-llama/llama3/blob/main/eval_methodology.md). 
### Base pretrained models <table> <tr> <td><strong>Category</strong> </td> <td><strong>Benchmark</strong> </td> <td><strong>Llama 3 8B</strong> </td> <td><strong>Llama2 7B</strong> </td> <td><strong>Llama2 13B</strong> </td> <td><strong>Llama 3 70B</strong> </td> <td><strong>Llama2 70B</strong> </td> </tr> <tr> <td rowspan="6" >General </td> <td>MMLU (5-shot) </td> <td>66.6 </td> <td>45.7 </td> <td>53.8 </td> <td>79.5 </td> <td>69.7 </td> </tr> <tr> <td>AGIEval English (3-5 shot) </td> <td>45.9 </td> <td>28.8 </td> <td>38.7 </td> <td>63.0 </td> <td>54.8 </td> </tr> <tr> <td>CommonSenseQA (7-shot) </td> <td>72.6 </td> <td>57.6 </td> <td>67.6 </td> <td>83.8 </td> <td>78.7 </td> </tr> <tr> <td>Winogrande (5-shot) </td> <td>76.1 </td> <td>73.3 </td> <td>75.4 </td> <td>83.1 </td> <td>81.8 </td> </tr> <tr> <td>BIG-Bench Hard (3-shot, CoT) </td> <td>61.1 </td> <td>38.1 </td> <td>47.0 </td> <td>81.3 </td> <td>65.7 </td> </tr> <tr> <td>ARC-Challenge (25-shot) </td> <td>78.6 </td> <td>53.7 </td> <td>67.6 </td> <td>93.0 </td> <td>85.3 </td> </tr> <tr> <td>Knowledge reasoning </td> <td>TriviaQA-Wiki (5-shot) </td> <td>78.5 </td> <td>72.1 </td> <td>79.6 </td> <td>89.7 </td> <td>87.5 </td> </tr> <tr> <td rowspan="4" >Reading comprehension </td> <td>SQuAD (1-shot) </td> <td>76.4 </td> <td>72.2 </td> <td>72.1 </td> <td>85.6 </td> <td>82.6 </td> </tr> <tr> <td>QuAC (1-shot, F1) </td> <td>44.4 </td> <td>39.6 </td> <td>44.9 </td> <td>51.1 </td> <td>49.4 </td> </tr> <tr> <td>BoolQ (0-shot) </td> <td>75.7 </td> <td>65.5 </td> <td>66.9 </td> <td>79.0 </td> <td>73.1 </td> </tr> <tr> <td>DROP (3-shot, F1) </td> <td>58.4 </td> <td>37.9 </td> <td>49.8 </td> <td>79.7 </td> <td>70.2 </td> </tr> </table> ### Instruction tuned models <table> <tr> <td><strong>Benchmark</strong> </td> <td><strong>Llama 3 8B</strong> </td> <td><strong>Llama 2 7B</strong> </td> <td><strong>Llama 2 13B</strong> </td> <td><strong>Llama 3 70B</strong> </td> <td><strong>Llama 2 70B</strong> </td> </tr> <tr> <td>MMLU (5-shot) </td> <td>68.4 </td> <td>34.1 </td> <td>47.8 </td> <td>82.0 </td> <td>52.9 </td> </tr> <tr> <td>GPQA (0-shot) </td> <td>34.2 </td> <td>21.7 </td> <td>22.3 </td> <td>39.5 </td> <td>21.0 </td> </tr> <tr> <td>HumanEval (0-shot) </td> <td>62.2 </td> <td>7.9 </td> <td>14.0 </td> <td>81.7 </td> <td>25.6 </td> </tr> <tr> <td>GSM-8K (8-shot, CoT) </td> <td>79.6 </td> <td>25.7 </td> <td>77.4 </td> <td>93.0 </td> <td>57.5 </td> </tr> <tr> <td>MATH (4-shot, CoT) </td> <td>30.0 </td> <td>3.8 </td> <td>6.7 </td> <td>50.4 </td> <td>11.6 </td> </tr> </table> ### Responsibility & Safety We believe that an open approach to AI leads to better, safer products, faster innovation, and a bigger overall market. We are committed to Responsible AI development and took a series of steps to limit misuse and harm and support the open source community. Foundation models are widely capable technologies that are built to be used for a diverse range of applications. They are not designed to meet every developer preference on safety levels for all use cases, out-of-the-box, as those by their nature will differ across different applications. Rather, responsible LLM-application deployment is achieved by implementing a series of safety best practices throughout the development of such applications, from the model pre-training, fine-tuning and the deployment of systems composed of safeguards to tailor the safety needs specifically to the use case and audience. 
As part of the Llama 3 release, we updated our [Responsible Use Guide](https://llama.meta.com/responsible-use-guide/) to outline the steps and best practices for developers to implement model and system level safety for their application. We also provide a set of resources including [Meta Llama Guard 2](https://llama.meta.com/purple-llama/) and [Code Shield](https://llama.meta.com/purple-llama/) safeguards. These tools have proven to drastically reduce residual risks of LLM systems, while maintaining a high level of helpfulness. We encourage developers to tune and deploy these safeguards according to their needs and we provide a [reference implementation](https://github.com/meta-llama/llama-recipes/tree/main/recipes/responsible_ai) to get you started. #### Llama 3-Instruct As outlined in the Responsible Use Guide, some trade-off between model helpfulness and model alignment is likely unavoidable. Developers should exercise discretion about how to weigh the benefits of alignment and helpfulness for their specific use case and audience. Developers should be mindful of residual risks when using Llama models and leverage additional safety tools as needed to reach the right safety bar for their use case. <span style="text-decoration:underline;">Safety</span> For our instruction tuned model, we conducted extensive red teaming exercises, performed adversarial evaluations and implemented safety mitigation techniques to lower residual risks. As with any Large Language Model, residual risks will likely remain and we recommend that developers assess these risks in the context of their use case. In parallel, we are working with the community to make AI safety benchmark standards transparent, rigorous and interpretable. <span style="text-decoration:underline;">Refusals</span> In addition to residual risks, we put a great emphasis on model refusals to benign prompts. Over-refusing can not only impact the user experience but can even be harmful in certain contexts. We’ve heard the feedback from the developer community and improved our fine-tuning to ensure that Llama 3 is significantly less likely to falsely refuse to answer prompts than Llama 2. We built internal benchmarks and developed mitigations to limit false refusals, making Llama 3 our most helpful model to date. #### Responsible release In addition to responsible use considerations outlined above, we followed a rigorous process that requires us to take extra measures against misuse and critical risks before we make our release decision. **Misuse** If you access or use Llama 3, you agree to the Acceptable Use Policy. The most recent copy of this policy can be found at [https://llama.meta.com/llama3/use-policy/](https://llama.meta.com/llama3/use-policy/). #### Critical risks <span style="text-decoration:underline;">CBRNE</span> (Chemical, Biological, Radiological, Nuclear, and high-yield Explosives) We have conducted a twofold assessment of the safety of the model in this area: * Iterative testing during model training to assess the safety of responses related to CBRNE threats and other adversarial risks. * Involving external CBRNE experts to conduct an uplift test assessing the ability of the model to accurately provide expert knowledge and reduce barriers to potential CBRNE misuse, by reference to what can be achieved using web search (without the model). 
### <span style="text-decoration:underline;">Cyber Security </span> We have evaluated Llama 3 with CyberSecEval, Meta’s cybersecurity safety eval suite, measuring Llama 3’s propensity to suggest insecure code when used as a coding assistant, and Llama 3’s propensity to comply with requests to help carry out cyber attacks, where attacks are defined by the industry standard MITRE ATT&CK cyber attack ontology. On our insecure coding and cyber attacker helpfulness tests, Llama 3 behaved in the same range or safer than models of [equivalent coding capability](https://huggingface.co/spaces/facebook/CyberSecEval). ### <span style="text-decoration:underline;">Child Safety</span> Child Safety risk assessments were conducted using a team of experts to assess the model’s capability to produce outputs that could result in Child Safety risks and inform on any necessary and appropriate risk mitigations via fine-tuning. We leveraged those expert red teaming sessions to expand the coverage of our evaluation benchmarks through Llama 3 model development. For Llama 3, we conducted new in-depth sessions using objective-based methodologies to assess the model risks along multiple attack vectors. We also partnered with content specialists to perform red teaming exercises assessing potentially violating content while taking account of market-specific nuances or experiences. ### Community Generative AI safety requires expertise and tooling, and we believe in the strength of the open community to accelerate its progress. We are active members of open consortiums, including the AI Alliance, Partnership on AI and MLCommons, actively contributing to safety standardization and transparency. We encourage the community to adopt taxonomies like the MLCommons Proof of Concept evaluation to facilitate collaboration and transparency on safety and content evaluations. Our Purple Llama tools are open sourced for the community to use and widely distributed across ecosystem partners including cloud service providers. We encourage community contributions to our [GitHub repository](https://github.com/meta-llama/PurpleLlama). Finally, we put in place a set of resources including an [output reporting mechanism](https://developers.facebook.com/llama_output_feedback) and [bug bounty program](https://www.facebook.com/whitehat) to continuously improve the Llama technology with the help of the community. ## Ethical Considerations and Limitations The core values of Llama 3 are openness, inclusivity and helpfulness. It is meant to serve everyone, and to work for a wide range of use cases. It is thus designed to be accessible to people across many different backgrounds, experiences and perspectives. Llama 3 addresses users and their needs as they are, without inserting unnecessary judgment or normativity, while reflecting the understanding that even content that may appear problematic in some cases can serve valuable purposes in others. It respects the dignity and autonomy of all users, especially in terms of the values of free thought and expression that power innovation and progress. But Llama 3 is a new technology, and like any new technology, there are risks associated with its use. Testing conducted to date has been in English, and has not covered, nor could it cover, all scenarios. For these reasons, as with all LLMs, Llama 3’s potential outputs cannot be predicted in advance, and the model may in some instances produce inaccurate, biased or other objectionable responses to user prompts. 
Therefore, before deploying any applications of Llama 3 models, developers should perform safety testing and tuning tailored to their specific applications of the model. As outlined in the Responsible Use Guide, we recommend incorporating [Purple Llama](https://github.com/facebookresearch/PurpleLlama) solutions into your workflows and specifically [Llama Guard](https://ai.meta.com/research/publications/llama-guard-llm-based-input-output-safeguard-for-human-ai-conversations/) which provides a base model to filter input and output prompts to layer system-level safety on top of model-level safety. Please see the Responsible Use Guide available at [http://llama.meta.com/responsible-use-guide](http://llama.meta.com/responsible-use-guide) ## Citation instructions @article{llama3modelcard, title={Llama 3 Model Card}, author={AI@Meta}, year={2024}, url = {https://github.com/meta-llama/llama3/blob/main/MODEL_CARD.md} } ## Contributors Aaditya Singh; Aaron Grattafiori; Abhimanyu Dubey; Abhinav Jauhri; Abhinav Pandey; Abhishek Kadian; Adam Kelsey; Adi Gangidi; Ahmad Al-Dahle; Ahuva Goldstand; Aiesha Letman; Ajay Menon; Akhil Mathur; Alan Schelten; Alex Vaughan; Amy Yang; Andrei Lupu; Andres Alvarado; Andrew Gallagher; Andrew Gu; Andrew Ho; Andrew Poulton; Andrew Ryan; Angela Fan; Ankit Ramchandani; Anthony Hartshorn; Archi Mitra; Archie Sravankumar; Artem Korenev; Arun Rao; Ashley Gabriel; Ashwin Bharambe; Assaf Eisenman; Aston Zhang; Aurelien Rodriguez; Austen Gregerson; Ava Spataru; Baptiste Roziere; Ben Maurer; Benjamin Leonhardi; Bernie Huang; Bhargavi Paranjape; Bing Liu; Binh Tang; Bobbie Chern; Brani Stojkovic; Brian Fuller; Catalina Mejia Arenas; Chao Zhou; Charlotte Caucheteux; Chaya Nayak; Ching-Hsiang Chu; Chloe Bi; Chris Cai; Chris Cox; Chris Marra; Chris McConnell; Christian Keller; Christoph Feichtenhofer; Christophe Touret; Chunyang Wu; Corinne Wong; Cristian Canton Ferrer; Damien Allonsius; Daniel Kreymer; Daniel Haziza; Daniel Li; Danielle Pintz; Danny Livshits; Danny Wyatt; David Adkins; David Esiobu; David Xu; Davide Testuggine; Delia David; Devi Parikh; Dhruv Choudhary; Dhruv Mahajan; Diana Liskovich; Diego Garcia-Olano; Diego Perino; Dieuwke Hupkes; Dingkang Wang; Dustin Holland; Egor Lakomkin; Elina Lobanova; Xiaoqing Ellen Tan; Emily Dinan; Eric Smith; Erik Brinkman; Esteban Arcaute; Filip Radenovic; Firat Ozgenel; Francesco Caggioni; Frank Seide; Frank Zhang; Gabriel Synnaeve; Gabriella Schwarz; Gabrielle Lee; Gada Badeer; Georgia Anderson; Graeme Nail; Gregoire Mialon; Guan Pang; Guillem Cucurell; Hailey Nguyen; Hannah Korevaar; Hannah Wang; Haroun Habeeb; Harrison Rudolph; Henry Aspegren; Hu Xu; Hugo Touvron; Iga Kozlowska; Igor Molybog; Igor Tufanov; Iliyan Zarov; Imanol Arrieta Ibarra; Irina-Elena Veliche; Isabel Kloumann; Ishan Misra; Ivan Evtimov; Jacob Xu; Jade Copet; Jake Weissman; Jan Geffert; Jana Vranes; Japhet Asher; Jason Park; Jay Mahadeokar; Jean-Baptiste Gaya; Jeet Shah; Jelmer van der Linde; Jennifer Chan; Jenny Hong; Jenya Lee; Jeremy Fu; Jeremy Teboul; Jianfeng Chi; Jianyu Huang; Jie Wang; Jiecao Yu; Joanna Bitton; Joe Spisak; Joelle Pineau; Jon Carvill; Jongsoo Park; Joseph Rocca; Joshua Johnstun; Junteng Jia; Kalyan Vasuden Alwala; Kam Hou U; Kate Plawiak; Kartikeya Upasani; Kaushik Veeraraghavan; Ke Li; Kenneth Heafield; Kevin Stone; Khalid El-Arini; Krithika Iyer; Kshitiz Malik; Kuenley Chiu; Kunal Bhalla; Kyle Huang; Lakshya Garg; Lauren Rantala-Yeary; Laurens van der Maaten; Lawrence Chen; Leandro Silva; Lee Bell; Lei Zhang; Liang Tan; Louis Martin; 
Lovish Madaan; Luca Wehrstedt; Lukas Blecher; Luke de Oliveira; Madeline Muzzi; Madian Khabsa; Manav Avlani; Mannat Singh; Manohar Paluri; Mark Zuckerberg; Marcin Kardas; Martynas Mankus; Mathew Oldham; Mathieu Rita; Matthew Lennie; Maya Pavlova; Meghan Keneally; Melanie Kambadur; Mihir Patel; Mikayel Samvelyan; Mike Clark; Mike Lewis; Min Si; Mitesh Kumar Singh; Mo Metanat; Mona Hassan; Naman Goyal; Narjes Torabi; Nicolas Usunier; Nikolay Bashlykov; Nikolay Bogoychev; Niladri Chatterji; Ning Dong; Oliver Aobo Yang; Olivier Duchenne; Onur Celebi; Parth Parekh; Patrick Alrassy; Paul Saab; Pavan Balaji; Pedro Rittner; Pengchuan Zhang; Pengwei Li; Petar Vasic; Peter Weng; Polina Zvyagina; Prajjwal Bhargava; Pratik Dubal; Praveen Krishnan; Punit Singh Koura; Qing He; Rachel Rodriguez; Ragavan Srinivasan; Rahul Mitra; Ramon Calderer; Raymond Li; Robert Stojnic; Roberta Raileanu; Robin Battey; Rocky Wang; Rohit Girdhar; Rohit Patel; Romain Sauvestre; Ronnie Polidoro; Roshan Sumbaly; Ross Taylor; Ruan Silva; Rui Hou; Rui Wang; Russ Howes; Ruty Rinott; Saghar Hosseini; Sai Jayesh Bondu; Samyak Datta; Sanjay Singh; Sara Chugh; Sargun Dhillon; Satadru Pan; Sean Bell; Sergey Edunov; Shaoliang Nie; Sharan Narang; Sharath Raparthy; Shaun Lindsay; Sheng Feng; Sheng Shen; Shenghao Lin; Shiva Shankar; Shruti Bhosale; Shun Zhang; Simon Vandenhende; Sinong Wang; Seohyun Sonia Kim; Soumya Batra; Sten Sootla; Steve Kehoe; Suchin Gururangan; Sumit Gupta; Sunny Virk; Sydney Borodinsky; Tamar Glaser; Tamar Herman; Tamara Best; Tara Fowler; Thomas Georgiou; Thomas Scialom; Tianhe Li; Todor Mihaylov; Tong Xiao; Ujjwal Karn; Vedanuj Goswami; Vibhor Gupta; Vignesh Ramanathan; Viktor Kerkez; Vinay Satish Kumar; Vincent Gonguet; Vish Vogeti; Vlad Poenaru; Vlad Tiberiu Mihailescu; Vladan Petrovic; Vladimir Ivanov; Wei Li; Weiwei Chu; Wenhan Xiong; Wenyin Fu; Wes Bouaziz; Whitney Meers; Will Constable; Xavier Martinet; Xiaojian Wu; Xinbo Gao; Xinfeng Xie; Xuchao Jia; Yaelle Goldschlag; Yann LeCun; Yashesh Gaur; Yasmine Babaei; Ye Qi; Yenda Li; Yi Wen; Yiwen Song; Youngjin Nam; Yuchen Hao; Yuchen Zhang; Yun Wang; Yuning Mao; Yuzi He; Zacharie Delpierre Coudert; Zachary DeVito; Zahra Hankir; Zhaoduo Wen; Zheng Yan; Zhengxing Chen; Zhenyu Yang; Zoe Papakipos
FilipLarsson/wav2vec2-large-xlsr-53
FilipLarsson
2024-05-10T22:33:57Z
35
0
transformers
[ "transformers", "safetensors", "wav2vec2", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2024-05-03T12:12:29Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
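The card is an empty template, so only generic usage can be inferred from the repo contents (wav2vec2 weights with no documented task head). A minimal sketch for extracting hidden states, with all settings assumed:

```python
# Sketch: extract hidden states from the wav2vec2 checkpoint (assumed usage).
import torch
from transformers import AutoFeatureExtractor, Wav2Vec2Model

model_id = "FilipLarsson/wav2vec2-large-xlsr-53"
extractor = AutoFeatureExtractor.from_pretrained(model_id)
model = Wav2Vec2Model.from_pretrained(model_id)

waveform = torch.zeros(16000)  # placeholder: 1 s of silence at 16 kHz
inputs = extractor(waveform.numpy(), sampling_rate=16000, return_tensors="pt")
with torch.no_grad():
    hidden = model(**inputs).last_hidden_state
print(hidden.shape)  # (batch, frames, hidden_size)
```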
QinLiuNLP/mistral-poe-10p-20th-0.8
QinLiuNLP
2024-05-10T22:32:08Z
0
0
peft
[ "peft", "tensorboard", "safetensors", "mistral", "alignment-handbook", "trl", "sft", "generated_from_trainer", "dataset:HuggingFaceH4/ultrachat_200k", "base_model:mistralai/Mistral-7B-v0.1", "base_model:adapter:mistralai/Mistral-7B-v0.1", "license:apache-2.0", "region:us" ]
null
2024-05-10T17:30:09Z
--- license: apache-2.0 library_name: peft tags: - alignment-handbook - trl - sft - generated_from_trainer base_model: mistralai/Mistral-7B-v0.1 datasets: - HuggingFaceH4/ultrachat_200k model-index: - name: mistral-poe-10p-20th-0.8 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # mistral-poe-10p-20th-0.8 This model is a fine-tuned version of [mistralai/Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1) on the HuggingFaceH4/ultrachat_200k dataset. It achieves the following results on the evaluation set: - Loss: nan ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 8 - eval_batch_size: 1 - seed: 42 - distributed_type: multi-GPU - num_devices: 4 - total_train_batch_size: 32 - total_eval_batch_size: 4 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 1 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 0.8806 | 1.0 | 1949 | nan | ### Framework versions - PEFT 0.7.1 - Transformers 4.39.0.dev0 - Pytorch 2.1.2 - Datasets 2.14.6 - Tokenizers 0.15.2
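For reference, the hyperparameters listed above map directly onto `transformers.TrainingArguments`. A minimal sketch reconstructing just the listed values; the output directory is a placeholder, and the total train batch size of 32 falls out of 8 per device × 4 GPUs:

```python
# Sketch: the card's hyperparameters expressed as TrainingArguments (listed values only).
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="mistral-poe-10p-20th-0.8",  # placeholder
    learning_rate=2e-4,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=1,
    seed=42,
    optim="adamw_torch",       # Adam with the listed betas=(0.9, 0.999), eps=1e-8 defaults
    lr_scheduler_type="cosine",
    warmup_ratio=0.1,
    num_train_epochs=1,
)
```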
cloudyu/Mistral-9B-Instruct
cloudyu
2024-05-10T22:29:12Z
7
0
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "conversational", "license:cc", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-05-10T05:07:36Z
--- license: cc --- This is a 9B Mistral model based on mistralai/Mistral-7B-Instruct-v0.2 and fine-tuned on vicgalle/alpaca-gpt4. Its MMLU score is 61.71, better than the 60.78 of mistralai/Mistral-7B-Instruct-v0.2. Example output: ``` <s> write me a story about peppa pig. Peppa Pig is a lovable little cartoon character who lives in a house with her family, including her brother George and their mommy and daddy. She enjoys playing outside, going on adventures, and spending time with her friends. One sunny morning, Peppa woke up early and couldn't wait to start the day. She put on her favorite red dress and went outside to play in the garden. Her brother George was already there, playing with his toy cars. "Hey, Peppa! Want to race my car with you?" he asked. Peppa smiled and agreed. They lined up their toys and started racing around the garden. Peppa's car went faster than ever before, but she didn't let that stop her from having fun. When they finished racing, they played hide-and-seek together. After playing outside for a while, it was time for lunch. Mommy made them some delicious sandwiches, and they all sat down at the table to eat. After lunch, they watched a movie together on the couch. In the afternoon, Peppa and her family went to visit their friends, the Zebras. They had a great time playing games and eating snacks. When it was time to go home, they said goodbye to their friends and headed back to their own house. That night, after dinner, Peppa and her family cuddled up on the couch to watch TV together. It had been a wonderful day full of love, laughter, and adventure. And as they drifted off to sleep, they knew that tomorrow would be just as special. The end.</s> ```
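A minimal generation sketch reproducing the example above; the sampling settings are illustrative assumptions:

```python
# Sketch: generate with the model via transformers (assumed sampling settings).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "cloudyu/Mistral-9B-Instruct"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

inputs = tokenizer("write me a story about peppa pig.", return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=512, do_sample=True, temperature=0.7)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```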
bunnycore/Maverick-8B-Q5_K_M-GGUF
bunnycore
2024-05-10T22:24:17Z
3
0
null
[ "gguf", "merge", "mergekit", "lazymergekit", "llama-cpp", "gguf-my-repo", "license:llama3", "endpoints_compatible", "region:us", "conversational" ]
null
2024-05-10T22:24:01Z
--- license: llama3 tags: - merge - mergekit - lazymergekit - llama-cpp - gguf-my-repo --- # bunnycore/Maverick-8B-Q5_K_M-GGUF This model was converted to GGUF format from [`bunnycore/Maverick-8B`](https://huggingface.co/bunnycore/Maverick-8B) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space. Refer to the [original model card](https://huggingface.co/bunnycore/Maverick-8B) for more details on the model. ## Use with llama.cpp Install llama.cpp through brew. ```bash brew install ggerganov/ggerganov/llama.cpp ``` Invoke the llama.cpp server or the CLI. CLI: ```bash llama-cli --hf-repo bunnycore/Maverick-8B-Q5_K_M-GGUF --model maverick-8b.Q5_K_M.gguf -p "The meaning to life and the universe is" ``` Server: ```bash llama-server --hf-repo bunnycore/Maverick-8B-Q5_K_M-GGUF --model maverick-8b.Q5_K_M.gguf -c 2048 ``` Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo. ``` git clone https://github.com/ggerganov/llama.cpp && cd llama.cpp && make && ./main -m maverick-8b.Q5_K_M.gguf -n 128 ```
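For programmatic use, the same GGUF file also loads through llama-cpp-python. A minimal sketch; the context size and token budget mirror the CLI examples above, everything else is an assumption:

```python
# Sketch: run the GGUF checkpoint via llama-cpp-python instead of the CLI.
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="bunnycore/Maverick-8B-Q5_K_M-GGUF",
    filename="maverick-8b.Q5_K_M.gguf",
    n_ctx=2048,
)
out = llm("The meaning to life and the universe is", max_tokens=128)
print(out["choices"][0]["text"])
```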
stephenwalker/Llama3-8B-ORPO-1000-3e
stephenwalker
2024-05-10T22:23:11Z
6
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-05-10T22:17:28Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
dbalasub/finetuned-t5-qanta
dbalasub
2024-05-10T22:22:26Z
108
0
transformers
[ "transformers", "safetensors", "t5", "text2text-generation", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text2text-generation
2024-05-10T22:21:51Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
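The repo is tagged `text2text-generation`, so the transformers pipeline API applies directly. A minimal sketch with a placeholder quiz-style input; the prompt format is an assumption, since the card documents none:

```python
# Sketch: query the fine-tuned T5 via the text2text-generation pipeline.
from transformers import pipeline

qa = pipeline("text2text-generation", model="dbalasub/finetuned-t5-qanta")
question = "question: This Danish physicist proposed a model of the atom in 1913."  # placeholder
print(qa(question)[0]["generated_text"])
```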
vyshnavid/emotions_gemma_ia3_im
vyshnavid
2024-05-10T22:19:41Z
0
0
transformers
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2024-05-06T05:01:50Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
erigala9875/gemma_peft_model_IA3_TEST_emotion_detection
erigala9875
2024-05-10T22:19:20Z
0
0
transformers
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2024-05-10T08:10:07Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
OctaSpace/wav2vec2-large-finetuned
OctaSpace
2024-05-10T22:06:39Z
47
0
transformers
[ "transformers", "pytorch", "wav2vec2", "pretraining", "speech pretraining", "african", "fongbe", "swahili", "wolof", "ahmaric", "am", "sw", "wo", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2024-02-01T15:00:52Z
--- license: apache-2.0 language: - am - sw - wo tags: - speech pretraining - african - fongbe - swahili - wolof - wav2vec2 - ahmaric --- # Wav2vec2-large ## Model description This model is a pre-trained instance of the wav2vec 2.0 architecture, specifically focused on processing and understanding four major African languages: Fongbe, Swahili, Amharic, and Wolof. The model leverages unlabelled audio data in these languages to learn rich, language-specific representations before any fine-tuning on downstream tasks. ## Training data The model was pre-trained using a diverse set of audio recordings from the [ALFFA](https://github.com/getalp/ALFFA_PUBLIC) dataset. * Fongbe: A Gbe language, primarily spoken in Benin and parts of Nigeria and Togo. * Swahili: A Bantu language, widely spoken across East Africa including Tanzania, Kenya, Uganda, Rwanda, and Burundi. * Amharic: The official language of Ethiopia, belonging to the Semitic branch of the Afroasiatic language family. * Wolof: Predominantly spoken in Senegal, The Gambia, and Mauritania. ## Model Architecture This model uses the large version of the wav2vec 2.0 architecture developed by Facebook AI, which combines a multi-layer convolutional feature encoder with a Transformer context network to turn raw audio signals into contextual representations. During pre-training, the model learns to predict the correct quantized latent for masked spans of the audio without any labels, following a self-supervised training methodology. ## Usage This model is intended for use in Automatic Speech Recognition (ASR), audio classification, and other audio-related tasks in Fongbe, Swahili, Amharic, and Wolof. To use this model for fine-tuning on a specific task, you can load it via the Hugging Face Transformers library (as a pre-training checkpoint it ships no tokenizer, so load the feature extractor rather than the full processor): ``` from transformers import Wav2Vec2FeatureExtractor, Wav2Vec2Model feature_extractor = Wav2Vec2FeatureExtractor.from_pretrained("OctaSpace/wav2vec2-large-finetuned") model = Wav2Vec2Model.from_pretrained("OctaSpace/wav2vec2-large-finetuned") ``` ## Performance The model's performance was evaluated using a held-out validation set of audio recordings. The effectiveness of the pre-trained representations was measured in terms of how well they could be fine-tuned for specific tasks such as ASR. Note that detailed performance metrics will depend on the specifics of the fine-tuning process and the quality of the labeled data used. ## Limitations The model might exhibit variability in performance across languages due to the varying amounts of training data available for each. Performance may degrade on audio inputs that differ significantly from the recordings seen during training (e.g., telephone-quality audio, noisy environments).
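As a companion to the loading snippet in the card above, here is a minimal sketch of a forward pass that produces the contextual representations used for downstream fine-tuning. It is illustrative, not part of the original card: the silent dummy waveform and the 16 kHz mono input format are assumptions, and it assumes the checkpoint ships a preprocessor config.

```python
# Minimal sketch: extract contextual representations from raw audio.
# Assumes the repo ships a preprocessor config and expects 16 kHz mono input.
import numpy as np
import torch
from transformers import Wav2Vec2FeatureExtractor, Wav2Vec2Model

repo = "OctaSpace/wav2vec2-large-finetuned"
feature_extractor = Wav2Vec2FeatureExtractor.from_pretrained(repo)
model = Wav2Vec2Model.from_pretrained(repo)

waveform = np.zeros(16000, dtype=np.float32)  # 1 s of silence standing in for real speech
inputs = feature_extractor(waveform, sampling_rate=16000, return_tensors="pt")

with torch.no_grad():
    outputs = model(**inputs)

# (batch, frames, hidden_size): one contextual vector per ~20 ms audio frame
print(outputs.last_hidden_state.shape)
```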
silencer107/bobik2-7b
silencer107
2024-05-10T21:57:18Z
7
0
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "merge", "mergekit", "SanjiWatsuki/Silicon-Maid-7B", "senseable/WestLake-7B-v2", "base_model:SanjiWatsuki/Silicon-Maid-7B", "base_model:merge:SanjiWatsuki/Silicon-Maid-7B", "base_model:senseable/WestLake-7B-v2", "base_model:merge:senseable/WestLake-7B-v2", "license:apache-2.0", "model-index", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-05-10T21:55:18Z
--- license: apache-2.0 tags: - merge - mergekit - mistral - SanjiWatsuki/Silicon-Maid-7B - senseable/WestLake-7B-v2 base_model: - SanjiWatsuki/Silicon-Maid-7B - senseable/WestLake-7B-v2 model-index: - name: RolePlayLake-7B results: - task: type: text-generation name: Text Generation dataset: name: AI2 Reasoning Challenge (25-Shot) type: ai2_arc config: ARC-Challenge split: test args: num_few_shot: 25 metrics: - type: acc_norm value: 70.56 name: normalized accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=fhai50032/RolePlayLake-7B name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: HellaSwag (10-Shot) type: hellaswag split: validation args: num_few_shot: 10 metrics: - type: acc_norm value: 87.42 name: normalized accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=fhai50032/RolePlayLake-7B name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: MMLU (5-Shot) type: cais/mmlu config: all split: test args: num_few_shot: 5 metrics: - type: acc value: 64.55 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=fhai50032/RolePlayLake-7B name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: TruthfulQA (0-shot) type: truthful_qa config: multiple_choice split: validation args: num_few_shot: 0 metrics: - type: mc2 value: 64.38 source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=fhai50032/RolePlayLake-7B name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: Winogrande (5-shot) type: winogrande config: winogrande_xl split: validation args: num_few_shot: 5 metrics: - type: acc value: 83.27 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=fhai50032/RolePlayLake-7B name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: GSM8k (5-shot) type: gsm8k config: main split: test args: num_few_shot: 5 metrics: - type: acc value: 65.05 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=fhai50032/RolePlayLake-7B name: Open LLM Leaderboard --- # RolePlayLake-7B RolePlayLake-7B is a merge of the following models: * [SanjiWatsuki/Silicon-Maid-7B](https://huggingface.co/SanjiWatsuki/Silicon-Maid-7B) * [senseable/WestLake-7B-v2](https://huggingface.co/senseable/WestLake-7B-v2) `In my current testing, RolePlayLake is better than Silicon-Maid in RP and more uncensored than WestLake.` `I would try to only merge uncensored models with a bias towards chat rather than instruct.` ## 🧩 Configuration ```yaml slices: - sources: - model: SanjiWatsuki/Silicon-Maid-7B layer_range: [0, 32] - model: senseable/WestLake-7B-v2 layer_range: [0, 32] merge_method: slerp base_model: senseable/WestLake-7B-v2 parameters: t: - filter: self_attn value: [0, 0.5, 0.3, 0.7, 1] - filter: mlp value: [1, 0.5, 0.7, 0.3, 0] - value: 0.5 dtype: bfloat16 ``` ## 💻 Usage ```python !pip install -qU transformers accelerate from transformers import AutoTokenizer import transformers import torch model = "fhai50032/RolePlayLake-7B" messages = [{"role": "user", "content": "What is a large language model?"}] tokenizer = AutoTokenizer.from_pretrained(model) prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True) pipeline = transformers.pipeline( "text-generation", model=model, 
torch_dtype=torch.float16, device_map="auto", ) outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95) print(outputs[0]["generated_text"]) ``` # Why I Merged WestLake and Silicon Maid Merged WestLake and Silicon Maid for a unique blend: 1. **EQ-Bench Dominance:** WestLake's 79.75 EQ-Bench score (possibly contaminated). 2. **Charm and Role-Play:** Silicon's explicit charm and WestLake's role-play prowess. 3. **Config Synergy:** Supports many prompt formats out of the box, and the two configurations show strong synergy. Result: RolePlayLake-7B, a linguistic fusion with EQ-Bench supremacy and captivating role-play potential. # [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard) Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_fhai50032__RolePlayLake-7B) | Metric |Value| |---------------------------------|----:| |Avg. |72.54| |AI2 Reasoning Challenge (25-Shot)|70.56| |HellaSwag (10-Shot) |87.42| |MMLU (5-Shot) |64.55| |TruthfulQA (0-shot) |64.38| |Winogrande (5-shot) |83.27| |GSM8k (5-shot) |65.05|
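The configuration earlier in this card names `slerp` as the merge method. As a hedged illustration of what that means for two weight tensors, here is a minimal sketch of spherical linear interpolation; it is not mergekit's implementation, and the per-tensor `t` scheduling from the config is omitted.

```python
# Minimal sketch of slerp between two weight tensors (illustrative only).
import torch

def slerp(t: float, a: torch.Tensor, b: torch.Tensor, eps: float = 1e-6) -> torch.Tensor:
    a_flat, b_flat = a.flatten().float(), b.flatten().float()
    # Angle between the two weight vectors
    cos_omega = torch.dot(a_flat / a_flat.norm(), b_flat / b_flat.norm())
    omega = torch.arccos(torch.clamp(cos_omega, -1.0, 1.0))
    if omega < eps:  # nearly parallel: fall back to plain linear interpolation
        return (1 - t) * a + t * b
    so = torch.sin(omega)
    mixed = (torch.sin((1 - t) * omega) / so) * a_flat + (torch.sin(t * omega) / so) * b_flat
    return mixed.reshape(a.shape).to(a.dtype)

# t = 0 returns the first tensor, t = 1 the second, 0.5 a spherical midpoint.
```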
maxjahn/pretrained-for-oracle
maxjahn
2024-05-10T21:49:21Z
0
0
null
[ "onnx", "license:mit", "region:us" ]
null
2024-05-10T15:12:55Z
--- license: mit --- A collection of pretrained models converted to ONNX format, ready for import into Oracle Machine Learning (OML).
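The card does not show how to exercise the exported files. Below is a minimal, hedged sketch of inspecting one locally with onnxruntime before importing it into OML; "model.onnx" is a placeholder filename, not a file this repo is known to contain.

```python
# Minimal sketch: inspect an exported ONNX model's inputs and outputs locally.
# "model.onnx" is a placeholder; substitute a file downloaded from this repo.
import onnxruntime as ort

session = ort.InferenceSession("model.onnx")
for tensor in session.get_inputs():
    print("input:", tensor.name, tensor.shape, tensor.type)
for tensor in session.get_outputs():
    print("output:", tensor.name, tensor.shape, tensor.type)
```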
goliaro/opt-125m-lora
goliaro
2024-05-10T21:45:41Z
0
0
transformers
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2024-05-10T21:45:40Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
blogging-enthusiasts/jimbot-8B-lora
blogging-enthusiasts
2024-05-10T21:41:59Z
2
1
peft
[ "peft", "safetensors", "arxiv:1910.09700", "base_model:theo77186/Llama-3-8B-Instruct-norefusal", "base_model:adapter:theo77186/Llama-3-8B-Instruct-norefusal", "region:us" ]
null
2024-05-10T21:38:41Z
--- library_name: peft base_model: theo77186/Llama-3-8B-Instruct-norefusal --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ### Framework versions - PEFT 0.10.0
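This card's "How to Get Started" section is empty, but the record's metadata names both the adapter repo and its base model, so a hedged loading sketch is possible. The repo ids below come from that metadata; everything else is an untested assumption, not the authors' documented usage.

```python
# Minimal sketch: attach this LoRA adapter to its base model with PEFT.
# Repo ids are taken from this record's metadata; illustrative only.
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "theo77186/Llama-3-8B-Instruct-norefusal"
model = AutoModelForCausalLM.from_pretrained(base_id)
model = PeftModel.from_pretrained(model, "blogging-enthusiasts/jimbot-8B-lora")
tokenizer = AutoTokenizer.from_pretrained(base_id)
```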
shyamieee/B3E3-SLM-7b-v2.0
shyamieee
2024-05-10T21:40:34Z
8
0
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "mergekit", "merge", "arxiv:2212.04089", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-05-10T21:00:25Z
--- base_model: [] library_name: transformers tags: - mergekit - merge license: apache-2.0 --- # B3E3_SLM_7b_v2_folder This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit). ## Merge Details ### Merge Method This model was merged using the [task arithmetic](https://arxiv.org/abs/2212.04089) merge method, with YamshadowExperiment28-7B as the base. ### Models Merged The following models were included in the merge: * Calme-7B-Instruct-v0.9 * multi_verse_model ### Configuration
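The Configuration section above is empty in the source, so the exact mergekit YAML is unknown. As a hedged illustration of the task arithmetic method the card cites, here is a minimal sketch: add scaled task vectors (fine-tuned minus base weights) back onto the base. The scaling weights are assumptions, and this is not mergekit's implementation.

```python
# Minimal sketch of task arithmetic (arXiv:2212.04089), illustrative only:
# merged = base + sum_i scale_i * (tuned_i - base), applied per weight tensor.
import torch

def task_arithmetic(base: dict[str, torch.Tensor],
                    tuned: list[dict[str, torch.Tensor]],
                    scales: list[float]) -> dict[str, torch.Tensor]:
    merged = {}
    for name, base_w in base.items():
        # Task vector for each fine-tuned model, scaled and summed
        delta = sum(s * (m[name] - base_w) for m, s in zip(tuned, scales))
        merged[name] = base_w + delta
    return merged
```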