Dataset columns, with observed minimum and maximum values (lengths for strings and lists, value ranges otherwise):

| Column | Dtype | Min | Max |
|:--------------|:-----------------------|:--------------------|:--------------------|
| modelId | string (length) | 5 | 139 |
| author | string (length) | 2 | 42 |
| last_modified | timestamp[us, tz=UTC] | 2020-02-15 11:33:14 | 2025-09-02 12:32:32 |
| downloads | int64 | 0 | 223M |
| likes | int64 | 0 | 11.7k |
| library_name | string (534 classes) | n/a | n/a |
| tags | list (length) | 1 | 4.05k |
| pipeline_tag | string (55 classes) | n/a | n/a |
| createdAt | timestamp[us, tz=UTC] | 2022-03-02 23:29:04 | 2025-09-02 12:31:20 |
| card | string (length) | 11 | 1.01M |
modelId: crs0256/dqn-BeamRiderNoFrameskip-v4_2 | author: crs0256 | last_modified: 2024-05-13T23:06:39Z | downloads: 4 | likes: 0 | library_name: stable-baselines3 | tags: [ "stable-baselines3", "BeamRiderNoFrameskip-v4", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ] | pipeline_tag: reinforcement-learning | createdAt: 2024-05-13T23:05:47Z
--- library_name: stable-baselines3 tags: - BeamRiderNoFrameskip-v4 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: DQN results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: BeamRiderNoFrameskip-v4 type: BeamRiderNoFrameskip-v4 metrics: - type: mean_reward value: 264.00 +/- 159.86 name: mean_reward verified: false --- # **DQN** Agent playing **BeamRiderNoFrameskip-v4** This is a trained model of a **DQN** agent playing **BeamRiderNoFrameskip-v4** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3) and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo). The RL Zoo is a training framework for Stable Baselines3 reinforcement learning agents, with hyperparameter optimization and pre-trained agents included. ## Usage (with SB3 RL Zoo) RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/> SB3: https://github.com/DLR-RM/stable-baselines3<br/> SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib Install the RL Zoo (with SB3 and SB3-Contrib): ```bash pip install rl_zoo3 ``` ``` # Download model and save it into the logs/ folder python -m rl_zoo3.load_from_hub --algo dqn --env BeamRiderNoFrameskip-v4 -orga crs0256 -f logs/ python -m rl_zoo3.enjoy --algo dqn --env BeamRiderNoFrameskip-v4 -f logs/ ``` If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do: ``` python -m rl_zoo3.load_from_hub --algo dqn --env BeamRiderNoFrameskip-v4 -orga crs0256 -f logs/ python -m rl_zoo3.enjoy --algo dqn --env BeamRiderNoFrameskip-v4 -f logs/ ``` ## Training (with the RL Zoo) ``` python -m rl_zoo3.train --algo dqn --env BeamRiderNoFrameskip-v4 -f logs/ # Upload the model and generate video (when possible) python -m rl_zoo3.push_to_hub --algo dqn --env BeamRiderNoFrameskip-v4 -f logs/ -orga crs0256 ``` ## Hyperparameters ```python OrderedDict([('batch_size', 32), ('buffer_size', 100000), ('env_wrapper', ['stable_baselines3.common.atari_wrappers.AtariWrapper']), ('exploration_final_eps', 0.01), ('exploration_fraction', 0.1), ('frame_stack', 4), ('gradient_steps', 1), ('learning_rate', 0.0001), ('learning_starts', 100000), ('n_timesteps', 100000), ('normalize', False), ('policy', 'CnnPolicy'), ('target_update_interval', 1000), ('train_freq', 4)]) ``` # Environment Arguments ```python {'render_mode': 'rgb_array'} ```
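The commands above go through the RL Zoo CLI. As a minimal sketch (not part of the original card), the checkpoint can also be loaded directly in Python via the `huggingface_sb3` helper; the artifact filename follows the usual RL Zoo convention and is an assumption here:

```python
# Hedged sketch: load the DQN checkpoint straight from the Hub.
from huggingface_sb3 import load_from_hub
from stable_baselines3 import DQN

path = load_from_hub(
    repo_id="crs0256/dqn-BeamRiderNoFrameskip-v4_2",
    filename="dqn-BeamRiderNoFrameskip-v4.zip",  # assumed artifact name
)
model = DQN.load(path)  # ready for model.predict(obs) in an Atari env
```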
modelId: mradermacher/Yi-1.5-6B-Chat-GGUF | author: mradermacher | last_modified: 2024-05-13T23:04:29Z | downloads: 38 | likes: 0 | library_name: transformers | tags: [ "transformers", "gguf", "en", "base_model:01-ai/Yi-1.5-6B-Chat", "base_model:quantized:01-ai/Yi-1.5-6B-Chat", "license:apache-2.0", "endpoints_compatible", "region:us", "conversational" ] | pipeline_tag: null | createdAt: 2024-05-13T22:42:06Z
--- base_model: 01-ai/Yi-1.5-6B-Chat language: - en library_name: transformers license: apache-2.0 quantized_by: mradermacher --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> static quants of https://huggingface.co/01-ai/Yi-1.5-6B-Chat <!-- provided-files --> weighted/imatrix quants are available at https://huggingface.co/mradermacher/Yi-1.5-6B-Chat-i1-GGUF ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/Yi-1.5-6B-Chat-GGUF/resolve/main/Yi-1.5-6B-Chat.Q2_K.gguf) | Q2_K | 2.4 | | | [GGUF](https://huggingface.co/mradermacher/Yi-1.5-6B-Chat-GGUF/resolve/main/Yi-1.5-6B-Chat.IQ3_XS.gguf) | IQ3_XS | 2.7 | | | [GGUF](https://huggingface.co/mradermacher/Yi-1.5-6B-Chat-GGUF/resolve/main/Yi-1.5-6B-Chat.Q3_K_S.gguf) | Q3_K_S | 2.8 | | | [GGUF](https://huggingface.co/mradermacher/Yi-1.5-6B-Chat-GGUF/resolve/main/Yi-1.5-6B-Chat.IQ3_S.gguf) | IQ3_S | 2.8 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/Yi-1.5-6B-Chat-GGUF/resolve/main/Yi-1.5-6B-Chat.IQ3_M.gguf) | IQ3_M | 2.9 | | | [GGUF](https://huggingface.co/mradermacher/Yi-1.5-6B-Chat-GGUF/resolve/main/Yi-1.5-6B-Chat.Q3_K_M.gguf) | Q3_K_M | 3.1 | lower quality | | [GGUF](https://huggingface.co/mradermacher/Yi-1.5-6B-Chat-GGUF/resolve/main/Yi-1.5-6B-Chat.Q3_K_L.gguf) | Q3_K_L | 3.3 | | | [GGUF](https://huggingface.co/mradermacher/Yi-1.5-6B-Chat-GGUF/resolve/main/Yi-1.5-6B-Chat.IQ4_XS.gguf) | IQ4_XS | 3.4 | | | [GGUF](https://huggingface.co/mradermacher/Yi-1.5-6B-Chat-GGUF/resolve/main/Yi-1.5-6B-Chat.Q4_K_S.gguf) | Q4_K_S | 3.6 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Yi-1.5-6B-Chat-GGUF/resolve/main/Yi-1.5-6B-Chat.Q4_K_M.gguf) | Q4_K_M | 3.8 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Yi-1.5-6B-Chat-GGUF/resolve/main/Yi-1.5-6B-Chat.Q5_K_S.gguf) | Q5_K_S | 4.3 | | | [GGUF](https://huggingface.co/mradermacher/Yi-1.5-6B-Chat-GGUF/resolve/main/Yi-1.5-6B-Chat.Q5_K_M.gguf) | Q5_K_M | 4.4 | | | [GGUF](https://huggingface.co/mradermacher/Yi-1.5-6B-Chat-GGUF/resolve/main/Yi-1.5-6B-Chat.Q6_K.gguf) | Q6_K | 5.1 | very good quality | | [GGUF](https://huggingface.co/mradermacher/Yi-1.5-6B-Chat-GGUF/resolve/main/Yi-1.5-6B-Chat.Q8_0.gguf) | Q8_0 | 6.5 | fast, best quality | | [GGUF](https://huggingface.co/mradermacher/Yi-1.5-6B-Chat-GGUF/resolve/main/Yi-1.5-6B-Chat.f16.gguf) | f16 | 12.2 | 16 bpw, overkill | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
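For readers who want a concrete starting point, here is a minimal sketch (not from the card) of running one of these files with llama-cpp-python, assuming the Q4_K_M quant has already been downloaded locally:

```python
# Hedged sketch: run a downloaded GGUF quant with llama-cpp-python.
from llama_cpp import Llama

llm = Llama(model_path="Yi-1.5-6B-Chat.Q4_K_M.gguf", n_ctx=4096)
out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Give me one fun fact about whales."}]
)
print(out["choices"][0]["message"]["content"])
```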
modelId: maneln/tinyllama | author: maneln | last_modified: 2024-05-13T22:58:16Z | downloads: 49 | likes: 0 | library_name: transformers | tags: [ "transformers", "safetensors", "llama", "question-answering", "arxiv:1910.09700", "text-generation-inference", "endpoints_compatible", "region:us" ] | pipeline_tag: question-answering | createdAt: 2024-05-13T22:53:26Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
modelId: Plmanwaring/ADR_Detector | author: Plmanwaring | last_modified: 2024-05-13T22:54:00Z | downloads: 108 | likes: 0 | library_name: transformers | tags: [ "transformers", "safetensors", "bert", "text-classification", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ] | pipeline_tag: text-classification | createdAt: 2024-05-13T22:53:49Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
modelId: adarshheg/sft_finetuned_adapters_timesheet | author: adarshheg | last_modified: 2024-05-13T22:52:08Z | downloads: 0 | likes: 0 | library_name: transformers | tags: [ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ] | pipeline_tag: null | createdAt: 2024-05-13T22:51:50Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
modelId: bartowski/Cat-Llama-3-70B-instruct-GGUF | author: bartowski | last_modified: 2024-05-13T22:51:57Z | downloads: 306 | likes: 10 | library_name: null | tags: [ "gguf", "text-generation", "license:llama3", "endpoints_compatible", "region:us", "imatrix", "conversational" ] | pipeline_tag: text-generation | createdAt: 2024-05-13T20:21:19Z
--- license: llama3 quantized_by: bartowski pipeline_tag: text-generation --- ## Llamacpp imatrix Quantizations of Cat-Llama-3-70B-instruct Using <a href="https://github.com/ggerganov/llama.cpp/">llama.cpp</a> release <a href="https://github.com/ggerganov/llama.cpp/releases/tag/b2854">b2854</a> for quantization. Original model: https://huggingface.co/turboderp/Cat-Llama-3-70B-instruct/ All quants made using imatrix option with dataset provided by Kalomaze [here](https://github.com/ggerganov/llama.cpp/discussions/5263#discussioncomment-8395384) ## Prompt format ``` <|im_start|>system {system_prompt}<|im_end|> <|im_start|>user {prompt}<|im_end|> <|im_start|>assistant <|im_end|> ``` ## Download a file (not the whole branch) from below: | Filename | Quant type | File Size | Description | | -------- | ---------- | --------- | ----------- | | [Cat-Llama-3-70B-instruct-Q8_0.gguf](https://huggingface.co/bartowski/Cat-Llama-3-70B-instruct-GGUF/tree/main/Cat-Llama-3-70B-instruct-Q8_0.gguf) | Q8_0 | 74.97GB | Extremely high quality, generally unneeded but max available quant. | | [Cat-Llama-3-70B-instruct-Q6_K.gguf](https://huggingface.co/bartowski/Cat-Llama-3-70B-instruct-GGUF/tree/main/Cat-Llama-3-70B-instruct-Q6_K.gguf) | Q6_K | 57.88GB | Very high quality, near perfect, *recommended*. | | [Cat-Llama-3-70B-instruct-Q5_K_M.gguf](https://huggingface.co/bartowski/Cat-Llama-3-70B-instruct-GGUF/blob/main/Cat-Llama-3-70B-instruct-Q5_K_M.gguf) | Q5_K_M | 49.94GB | High quality, *recommended*. | | [Cat-Llama-3-70B-instruct-Q5_K_S.gguf](https://huggingface.co/bartowski/Cat-Llama-3-70B-instruct-GGUF/blob/main/Cat-Llama-3-70B-instruct-Q5_K_S.gguf) | Q5_K_S | 48.65GB | High quality, *recommended*. | | [Cat-Llama-3-70B-instruct-Q4_K_M.gguf](https://huggingface.co/bartowski/Cat-Llama-3-70B-instruct-GGUF/blob/main/Cat-Llama-3-70B-instruct-Q4_K_M.gguf) | Q4_K_M | 42.52GB | Good quality, uses about 4.83 bits per weight, *recommended*. | | [Cat-Llama-3-70B-instruct-Q4_K_S.gguf](https://huggingface.co/bartowski/Cat-Llama-3-70B-instruct-GGUF/blob/main/Cat-Llama-3-70B-instruct-Q4_K_S.gguf) | Q4_K_S | 40.34GB | Slightly lower quality with more space savings, *recommended*. | | [Cat-Llama-3-70B-instruct-IQ4_NL.gguf](https://huggingface.co/bartowski/Cat-Llama-3-70B-instruct-GGUF/blob/main/Cat-Llama-3-70B-instruct-IQ4_NL.gguf) | IQ4_NL | 40.05GB | Decent quality, slightly smaller than Q4_K_S with similar performance *recommended*. | | [Cat-Llama-3-70B-instruct-IQ4_XS.gguf](https://huggingface.co/bartowski/Cat-Llama-3-70B-instruct-GGUF/blob/main/Cat-Llama-3-70B-instruct-IQ4_XS.gguf) | IQ4_XS | 37.90GB | Decent quality, smaller than Q4_K_S with similar performance, *recommended*. | | [Cat-Llama-3-70B-instruct-Q3_K_L.gguf](https://huggingface.co/bartowski/Cat-Llama-3-70B-instruct-GGUF/blob/main/Cat-Llama-3-70B-instruct-Q3_K_L.gguf) | Q3_K_L | 37.14GB | Lower quality but usable, good for low RAM availability. | | [Cat-Llama-3-70B-instruct-Q3_K_M.gguf](https://huggingface.co/bartowski/Cat-Llama-3-70B-instruct-GGUF/blob/main/Cat-Llama-3-70B-instruct-Q3_K_M.gguf) | Q3_K_M | 34.26GB | Even lower quality. | | [Cat-Llama-3-70B-instruct-IQ3_M.gguf](https://huggingface.co/bartowski/Cat-Llama-3-70B-instruct-GGUF/blob/main/Cat-Llama-3-70B-instruct-IQ3_M.gguf) | IQ3_M | 31.93GB | Medium-low quality, new method with decent performance comparable to Q3_K_M. 
| | [Cat-Llama-3-70B-instruct-IQ3_S.gguf](https://huggingface.co/bartowski/Cat-Llama-3-70B-instruct-GGUF/blob/main/Cat-Llama-3-70B-instruct-IQ3_S.gguf) | IQ3_S | 30.91GB | Lower quality, new method with decent performance, recommended over Q3_K_S quant, same size with better performance. | | [Cat-Llama-3-70B-instruct-Q3_K_S.gguf](https://huggingface.co/bartowski/Cat-Llama-3-70B-instruct-GGUF/blob/main/Cat-Llama-3-70B-instruct-Q3_K_S.gguf) | Q3_K_S | 30.91GB | Low quality, not recommended. | | [Cat-Llama-3-70B-instruct-IQ3_XS.gguf](https://huggingface.co/bartowski/Cat-Llama-3-70B-instruct-GGUF/blob/main/Cat-Llama-3-70B-instruct-IQ3_XS.gguf) | IQ3_XS | 29.30GB | Lower quality, new method with decent performance, slightly better than Q3_K_S. | | [Cat-Llama-3-70B-instruct-IQ3_XXS.gguf](https://huggingface.co/bartowski/Cat-Llama-3-70B-instruct-GGUF/blob/main/Cat-Llama-3-70B-instruct-IQ3_XXS.gguf) | IQ3_XXS | 27.46GB | Lower quality, new method with decent performance, comparable to Q3 quants. | | [Cat-Llama-3-70B-instruct-Q2_K.gguf](https://huggingface.co/bartowski/Cat-Llama-3-70B-instruct-GGUF/blob/main/Cat-Llama-3-70B-instruct-Q2_K.gguf) | Q2_K | 26.37GB | Very low quality but surprisingly usable. | | [Cat-Llama-3-70B-instruct-IQ2_M.gguf](https://huggingface.co/bartowski/Cat-Llama-3-70B-instruct-GGUF/blob/main/Cat-Llama-3-70B-instruct-IQ2_M.gguf) | IQ2_M | 24.11GB | Very low quality, uses SOTA techniques to also be surprisingly usable. | | [Cat-Llama-3-70B-instruct-IQ2_S.gguf](https://huggingface.co/bartowski/Cat-Llama-3-70B-instruct-GGUF/blob/main/Cat-Llama-3-70B-instruct-IQ2_S.gguf) | IQ2_S | 22.24GB | Very low quality, uses SOTA techniques to be usable. | | [Cat-Llama-3-70B-instruct-IQ2_XS.gguf](https://huggingface.co/bartowski/Cat-Llama-3-70B-instruct-GGUF/blob/main/Cat-Llama-3-70B-instruct-IQ2_XS.gguf) | IQ2_XS | 21.14GB | Very low quality, uses SOTA techniques to be usable. | | [Cat-Llama-3-70B-instruct-IQ2_XXS.gguf](https://huggingface.co/bartowski/Cat-Llama-3-70B-instruct-GGUF/blob/main/Cat-Llama-3-70B-instruct-IQ2_XXS.gguf) | IQ2_XXS | 19.09GB | Lower quality, uses SOTA techniques to be usable. | | [Cat-Llama-3-70B-instruct-IQ1_M.gguf](https://huggingface.co/bartowski/Cat-Llama-3-70B-instruct-GGUF/blob/main/Cat-Llama-3-70B-instruct-IQ1_M.gguf) | IQ1_M | 16.75GB | Extremely low quality, *not* recommended. | | [Cat-Llama-3-70B-instruct-IQ1_S.gguf](https://huggingface.co/bartowski/Cat-Llama-3-70B-instruct-GGUF/blob/main/Cat-Llama-3-70B-instruct-IQ1_S.gguf) | IQ1_S | 15.34GB | Extremely low quality, *not* recommended. | ## Downloading using huggingface-cli First, make sure you have huggingface-cli installed: ``` pip install -U "huggingface_hub[cli]" ``` Then, you can target the specific file you want: ``` huggingface-cli download bartowski/Cat-Llama-3-70B-instruct-GGUF --include "Cat-Llama-3-70B-instruct-Q4_K_M.gguf" --local-dir ./ --local-dir-use-symlinks False ``` If the model is bigger than 50GB, it will have been split into multiple files. In order to download them all to a local folder, run: ``` huggingface-cli download bartowski/Cat-Llama-3-70B-instruct-GGUF --include "Cat-Llama-3-70B-instruct-Q8_0.gguf/*" --local-dir Cat-Llama-3-70B-instruct-Q8_0 --local-dir-use-symlinks False ``` You can either specify a new local-dir (Cat-Llama-3-70B-instruct-Q8_0) or download them all in place (./). ## Which file should I choose? 
A great write-up with charts showing various performances is provided by Artefact2 [here](https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9). The first thing to figure out is how big a model you can run. To do this, you'll need to figure out how much RAM and/or VRAM you have. If you want your model running as FAST as possible, you'll want to fit the whole thing on your GPU's VRAM. Aim for a quant with a file size 1-2GB smaller than your GPU's total VRAM. If you want the absolute maximum quality, add both your system RAM and your GPU's VRAM together, then similarly grab a quant with a file size 1-2GB smaller than that total. Next, you'll need to decide if you want to use an 'I-quant' or a 'K-quant'. If you don't want to think too much, grab one of the K-quants. These are in format 'QX_K_X', like Q5_K_M. If you want to get more into the weeds, you can check out this extremely useful feature chart: [llama.cpp feature matrix](https://github.com/ggerganov/llama.cpp/wiki/Feature-matrix) But basically, if you're aiming for below Q4, and you're running cuBLAS (Nvidia) or rocBLAS (AMD), you should look towards the I-quants. These are in format IQX_X, like IQ3_M. These are newer and offer better performance for their size. These I-quants can also be used on CPU and Apple Metal, but will be slower than their K-quant equivalent, so speed vs performance is a tradeoff you'll have to decide. The I-quants are *not* compatible with Vulkan, which is also AMD, so if you have an AMD card double check if you're using the rocBLAS build or the Vulkan build. At the time of writing this, LM Studio has a preview with ROCm support, and other inference engines have specific builds for ROCm. Want to support my work? Visit my ko-fi page here: https://ko-fi.com/bartowski
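The sizing rule of thumb above is easy to mechanize. The sketch below is illustrative only: the file sizes come from the table above, while the 24 GB VRAM budget is a hypothetical example.

```python
# Illustrative only: apply the "1-2GB smaller than VRAM" rule of thumb.
quant_sizes = {
    "Q4_K_M": 42.52, "IQ3_M": 31.93, "Q3_K_S": 30.91,
    "IQ2_M": 24.11, "IQ2_XS": 21.14, "IQ1_S": 15.34,
}
vram_gb = 24  # hypothetical single-GPU budget

# Keep quants whose file is at least ~1.5 GB smaller than available VRAM,
# then pick the largest one that fits.
candidates = {q: s for q, s in quant_sizes.items() if s <= vram_gb - 1.5}
best = max(candidates, key=candidates.get) if candidates else None
print(best or "nothing fits entirely in VRAM; offload layers or use system RAM")
```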
modelId: mnoukhov/pythia-2.8b-dpo_hh_rlhf | author: mnoukhov | last_modified: 2024-05-13T22:47:53Z | downloads: 0 | likes: 0 | library_name: peft | tags: [ "peft", "safetensors", "generated_from_trainer", "base_model:mnoukhov/pythia-2.8b-sft_hh_rlhf", "base_model:adapter:mnoukhov/pythia-2.8b-sft_hh_rlhf", "region:us" ] | pipeline_tag: null | createdAt: 2024-05-01T21:35:39Z
--- library_name: peft tags: - generated_from_trainer base_model: mnoukhov/pythia-2.8b-sft_hh_rlhf model-index: - name: pythia-2.8b-dpo_hh_rlhf results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # pythia-2.8b-dpo_hh_rlhf This model is a fine-tuned version of [mnoukhov/pythia-2.8b-sft_hh_rlhf](https://huggingface.co/mnoukhov/pythia-2.8b-sft_hh_rlhf) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.6257 - Rewards/chosen: -0.6630 - Rewards/rejected: -1.0092 - Rewards/accuracies: 0.6427 - Rewards/margins: 0.3462 - Logps/rejected: -143.9225 - Logps/chosen: -143.9225 - Logps/ref Rejected: -134.7867 - Logps/ref Chosen: -137.2922 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 3e-05 - train_batch_size: 8 - eval_batch_size: 4 - seed: 42 - distributed_type: multi-GPU - gradient_accumulation_steps: 8 - total_train_batch_size: 64 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - num_epochs: 1.0 ### Training results | Training Loss | Epoch | Step | Validation Loss | Rewards/chosen | Rewards/rejected | Rewards/accuracies | Rewards/margins | Logps/rejected | Logps/chosen | Logps/ref Rejected | Logps/ref Chosen | |:-------------:|:-----:|:----:|:---------------:|:--------------:|:----------------:|:------------------:|:---------------:|:--------------:|:------------:|:------------------:|:----------------:| | 0.6591 | 0.2 | 503 | 0.6625 | -0.2937 | -0.4623 | 0.5935 | 0.1685 | -140.2295 | -140.2295 | -134.7867 | -137.2922 | | 0.6446 | 0.4 | 1006 | 0.6412 | -0.3936 | -0.6444 | 0.6209 | 0.2507 | -141.2284 | -141.2284 | -134.7867 | -137.2922 | | 0.6289 | 0.6 | 1509 | 0.6296 | -0.6299 | -0.9621 | 0.6365 | 0.3322 | -143.5909 | -143.5909 | -134.7867 | -137.2922 | | 0.6215 | 0.8 | 2012 | 0.6257 | -0.6630 | -1.0092 | 0.6427 | 0.3462 | -143.9225 | -143.9225 | -134.7867 | -137.2922 | ### Framework versions - PEFT 0.10.0 - Transformers 4.38.2 - Pytorch 2.1.2+cu121 - Datasets 2.17.0 - Tokenizers 0.15.2
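Since this repo holds a PEFT adapter rather than full weights, using it means attaching it to the SFT base model named above. A minimal sketch (not from the card), assuming the adapter weights sit at the repo root:

```python
# Hedged sketch: attach the DPO adapter to its SFT base model.
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base = AutoModelForCausalLM.from_pretrained("mnoukhov/pythia-2.8b-sft_hh_rlhf")
model = PeftModel.from_pretrained(base, "mnoukhov/pythia-2.8b-dpo_hh_rlhf")
tokenizer = AutoTokenizer.from_pretrained("mnoukhov/pythia-2.8b-sft_hh_rlhf")
```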
modelId: Amna100/fold_8 | author: Amna100 | last_modified: 2024-05-13T22:47:38Z | downloads: 106 | likes: 0 | library_name: transformers | tags: [ "transformers", "safetensors", "deberta", "token-classification", "generated_from_trainer", "base_model:Amna100/PreTraining-MLM", "base_model:finetune:Amna100/PreTraining-MLM", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ] | pipeline_tag: token-classification | createdAt: 2024-05-13T22:09:05Z
--- license: mit base_model: Amna100/PreTraining-MLM tags: - generated_from_trainer metrics: - precision - recall - f1 - accuracy model-index: - name: fold_8 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> [<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="200" height="32"/>](https://wandb.ai/amnasaeed100/FineTuning-ADE-Repeatedfold/runs/lvieenf2) [<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="200" height="32"/>](https://wandb.ai/amnasaeed100/FineTuning-ADE-Repeatedfold/runs/fgis28rc) [<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="200" height="32"/>](https://wandb.ai/amnasaeed100/FineTuning-ADE-Repeatedfold/runs/9tw0vsla) [<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="200" height="32"/>](https://wandb.ai/amnasaeed100/FineTuning-ADE-Repeatedfold/runs/ccjl3n87) [<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="200" height="32"/>](https://wandb.ai/amnasaeed100/FineTuning-ADE-Repeatedfold/runs/geyuezlx) [<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="200" height="32"/>](https://wandb.ai/amnasaeed100/FineTuning-ADE-Repeatedfold/runs/sv9tcfx8) [<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="200" height="32"/>](https://wandb.ai/amnasaeed100/FineTuning-ADE-Repeatedfold/runs/9rg5cz4h) [<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="200" height="32"/>](https://wandb.ai/amnasaeed100/FineTuning-ADE-Repeatedfold/runs/3fdbnjrq) [<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="200" height="32"/>](https://wandb.ai/amnasaeed100/FineTuning-ADE-Repeatedfold/runs/l78entvo) # fold_8 This model is a fine-tuned version of [Amna100/PreTraining-MLM](https://huggingface.co/Amna100/PreTraining-MLM) on an unknown dataset. 
It achieves the following results on the evaluation set: - Loss: 0.0112 - Precision: 0.7492 - Recall: 0.5704 - F1: 0.6476 - Accuracy: 0.9993 - Roc Auc: 0.9930 - Pr Auc: 0.9998 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 5 - eval_batch_size: 5 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 10 ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | Roc Auc | Pr Auc | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|:-------:|:------:| | 0.0359 | 1.0 | 632 | 0.0112 | 0.7492 | 0.5704 | 0.6476 | 0.9993 | 0.9930 | 0.9998 | | 0.0115 | 2.0 | 1264 | 0.0113 | 0.7412 | 0.6910 | 0.7152 | 0.9994 | 0.9870 | 0.9995 | | 0.0069 | 3.0 | 1896 | 0.0134 | 0.6174 | 0.7663 | 0.6839 | 0.9992 | 0.9923 | 0.9998 | | 0.0021 | 4.0 | 2528 | 0.0148 | 0.7971 | 0.6910 | 0.7402 | 0.9994 | 0.9787 | 0.9994 | ### Framework versions - Transformers 4.41.0.dev0 - Pytorch 2.2.1+cu121 - Datasets 2.19.1 - Tokenizers 0.19.1
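As a usage sketch (not in the original card), the checkpoint can be exercised through the standard token-classification pipeline; the example sentence is hypothetical and simply reflects the ADE (adverse drug event) theme of the training project linked in the badges above.

```python
# Hedged sketch: run the fine-tuned DeBERTa checkpoint for token classification.
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="Amna100/fold_8",
    aggregation_strategy="simple",  # merge sub-word pieces into entity spans
)
print(ner("The patient developed a severe rash after starting amoxicillin."))
```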
modelId: crs0256/dqn-SpaceInvadersNoFrameskip-v2 | author: crs0256 | last_modified: 2024-05-13T22:34:54Z | downloads: 0 | likes: 0 | library_name: stable-baselines3 | tags: [ "stable-baselines3", "SpaceInvadersNoFrameskip-v4", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ] | pipeline_tag: reinforcement-learning | createdAt: 2024-05-13T22:34:27Z
--- library_name: stable-baselines3 tags: - SpaceInvadersNoFrameskip-v4 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: DQN results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: SpaceInvadersNoFrameskip-v4 type: SpaceInvadersNoFrameskip-v4 metrics: - type: mean_reward value: 5.00 +/- 7.07 name: mean_reward verified: false --- # **DQN** Agent playing **SpaceInvadersNoFrameskip-v4** This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3) and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo). The RL Zoo is a training framework for Stable Baselines3 reinforcement learning agents, with hyperparameter optimization and pre-trained agents included. ## Usage (with SB3 RL Zoo) RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/> SB3: https://github.com/DLR-RM/stable-baselines3<br/> SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib Install the RL Zoo (with SB3 and SB3-Contrib): ```bash pip install rl_zoo3 ``` ``` # Download model and save it into the logs/ folder python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga crs0256 -f logs/ python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ ``` If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do: ``` python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga crs0256 -f logs/ python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ ``` ## Training (with the RL Zoo) ``` python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ # Upload the model and generate video (when possible) python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga crs0256 ``` ## Hyperparameters ```python OrderedDict([('batch_size', 32), ('buffer_size', 100000), ('env_wrapper', ['stable_baselines3.common.atari_wrappers.AtariWrapper']), ('exploration_final_eps', 0.01), ('exploration_fraction', 0.1), ('frame_stack', 4), ('gradient_steps', 1), ('learning_rate', 0.0001), ('learning_starts', 100000), ('n_timesteps', 10000), ('optimize_memory_usage', False), ('policy', 'CnnPolicy'), ('target_update_interval', 1000), ('train_freq', 4), ('normalize', False)]) ``` # Environment Arguments ```python {'render_mode': 'rgb_array'} ```
modelId: OmnicromsBrain/StoryFusion-7B | author: OmnicromsBrain | last_modified: 2024-05-13T22:34:22Z | downloads: 9 | likes: 0 | library_name: transformers | tags: [ "transformers", "safetensors", "mistral", "text-generation", "merge", "mergekit", "kasper52786/StoryWeaver-7b-Instruct-v0.1", "N8Programs/Coxcomb", "Norquinal/Mistral-7B-storywriter", "base_model:N8Programs/Coxcomb", "base_model:merge:N8Programs/Coxcomb", "base_model:Norquinal/Mistral-7B-storywriter", "base_model:merge:Norquinal/Mistral-7B-storywriter", "base_model:kasper52786/StoryWeaver-7b-Instruct-v0.1", "base_model:merge:kasper52786/StoryWeaver-7b-Instruct-v0.1", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ] | pipeline_tag: text-generation | createdAt: 2024-05-09T04:10:21Z
--- tags: - merge - mergekit - kasper52786/StoryWeaver-7b-Instruct-v0.1 - N8Programs/Coxcomb - Norquinal/Mistral-7B-storywriter base_model: - kasper52786/StoryWeaver-7b-Instruct-v0.1 - N8Programs/Coxcomb - Norquinal/Mistral-7B-storywriter --- # StoryFusion-7B StoryFusion-7B is a merge of the following models: * [kasper52786/StoryWeaver-7b-Instruct-v0.1](https://huggingface.co/kasper52786/StoryWeaver-7b-Instruct-v0.1) * [N8Programs/Coxcomb](https://huggingface.co/N8Programs/Coxcomb) * [Norquinal/Mistral-7B-storywriter](https://huggingface.co/Norquinal/Mistral-7B-storywriter) ## &#9889; Quantized Models Thanks to MRadermacher for the quantized models **.GGUF** https://huggingface.co/mradermacher/StoryFusion-7B-GGUF ## 🧩 Configuration ```yaml models: - model: senseable/WestLake-7B-v2 # No parameters necessary for base model - model: kasper52786/StoryWeaver-7b-Instruct-v0.1 parameters: density: 0.53 weight: 0.4 - model: N8Programs/Coxcomb parameters: density: 0.53 weight: 0.3 - model: Norquinal/Mistral-7B-storywriter parameters: density: 0.53 weight: 0.3 merge_method: dare_ties base_model: senseable/WestLake-7B-v2 parameters: int8_mask: true dtype: bfloat16 ``` ## 💻 Usage ```python !pip install -qU transformers accelerate from transformers import AutoTokenizer import transformers import torch model = "OmnicromsBrain/StoryFusion-7B" messages = [{"role": "user", "content": "What is a large language model?"}] tokenizer = AutoTokenizer.from_pretrained(model) prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True) pipeline = transformers.pipeline( "text-generation", model=model, torch_dtype=torch.float16, device_map="auto", ) outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95) print(outputs[0]["generated_text"]) ```
modelId: RobertoFuentesRisco/dqn-SSpaceInvadersNoFrameskip-v4 | author: RobertoFuentesRisco | last_modified: 2024-05-13T22:29:26Z | downloads: 0 | likes: 0 | library_name: stable-baselines3 | tags: [ "stable-baselines3", "SpaceInvadersNoFrameskip-v4", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ] | pipeline_tag: reinforcement-learning | createdAt: 2024-05-13T22:28:55Z
--- library_name: stable-baselines3 tags: - SpaceInvadersNoFrameskip-v4 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: DQN results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: SpaceInvadersNoFrameskip-v4 type: SpaceInvadersNoFrameskip-v4 metrics: - type: mean_reward value: 295.00 +/- 93.70 name: mean_reward verified: false --- # **DQN** Agent playing **SpaceInvadersNoFrameskip-v4** This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3) and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo). The RL Zoo is a training framework for Stable Baselines3 reinforcement learning agents, with hyperparameter optimization and pre-trained agents included. ## Usage (with SB3 RL Zoo) RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/> SB3: https://github.com/DLR-RM/stable-baselines3<br/> SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib Install the RL Zoo (with SB3 and SB3-Contrib): ```bash pip install rl_zoo3 ``` ``` # Download model and save it into the logs/ folder python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga RobertoFuentesRisco -f logs/ python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ ``` If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do: ``` python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga RobertoFuentesRisco -f logs/ python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ ``` ## Training (with the RL Zoo) ``` python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ # Upload the model and generate video (when possible) python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga RobertoFuentesRisco ``` ## Hyperparameters ```python OrderedDict([('batch_size', 32), ('buffer_size', 100000), ('env_wrapper', 'stable_baselines3.common.atari_wrappers.AtariWrapper'), ('exploration_final_eps', 0.01), ('exploration_fraction', 0.1), ('frame_stack', 4), ('gradient_steps', 1), ('learning_rate', 0.0001), ('learning_starts', 100000), ('n_timesteps', 500000), ('optimize_memory_usage', False), ('policy', 'CnnPolicy'), ('target_update_interval', 1000), ('train_freq', 4), ('normalize', False)]) ``` # Environment Arguments ```python {'render_mode': 'rgb_array'} ```
modelId: Teamj/test_wav2vec2_ESC50 | author: Teamj | last_modified: 2024-05-13T22:27:36Z | downloads: 160 | likes: 0 | library_name: transformers | tags: [ "transformers", "tensorboard", "safetensors", "wav2vec2", "audio-classification", "generated_from_trainer", "base_model:facebook/wav2vec2-base", "base_model:finetune:facebook/wav2vec2-base", "license:apache-2.0", "endpoints_compatible", "region:us" ] | pipeline_tag: audio-classification | createdAt: 2024-05-13T22:27:19Z
--- license: apache-2.0 base_model: facebook/wav2vec2-base tags: - generated_from_trainer metrics: - accuracy model-index: - name: test_wav2vec2_ESC50 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # test_wav2vec2_ESC50 This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 1.0326 - Accuracy: 0.7104 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 3e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 128 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 10 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:------:|:----:|:---------------:|:--------:| | 2.1292 | 0.9863 | 54 | 2.1314 | 0.2576 | | 1.7714 | 1.9909 | 109 | 1.7812 | 0.4087 | | 1.5023 | 2.9954 | 164 | 1.6238 | 0.4951 | | 1.3402 | 4.0 | 219 | 1.3518 | 0.6022 | | 1.1629 | 4.9863 | 273 | 1.2371 | 0.6440 | | 1.0466 | 5.9909 | 328 | 1.1422 | 0.6789 | | 0.9478 | 6.9954 | 383 | 1.0848 | 0.6875 | | 0.879 | 8.0 | 438 | 1.1490 | 0.6577 | | 0.8719 | 8.9863 | 492 | 1.0175 | 0.7121 | | 0.8069 | 9.8630 | 540 | 1.0326 | 0.7104 | ### Framework versions - Transformers 4.40.2 - Pytorch 2.2.1+cu121 - Datasets 2.19.1 - Tokenizers 0.19.1
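A minimal inference sketch (not part of the card), assuming a local clip resembling one of the 50 ESC-50 environmental-sound classes; `dog_bark.wav` is a placeholder file name:

```python
# Hedged sketch: classify an environmental sound with the fine-tuned model.
from transformers import pipeline

clf = pipeline("audio-classification", model="Teamj/test_wav2vec2_ESC50")
for pred in clf("dog_bark.wav"):  # placeholder path to a mono audio clip
    print(f"{pred['label']}: {pred['score']:.3f}")
```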
modelId: quangtqv/bi_encoder_tool_learning_14_5_2024_v6 | author: quangtqv | last_modified: 2024-05-13T22:21:40Z | downloads: 4 | likes: 0 | library_name: sentence-transformers | tags: [ "sentence-transformers", "safetensors", "bert", "feature-extraction", "sentence-similarity", "autotrain_compatible", "text-embeddings-inference", "endpoints_compatible", "region:us" ] | pipeline_tag: sentence-similarity | createdAt: 2024-05-13T22:21:08Z
--- library_name: sentence-transformers pipeline_tag: sentence-similarity tags: - sentence-transformers - feature-extraction - sentence-similarity --- # quangtqv/bi_encoder_tool_learning_14_5_2024_v6 This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search. <!--- Describe your model here --> ## Usage (Sentence-Transformers) Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed: ``` pip install -U sentence-transformers ``` Then you can use the model like this: ```python from sentence_transformers import SentenceTransformer sentences = ["This is an example sentence", "Each sentence is converted"] model = SentenceTransformer('quangtqv/bi_encoder_tool_learning_14_5_2024_v6') embeddings = model.encode(sentences) print(embeddings) ``` ## Evaluation Results <!--- Describe how your model was evaluated --> For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=quangtqv/bi_encoder_tool_learning_14_5_2024_v6) ## Full Model Architecture ``` SentenceTransformer( (0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: BertModel (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True}) (2): Normalize() ) ``` ## Citing & Authors <!--- Describe where people can find more information -->
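To make the "semantic search" use case above concrete, a short follow-up sketch (not in the original card) scores a query against the embedded sentences with the library's cosine-similarity utility:

```python
# Hedged sketch: cosine similarity between a query and candidate sentences.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("quangtqv/bi_encoder_tool_learning_14_5_2024_v6")
corpus = model.encode(["This is an example sentence", "Each sentence is converted"])
query = model.encode("an example sentence")
print(util.cos_sim(query, corpus))  # higher score = closer match
```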
modelId: kanaluvu/phi-1_5-prompted-finetuned | author: kanaluvu | last_modified: 2024-05-13T22:20:58Z | downloads: 0 | likes: 0 | library_name: transformers | tags: [ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ] | pipeline_tag: null | createdAt: 2024-05-13T22:20:44Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
modelId: ltorresglez/TablasdeSanAndres | author: ltorresglez | last_modified: 2024-05-13T22:19:07Z | downloads: 0 | likes: 0 | library_name: null | tags: [ "license:artistic-2.0", "region:us" ] | pipeline_tag: null | createdAt: 2024-05-13T22:19:07Z
--- license: artistic-2.0 ---
modelId: uisikdag/robin | author: uisikdag | last_modified: 2024-05-13T22:10:24Z | downloads: 6 | likes: 0 | library_name: diffusers | tags: [ "diffusers", "safetensors", "diffusers:DDPMPipeline", "region:us" ] | pipeline_tag: null | createdAt: 2024-04-29T04:40:28Z
--- library_name: diffusers --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> House plans. Trained on: WhiteCase ### Model Description <!-- Provide a longer summary of what this model is. --> ## Model Details ```python class TrainingConfig: image_size = 192 # the generated image resolution train_batch_size = 16 eval_batch_size = 16 # how many images to sample during evaluation num_epochs = 200 gradient_accumulation_steps = 1 learning_rate = 1e-4 lr_warmup_steps = 500 save_image_epochs = 10 save_model_epochs = 30 mixed_precision = 'fp16' # `no` for float32, `fp16` for automatic mixed precision output_dir = 'ddpm-butterflies-128' # the model name locally and on the HF Hub push_to_hub = True # whether to upload the saved model to the HF Hub hub_private_repo = False overwrite_output_dir = True # overwrite the old model when re-running the notebook seed = 0 config = TrainingConfig() ```
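The card lists only the training configuration. For sampling, here is a minimal sketch (not from the card) that treats the repo as the standard DDPMPipeline its tags indicate:

```python
# Hedged sketch: draw one unconditional sample from the DDPM checkpoint.
from diffusers import DDPMPipeline

pipe = DDPMPipeline.from_pretrained("uisikdag/robin")
image = pipe(num_inference_steps=1000).images[0]  # full DDPM schedule; slow
image.save("sample.png")
```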
A-Magdy/cl_ql
A-Magdy
2024-05-13T22:09:29Z
4
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-05-13T19:58:46Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
rvukasin/Reinforce-CartPole-v1
rvukasin
2024-05-13T22:06:58Z
0
0
null
[ "CartPole-v1", "reinforce", "reinforcement-learning", "custom-implementation", "deep-rl-class", "model-index", "region:us" ]
reinforcement-learning
2024-05-13T22:06:49Z
--- tags: - CartPole-v1 - reinforce - reinforcement-learning - custom-implementation - deep-rl-class model-index: - name: Reinforce-CartPole-v1 results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: CartPole-v1 type: CartPole-v1 metrics: - type: mean_reward value: 500.00 +/- 0.00 name: mean_reward verified: false --- # **Reinforce** Agent playing **CartPole-v1** This is a trained model of a **Reinforce** agent playing **CartPole-v1**. To learn how to use this model and train your own, check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
uisikdag/robin2b
uisikdag
2024-05-13T22:06:42Z
3
0
diffusers
[ "diffusers", "safetensors", "region:us" ]
null
2024-05-13T21:49:33Z
--- library_name: diffusers widget: - text: "Sample Images" output: url: 249.png --- House Plan #trainedOn: WhiteCase # Model Card for Model ID
```python
class TrainingConfig:
    image_size = 256  # the generated image resolution
    train_batch_size = 8
    eval_batch_size = 8  # how many images to sample during evaluation
    num_epochs = 250
    gradient_accumulation_steps = 1
    learning_rate = 1e-4
    lr_warmup_steps = 500
    save_image_epochs = 10
    save_model_epochs = 30
    mixed_precision = 'fp16'  # `no` for float32, `fp16` for automatic mixed precision
    output_dir = 'ddpm-butterflies-128'  # the model name locally and on the HF Hub
    push_to_hub = False  # whether to upload the saved model to the HF Hub
    hub_private_repo = False
    overwrite_output_dir = False  # overwrite the old model when re-running the notebook
    seed = 0
```
## Model Details
sinensia/distilbert-finetuned-imdb
sinensia
2024-05-13T22:00:33Z
108
0
transformers
[ "transformers", "safetensors", "distilbert", "text-classification", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2024-05-13T21:57:34Z
--- license: mit --- DistilBERT fine-tuned on IMDB reviews for sentiment analysis.
```python
from transformers import DistilBertForSequenceClassification, DistilBertTokenizerFast

model = DistilBertForSequenceClassification.from_pretrained("./distilbert-finetuned-imdb")
tokenizer = DistilBertTokenizerFast.from_pretrained("./distilbert-finetuned-imdb")
```
| Epoch | Training Loss | Validation Loss | Accuracy |
|:-----:|:-------------:|:---------------:|:--------:|
| 1 | 0.224900 | 0.223101 | 0.913040 |
| 2 | 0.152700 | 0.241866 | 0.929960 |
| 3 | 0.089300 | 0.280867 | 0.932440 |

TrainOutput(global_step=4689, training_loss=0.16640237623960333, metrics={'train_runtime': 2933.7121, 'train_samples_per_second': 25.565, 'train_steps_per_second': 1.598, 'total_flos': 9935054899200000.0, 'train_loss': 0.16640237623960333, 'epoch': 3.0})

{'eval_loss': 0.28086698055267334, 'eval_accuracy': 0.93244, 'eval_runtime': 246.0564, 'eval_samples_per_second': 101.603, 'eval_steps_per_second': 6.352, 'epoch': 3.0}
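With the model and tokenizer loaded as above, a minimal inference sketch (the 0 = negative / 1 = positive label mapping is an assumption, since the card does not state it):

```python
import torch

inputs = tokenizer("A wonderful, heartfelt film.", return_tensors="pt", truncation=True)
with torch.no_grad():
    logits = model(**inputs).logits
pred = logits.argmax(dim=-1).item()  # assumed mapping: 0 = negative, 1 = positive
print(pred)
```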
Takeshi10Days/distilbert-base-uncased-distilled-clinc
Takeshi10Days
2024-05-13T21:56:28Z
109
0
transformers
[ "transformers", "safetensors", "distilbert", "text-classification", "generated_from_trainer", "dataset:clinc_oos", "base_model:distilbert/distilbert-base-uncased", "base_model:finetune:distilbert/distilbert-base-uncased", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2024-05-13T21:47:44Z
--- license: apache-2.0 base_model: distilbert-base-uncased tags: - generated_from_trainer datasets: - clinc_oos model-index: - name: distilbert-base-uncased-distilled-clinc results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-distilled-clinc This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the clinc_oos dataset. It achieves the following results on the evaluation set: - Loss: 0.2366 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 48 - eval_batch_size: 48 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 9 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | No log | 1.0 | 318 | 1.3258 | | 1.6434 | 2.0 | 636 | 0.6673 | | 1.6434 | 3.0 | 954 | 0.3963 | | 0.6154 | 4.0 | 1272 | 0.2982 | | 0.3057 | 5.0 | 1590 | 0.2632 | | 0.3057 | 6.0 | 1908 | 0.2488 | | 0.231 | 7.0 | 2226 | 0.2425 | | 0.2073 | 8.0 | 2544 | 0.2382 | | 0.2073 | 9.0 | 2862 | 0.2366 | ### Framework versions - Transformers 4.39.3 - Pytorch 2.1.0 - Datasets 1.11.0 - Tokenizers 0.15.1
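The usage sections above are empty; as a hedged sketch, the distilled intent classifier can be exercised with the standard `text-classification` pipeline (the example utterance is merely illustrative of the CLINC150-style queries in `clinc_oos`):

```python
from transformers import pipeline

# Intent classification over the clinc_oos label set
clf = pipeline("text-classification", model="Takeshi10Days/distilbert-base-uncased-distilled-clinc")
print(clf("transfer $100 from my checking account to my savings account"))
```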
tuquyennnn/LoRA-FlanT5-small-v2
tuquyennnn
2024-05-13T21:55:42Z
14
0
peft
[ "peft", "safetensors", "generated_from_trainer", "base_model:google/flan-t5-small", "base_model:adapter:google/flan-t5-small", "license:apache-2.0", "region:us" ]
null
2024-05-13T18:53:33Z
--- license: apache-2.0 library_name: peft tags: - generated_from_trainer base_model: google/flan-t5-small model-index: - name: LoRA-FlanT5-small-v2 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # LoRA-FlanT5-small-v2 This model is a fine-tuned version of [google/flan-t5-small](https://huggingface.co/google/flan-t5-small) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.1186 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 4 - eval_batch_size: 4 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 16 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 6 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 1.266 | 0.32 | 250 | 0.1635 | | 0.181 | 0.64 | 500 | 0.1331 | | 0.1639 | 0.96 | 750 | 0.1239 | | 0.1576 | 1.28 | 1000 | 0.1211 | | 0.154 | 1.61 | 1250 | 0.1196 | | 0.1527 | 1.93 | 1500 | 0.1192 | | 0.1535 | 2.25 | 1750 | 0.1189 | | 0.1527 | 2.57 | 2000 | 0.1188 | | 0.1525 | 2.89 | 2250 | 0.1187 | | 0.153 | 3.21 | 2500 | 0.1187 | | 0.153 | 3.53 | 2750 | 0.1186 | | 0.1519 | 3.85 | 3000 | 0.1186 | | 0.152 | 4.17 | 3250 | 0.1186 | | 0.1508 | 4.49 | 3500 | 0.1186 | | 0.1531 | 4.82 | 3750 | 0.1186 | | 0.1513 | 5.14 | 4000 | 0.1186 | | 0.1513 | 5.46 | 4250 | 0.1186 | | 0.1518 | 5.78 | 4500 | 0.1186 | ### Framework versions - PEFT 0.10.0 - Transformers 4.39.3 - Pytorch 2.1.2 - Datasets 2.19.1 - Tokenizers 0.15.2
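Because this is a LoRA adapter rather than a full model, loading it means attaching the adapter to the `google/flan-t5-small` base; a hedged sketch (the downstream task is undocumented, so the prompt is illustrative):

```python
from peft import PeftModel
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

base = AutoModelForSeq2SeqLM.from_pretrained("google/flan-t5-small")
model = PeftModel.from_pretrained(base, "tuquyennnn/LoRA-FlanT5-small-v2")
tokenizer = AutoTokenizer.from_pretrained("google/flan-t5-small")

inputs = tokenizer("Summarize: the quick brown fox jumps over the lazy dog.", return_tensors="pt")
out = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```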
quangtqv/bi_encoder_tool_learning_14_5_2024_v4
quangtqv
2024-05-13T21:54:20Z
4
0
sentence-transformers
[ "sentence-transformers", "safetensors", "bert", "feature-extraction", "sentence-similarity", "autotrain_compatible", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
sentence-similarity
2024-05-13T21:53:49Z
--- library_name: sentence-transformers pipeline_tag: sentence-similarity tags: - sentence-transformers - feature-extraction - sentence-similarity --- # quangtqv/bi_encoder_tool_learning_14_5_2024_v4 This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search. <!--- Describe your model here --> ## Usage (Sentence-Transformers) Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed: ``` pip install -U sentence-transformers ``` Then you can use the model like this: ```python from sentence_transformers import SentenceTransformer sentences = ["This is an example sentence", "Each sentence is converted"] model = SentenceTransformer('quangtqv/bi_encoder_tool_learning_14_5_2024_v4') embeddings = model.encode(sentences) print(embeddings) ``` ## Evaluation Results <!--- Describe how your model was evaluated --> For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=quangtqv/bi_encoder_tool_learning_14_5_2024_v4) ## Full Model Architecture ``` SentenceTransformer( (0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: BertModel (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True}) (2): Normalize() ) ``` ## Citing & Authors <!--- Describe where people can find more information -->
FortiaVila/fine_tune_1
FortiaVila
2024-05-13T21:51:14Z
4
0
sentence-transformers
[ "sentence-transformers", "safetensors", "bert", "feature-extraction", "sentence-similarity", "autotrain_compatible", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
sentence-similarity
2024-05-13T21:50:58Z
--- library_name: sentence-transformers pipeline_tag: sentence-similarity tags: - sentence-transformers - feature-extraction - sentence-similarity --- # FortiaVila/fine_tune_1 This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 384 dimensional dense vector space and can be used for tasks like clustering or semantic search. <!--- Describe your model here --> ## Usage (Sentence-Transformers) Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed: ``` pip install -U sentence-transformers ``` Then you can use the model like this: ```python from sentence_transformers import SentenceTransformer sentences = ["This is an example sentence", "Each sentence is converted"] model = SentenceTransformer('FortiaVila/fine_tune_1') embeddings = model.encode(sentences) print(embeddings) ``` ## Evaluation Results <!--- Describe how your model was evaluated --> For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=FortiaVila/fine_tune_1) ## Training The model was trained with the parameters: **DataLoader**: `torch.utils.data.dataloader.DataLoader` of length 51 with parameters: ``` {'batch_size': 16, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'} ``` **Loss**: `sentence_transformers.losses.TripletLoss.TripletLoss` with parameters: ``` {'distance_metric': 'TripletDistanceMetric.EUCLIDEAN', 'triplet_margin': 5} ``` Parameters of the fit()-Method: ``` { "epochs": 20, "evaluation_steps": 0, "evaluator": "NoneType", "max_grad_norm": 1, "optimizer_class": "<class 'torch.optim.adamw.AdamW'>", "optimizer_params": { "lr": 2e-05 }, "scheduler": "WarmupLinear", "steps_per_epoch": null, "warmup_steps": 10000, "weight_decay": 0.01 } ``` ## Full Model Architecture ``` SentenceTransformer( (0): Transformer({'max_seq_length': 256, 'do_lower_case': False}) with Transformer model: BertModel (1): Pooling({'word_embedding_dimension': 384, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True}) (2): Normalize() ) ``` ## Citing & Authors <!--- Describe where people can find more information -->
damgomz/ThunBERT_bs8_lr5_MLM
damgomz
2024-05-13T21:46:34Z
107
0
transformers
[ "transformers", "safetensors", "albert", "fill-mask", "en", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2024-05-11T09:32:28Z
--- language: en tags: - fill-mask kwargs: timestamp: '2024-05-13T23:46:31' project_name: ThunBERT_bs8_lr5_MLM_emissions_tracker run_id: 761a7792-77be-4658-b626-ebb9f6540510 duration: 222194.28785967827 emissions: 0.2325671263715923 emissions_rate: 1.0466836416535818e-06 cpu_power: 42.5 gpu_power: 0.0 ram_power: 37.5 cpu_energy: 2.6231219651492017 gpu_energy: 0 ram_energy: 2.314504122865692 energy_consumed: 4.937626088014953 country_name: Switzerland country_iso_code: CHE region: .nan cloud_provider: .nan cloud_region: .nan os: Linux-5.14.0-70.30.1.el9_0.x86_64-x86_64-with-glibc2.34 python_version: 3.10.4 codecarbon_version: 2.3.4 cpu_count: 4 cpu_model: Intel(R) Xeon(R) Platinum 8360Y CPU @ 2.40GHz gpu_count: .nan gpu_model: .nan longitude: .nan latitude: .nan ram_total_size: 100 tracking_mode: machine on_cloud: N pue: 1.0 --- ## Environmental Impact (CODE CARBON DEFAULT) | Metric | Value | |--------------------------|---------------------------------| | Duration (in seconds) | 222194.28785967827 | | Emissions (Co2eq in kg) | 0.2325671263715923 | | CPU power (W) | 42.5 | | GPU power (W) | [No GPU] | | RAM power (W) | 37.5 | | CPU energy (kWh) | 2.6231219651492017 | | GPU energy (kWh) | [No GPU] | | RAM energy (kWh) | 2.314504122865692 | | Consumed energy (kWh) | 4.937626088014953 | | Country name | Switzerland | | Cloud provider | nan | | Cloud region | nan | | CPU count | 4 | | CPU model | Intel(R) Xeon(R) Platinum 8360Y CPU @ 2.40GHz | | GPU count | nan | | GPU model | nan | ## Environmental Impact (for one core) | Metric | Value | |--------------------------|---------------------------------| | CPU energy (kWh) | 0.42772400412988065 | | Emissions (Co2eq in kg) | 0.08702609607837399 | ## Note 11 May 2024 ## My Config | Config | Value | |--------------------------|-----------------| | checkpoint | albert-base-v2 | | model_name | ThunBERT_bs8_lr5_MLM | | sequence_length | 400 | | num_epoch | 6 | | learning_rate | 5e-05 | | batch_size | 8 | | weight_decay | 0.0 | | warm_up_prop | 0.0 | | drop_out_prob | 0.1 | | packing_length | 100 | | train_test_split | 0.2 | | num_steps | 82533 | ## Training and Testing steps Epoch | Train Loss | Test Loss ---|---|--- | 0.0 | 10.169299 | 3.893429 | | 0.5 | 2.974144 | 2.843656 | | 1.0 | 2.783727 | 2.781645 | | 1.5 | 2.678597 | 2.669879 | | 2.0 | 2.595359 | 2.603173 | | 2.5 | 2.525631 | 2.518131 | | 3.0 | 2.453879 | 2.475023 | | 3.5 | 2.397467 | 2.423880 | | 4.0 | 2.351636 | 2.381376 | | 4.5 | 2.294014 | 2.340238 | | 5.0 | 2.253481 | 2.299134 | | 5.5 | 2.208159 | 2.264561 | | 6.0 | 2.192403 | 2.241567 |
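The card documents training and emissions but no usage; since the checkpoint is an ALBERT masked-language model (`fill-mask` pipeline tag), a hedged sketch would be:

```python
from transformers import pipeline

# ALBERT-style MLM; [MASK] is the mask token
fill = pipeline("fill-mask", model="damgomz/ThunBERT_bs8_lr5_MLM")
print(fill("The committee reached a [MASK] on the proposal."))
```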
afrideva/Morph-Caller-GGUF
afrideva
2024-05-13T21:46:17Z
4
0
null
[ "gguf", "morph-caller", "function-calling", "ggml", "quantized", "text-generation", "en", "base_model:Morpheus-Function-Calling/Morph-Caller", "base_model:quantized:Morpheus-Function-Calling/Morph-Caller", "license:mit", "endpoints_compatible", "region:us", "conversational" ]
text-generation
2024-05-13T20:53:50Z
--- base_model: Morpheus-Function-Calling/Morph-Caller inference: true language: - en license: mit model_creator: Morpheus-Function-Calling model_name: Morph-Caller pipeline_tag: text-generation quantized_by: afrideva tags: - morph-caller - function-calling - gguf - ggml - quantized --- # Morph-Caller-GGUF Quantized GGUF model files for [Morph-Caller](https://huggingface.co/Morpheus-Function-Calling/Morph-Caller) from [Morpheus-Function-Calling](https://huggingface.co/Morpheus-Function-Calling) ## Original Model Card: # Morph-Caller Model Card ## Model Description Morph-Caller is a state-of-the-art language model designed to perform function calling using a structured schema. It leverages a sophisticated system to parse and execute function calls, letting users interact with the model in a more dynamic and utilitarian manner. The model differentiates itself by excelling at function calling while also remaining free of model censorship. ## Capabilities Morph-Caller excels in understanding and generating structured outputs based on function calling schemas. It can interpret user queries that involve function calls and respond with accurate and relevant information. The model adheres to a predefined JSON schema for function calls, which is detailed in the [Hermes-Function-Calling GitHub repository](https://github.com/NousResearch/Hermes-Function-Calling). ## Usage To interact with Morph-Caller, users should format their prompts according to the function calling schema provided in the repository. The model can process these prompts and return structured data, making it an invaluable tool for developers and researchers who require programmatic access to language model capabilities. ## Installation and Usage Please refer to the [Hermes-Function-Calling GitHub repository](https://github.com/NousResearch/Hermes-Function-Calling) for detailed instructions on installation and usage. ## Contributions Morph-Caller is built on the principles of open collaboration. We encourage contributions that improve the model's performance and extend its capabilities. If you are interested in contributing, please follow the contribution guidelines in the repository. ## License This model is available under the MIT license, which allows for a wide range of uses with few restrictions.
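For the GGUF files themselves, a hedged loading sketch with `llama-cpp-python` (the file name below is hypothetical; substitute whichever quant you download from this repo, and note that the function-calling schema from the Hermes repository still governs how prompts should be structured):

```python
from llama_cpp import Llama

# model_path is hypothetical -- point it at the GGUF file downloaded from this repo
llm = Llama(model_path="morph-caller.Q4_K_M.gguf", n_ctx=4096)
out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Fetch the weather for Paris using the available tools."}]
)
print(out["choices"][0]["message"]["content"])
```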
quangtqv/bi_encoder_tool_learning_14_5_2024_v2
quangtqv
2024-05-13T21:43:10Z
4
0
sentence-transformers
[ "sentence-transformers", "safetensors", "bert", "feature-extraction", "sentence-similarity", "autotrain_compatible", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
sentence-similarity
2024-05-13T21:42:49Z
--- library_name: sentence-transformers pipeline_tag: sentence-similarity tags: - sentence-transformers - feature-extraction - sentence-similarity --- # quangtqv/bi_encoder_tool_learning_14_5_2024_v2 This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search. <!--- Describe your model here --> ## Usage (Sentence-Transformers) Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed: ``` pip install -U sentence-transformers ``` Then you can use the model like this: ```python from sentence_transformers import SentenceTransformer sentences = ["This is an example sentence", "Each sentence is converted"] model = SentenceTransformer('quangtqv/bi_encoder_tool_learning_14_5_2024_v2') embeddings = model.encode(sentences) print(embeddings) ``` ## Evaluation Results <!--- Describe how your model was evaluated --> For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=quangtqv/bi_encoder_tool_learning_14_5_2024_v2) ## Full Model Architecture ``` SentenceTransformer( (0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: BertModel (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True}) (2): Normalize() ) ``` ## Citing & Authors <!--- Describe where people can find more information -->
quangtqv/bi_encoder_tool_learning_14_5_2024_v3
quangtqv
2024-05-13T21:42:18Z
4
0
sentence-transformers
[ "sentence-transformers", "safetensors", "bert", "feature-extraction", "sentence-similarity", "autotrain_compatible", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
sentence-similarity
2024-05-13T21:41:57Z
--- library_name: sentence-transformers pipeline_tag: sentence-similarity tags: - sentence-transformers - feature-extraction - sentence-similarity --- # quangtqv/bi_encoder_tool_learning_14_5_2024_v3 This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search. <!--- Describe your model here --> ## Usage (Sentence-Transformers) Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed: ``` pip install -U sentence-transformers ``` Then you can use the model like this: ```python from sentence_transformers import SentenceTransformer sentences = ["This is an example sentence", "Each sentence is converted"] model = SentenceTransformer('quangtqv/bi_encoder_tool_learning_14_5_2024_v3') embeddings = model.encode(sentences) print(embeddings) ``` ## Evaluation Results <!--- Describe how your model was evaluated --> For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=quangtqv/bi_encoder_tool_learning_14_5_2024_v3) ## Full Model Architecture ``` SentenceTransformer( (0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: BertModel (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True}) (2): Normalize() ) ``` ## Citing & Authors <!--- Describe where people can find more information -->
asajjad/peft_steam_reviews_sentiment
asajjad
2024-05-13T21:38:28Z
0
0
transformers
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2024-05-13T21:34:41Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
rwillh11/mDeBERTa-EAD-conflict-bilingual-small
rwillh11
2024-05-13T21:38:12Z
104
0
transformers
[ "transformers", "pytorch", "deberta-v2", "text-classification", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2024-05-09T20:42:19Z
Accuracy: 0.85 Balanced Accuracy: 0.81 F1: 0.78
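The card gives only metrics; a hedged usage sketch for this text classifier (the label names are whatever the checkpoint's config defines, since the card does not document them):

```python
from transformers import pipeline

clf = pipeline("text-classification", model="rwillh11/mDeBERTa-EAD-conflict-bilingual-small")
print(clf("Protesters clashed with police in the capital on Sunday."))
```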
dfoc99/scibert_scivocab_uncased-finetuned-Astronomy_Thesaurus
dfoc99
2024-05-13T21:36:38Z
3
0
peft
[ "peft", "pytorch", "tensorboard", "safetensors", "generated_from_trainer", "base_model:allenai/scibert_scivocab_uncased", "base_model:adapter:allenai/scibert_scivocab_uncased", "region:us" ]
null
2024-05-10T13:05:30Z
--- library_name: peft tags: - generated_from_trainer base_model: allenai/scibert_scivocab_uncased model-index: - name: scibert_scivocab_uncased-finetuned-Astronomy_Thesaurus results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # scibert_scivocab_uncased-finetuned-Astronomy_Thesaurus This model is a fine-tuned version of [allenai/scibert_scivocab_uncased](https://huggingface.co/allenai/scibert_scivocab_uncased) on the None dataset. It achieves the following results on the evaluation set: - Loss: 1.3610 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 10 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 1.5303 | 1.0 | 879 | 1.3800 | | 1.4977 | 2.0 | 1758 | 1.4101 | | 1.4942 | 3.0 | 2637 | 1.3678 | | 1.4662 | 4.0 | 3516 | 1.3817 | | 1.4748 | 5.0 | 4395 | 1.3750 | | 1.4629 | 6.0 | 5274 | 1.3611 | | 1.4763 | 7.0 | 6153 | 1.3643 | | 1.453 | 8.0 | 7032 | 1.3713 | | 1.4603 | 9.0 | 7911 | 1.3871 | | 1.4697 | 10.0 | 8790 | 1.3610 | ### Framework versions - PEFT 0.10.0 - Transformers 4.40.2 - Pytorch 2.2.1+cu121 - Datasets 2.19.1 - Tokenizers 0.19.1
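As with other PEFT adapters, the LoRA weights must be attached to the base model at load time; a hedged sketch (`AutoModelForMaskedLM` is an assumption based on the `scibert_scivocab_uncased` base, since the card does not state the task):

```python
from peft import PeftModel
from transformers import AutoModelForMaskedLM, AutoTokenizer

# Attach the adapter from this repo to the SciBERT base checkpoint
base = AutoModelForMaskedLM.from_pretrained("allenai/scibert_scivocab_uncased")
model = PeftModel.from_pretrained(base, "dfoc99/scibert_scivocab_uncased-finetuned-Astronomy_Thesaurus")
tokenizer = AutoTokenizer.from_pretrained("allenai/scibert_scivocab_uncased")
```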
hitrate/llama3-test
hitrate
2024-05-13T21:25:14Z
0
0
null
[ "region:us" ]
null
2024-05-13T21:21:56Z
<p align="center"> <img src="https://github.com/meta-llama/llama3/blob/main/Llama3_Repo.jpeg" width="400"/> </p> <p align="center"> 🤗 <a href="https://huggingface.co/meta-Llama"> Models on Hugging Face</a>&nbsp | <a href="https://ai.meta.com/blog/"> Blog</a>&nbsp | <a href="https://llama.meta.com/">Website</a>&nbsp | <a href="https://llama.meta.com/get-started/">Get Started</a>&nbsp <br> --- # Meta Llama 3 We are unlocking the power of large language models. Our latest version of Llama is now accessible to individuals, creators, researchers, and businesses of all sizes so that they can experiment, innovate, and scale their ideas responsibly. This release includes model weights and starting code for pre-trained and instruction tuned Llama 3 language models — including sizes of 8B to 70B parameters. This repository is intended as a minimal example to load Llama 3 models and run inference. For more detailed examples, see [llama-recipes](https://github.com/facebookresearch/llama-recipes/). ## Download In order to download the model weights and tokenizer, please visit the [Meta Llama website](https://llama.meta.com/llama-downloads/) and accept our License. Once your request is approved, you will receive a signed URL over email. Then run the download.sh script, passing the URL provided when prompted to start the download. Pre-requisites: Make sure you have `wget` and `md5sum` installed. Then run the script: `./download.sh`. Keep in mind that the links expire after 24 hours and a certain amount of downloads. If you start seeing errors such as `403: Forbidden`, you can always re-request a link. ### Access to Hugging Face We are also providing downloads on [Hugging Face](https://huggingface.co/meta-llama), in both transformers and native `llama3` formats. To download the weights from Hugging Face, please follow these steps: - Visit one of the repos, for example [meta-llama/Meta-Llama-3-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct). - Read and accept the license. Once your request is approved, you'll be granted access to all the Llama 3 models. Note that requests used to take up to one hour to get processed. - To download the original native weights to use with this repo, click on the "Files and versions" tab and download the contents of the `original` folder. You can also download them from the command line if you `pip install huggingface-hub`: ```bash huggingface-cli download meta-llama/Meta-Llama-3-8B-Instruct --include "original/*" --local-dir meta-llama/Meta-Llama-3-8B-Instruct ``` - To use with transformers, the following [pipeline](https://huggingface.co/docs/transformers/en/main_classes/pipelines) snippet will download and cache the weights: ```python import transformers import torch model_id = "meta-llama/Meta-Llama-3-8B-Instruct" pipeline = transformers.pipeline( "text-generation", model="meta-llama/Meta-Llama-3-8B-Instruct", model_kwargs={"torch_dtype": torch.bfloat16}, device="cuda", ) ``` ## Quick Start You can follow the steps below to quickly get up and running with Llama 3 models. These steps will let you run quick inference locally. For more examples, see the [Llama recipes repository](https://github.com/facebookresearch/llama-recipes). 1. In a conda env with PyTorch / CUDA available clone and download this repository. 2. In the top-level directory run: ```bash pip install -e . ``` 3. Visit the [Meta Llama website](https://llama.meta.com/llama-downloads/) and register to download the model/s. 4. 
Once registered, you will get an email with a URL to download the models. You will need this URL when you run the download.sh script. 5. Once you get the email, navigate to your downloaded llama repository and run the download.sh script. - Make sure to grant execution permissions to the download.sh script - During this process, you will be prompted to enter the URL from the email. - Do not use the “Copy Link” option but rather make sure to manually copy the link from the email. 6. Once the model/s you want have been downloaded, you can run the model locally using the command below:
```bash
torchrun --nproc_per_node 1 example_chat_completion.py \
    --ckpt_dir Meta-Llama-3-8B-Instruct/ \
    --tokenizer_path Meta-Llama-3-8B-Instruct/tokenizer.model \
    --max_seq_len 512 --max_batch_size 6
```
**Note** - Replace `Meta-Llama-3-8B-Instruct/` with the path to your checkpoint directory and `Meta-Llama-3-8B-Instruct/tokenizer.model` with the path to your tokenizer model. - The `--nproc_per_node` should be set to the [MP](#inference) value for the model you are using. - Adjust the `max_seq_len` and `max_batch_size` parameters as needed. - This example runs the [example_chat_completion.py](example_chat_completion.py) found in this repository but you can change that to a different .py file. ## Inference Different models require different model-parallel (MP) values:

| Model | MP |
|-------|----|
| 8B    | 1  |
| 70B   | 8  |

All models support sequence length up to 8192 tokens, but we pre-allocate the cache according to `max_seq_len` and `max_batch_size` values. So set those according to your hardware. ### Pretrained Models These models are not finetuned for chat or Q&A. They should be prompted so that the expected answer is the natural continuation of the prompt. See `example_text_completion.py` for some examples. To illustrate, see the command below to run it with the llama-3-8b model (`nproc_per_node` needs to be set to the `MP` value):
```
torchrun --nproc_per_node 1 example_text_completion.py \
    --ckpt_dir Meta-Llama-3-8B/ \
    --tokenizer_path Meta-Llama-3-8B/tokenizer.model \
    --max_seq_len 128 --max_batch_size 4
```
### Instruction-tuned Models The fine-tuned models were trained for dialogue applications. To get the expected features and performance for them, a specific formatting defined in [`ChatFormat`](https://github.com/meta-llama/llama3/blob/main/llama/tokenizer.py#L202) needs to be followed: The prompt begins with a `<|begin_of_text|>` special token, after which one or more messages follow. Each message starts with the `<|start_header_id|>` tag, the role `system`, `user` or `assistant`, and the `<|end_header_id|>` tag. After a double newline `\n\n` the contents of the message follow. The end of each message is marked by the `<|eot_id|>` token. (A worked serialization example appears at the end of this README.) You can also deploy additional classifiers for filtering out inputs and outputs that are deemed unsafe. See the llama-recipes repo for [an example](https://github.com/meta-llama/llama-recipes/blob/main/recipes/inference/local_inference/inference.py) of how to add a safety checker to the inputs and outputs of your inference code. Examples using llama-3-8b-chat:
```
torchrun --nproc_per_node 1 example_chat_completion.py \
    --ckpt_dir Meta-Llama-3-8B-Instruct/ \
    --tokenizer_path Meta-Llama-3-8B-Instruct/tokenizer.model \
    --max_seq_len 512 --max_batch_size 6
```
Llama 3 is a new technology that carries potential risks with use. Testing conducted to date has not — and could not — cover all scenarios.
In order to help developers address these risks, we have created the [Responsible Use Guide](https://ai.meta.com/static-resource/responsible-use-guide/). ## Issues Please report any software “bug”, or other problems with the models through one of the following means: - Reporting issues with the model: [https://github.com/meta-llama/llama3/issues](https://github.com/meta-llama/llama3/issues) - Reporting risky content generated by the model: [developers.facebook.com/llama_output_feedback](http://developers.facebook.com/llama_output_feedback) - Reporting bugs and security concerns: [facebook.com/whitehat/info](http://facebook.com/whitehat/info) ## Model Card See [MODEL_CARD.md](MODEL_CARD.md). ## License Our model and weights are licensed for both researchers and commercial entities, upholding the principles of openness. Our mission is to empower individuals, and industry through this opportunity, while fostering an environment of discovery and ethical AI advancements. See the [LICENSE](LICENSE) file, as well as our accompanying [Acceptable Use Policy](USE_POLICY.md) ## Questions For common questions, the FAQ can be found [here](https://llama.meta.com/faq) which will be kept up to date over time as new questions arise.
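As a worked illustration of the `ChatFormat` described in the Instruction-tuned Models section above, a single-turn prompt (system message plus user message, ending where the assistant should continue) serializes as:

```
<|begin_of_text|><|start_header_id|>system<|end_header_id|>

You are a helpful assistant.<|eot_id|><|start_header_id|>user<|end_header_id|>

What is the capital of France?<|eot_id|><|start_header_id|>assistant<|end_header_id|>

```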
LujainAbdulrahman/llama3-lora-AE2
LujainAbdulrahman
2024-05-13T21:12:45Z
0
0
transformers
[ "transformers", "safetensors", "text-generation-inference", "unsloth", "llama", "trl", "en", "base_model:unsloth/llama-3-8b-bnb-4bit", "base_model:finetune:unsloth/llama-3-8b-bnb-4bit", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2024-05-13T21:12:35Z
--- language: - en license: apache-2.0 tags: - text-generation-inference - transformers - unsloth - llama - trl base_model: unsloth/llama-3-8b-bnb-4bit --- # Uploaded model - **Developed by:** LujainAbdulrahman - **License:** apache-2.0 - **Finetuned from model :** unsloth/llama-3-8b-bnb-4bit This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
ethankasa/phi-2-glaive
ethankasa
2024-05-13T21:07:14Z
2
0
peft
[ "peft", "safetensors", "llama-factory", "lora", "generated_from_trainer", "base_model:microsoft/phi-2", "base_model:adapter:microsoft/phi-2", "license:other", "region:us" ]
null
2024-05-13T21:07:08Z
--- license: other library_name: peft tags: - llama-factory - lora - generated_from_trainer base_model: microsoft/phi-2 model-index: - name: glaiveNew results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # glaiveNew This model is a fine-tuned version of [microsoft/phi-2](https://huggingface.co/microsoft/phi-2) on the trivia dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 2 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 8 - total_train_batch_size: 16 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - num_epochs: 3.0 ### Training results ### Framework versions - PEFT 0.10.0 - Transformers 4.40.2 - Pytorch 2.3.0 - Datasets 2.19.1 - Tokenizers 0.19.1
Minaaaa/qa_distilbert
Minaaaa
2024-05-13T21:06:33Z
166
0
transformers
[ "transformers", "safetensors", "distilbert", "question-answering", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
question-answering
2024-05-13T21:06:23Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
silveroxides/CLIP-ViT-bigG-14-laion2B-39B-b160k-fp16
silveroxides
2024-05-13T21:06:11Z
2
0
open_clip
[ "open_clip", "pytorch", "safetensors", "clip", "zero-shot-image-classification", "arxiv:1910.04867", "arxiv:2212.07143", "license:mit", "region:us" ]
zero-shot-image-classification
2024-03-26T16:18:31Z
--- license: mit widget: - src: >- https://huggingface.co/datasets/mishig/sample_images/resolve/main/cat-dog-music.png candidate_labels: playing music, playing sports example_title: Cat & Dog library_name: open_clip pipeline_tag: zero-shot-image-classification --- # Model Card for CLIP ViT-bigG/14 - LAION-2B # Table of Contents 1. [Model Details](#model-details) 2. [Uses](#uses) 3. [Training Details](#training-details) 4. [Evaluation](#evaluation) 5. [Acknowledgements](#acknowledgements) 6. [Citation](#citation) 7. [How To Get Started With the Model](#how-to-get-started-with-the-model) # Model Details ## Model Description A CLIP ViT-bigG/14 model trained with the LAION-2B English subset of LAION-5B (https://laion.ai/blog/laion-5b/) using OpenCLIP (https://github.com/mlfoundations/open_clip). Model training done by Mitchell Wortsman on the [stability.ai](https://stability.ai/) cluster. The license for this model is MIT. # Uses As per the original [OpenAI CLIP model card](https://github.com/openai/CLIP/blob/d50d76daa670286dd6cacf3bcd80b5e4823fc8e1/model-card.md), this model is intended as a research output for research communities. We hope that this model will enable researchers to better understand and explore zero-shot, arbitrary image classification. We also hope it can be used for interdisciplinary studies of the potential impact of such models. The OpenAI CLIP paper includes a discussion of potential downstream impacts to provide an example for this sort of analysis. Additionally, the LAION-5B blog (https://laion.ai/blog/laion-5b/) and upcoming paper include additional discussion as it relates specifically to the training dataset. ## Direct Use Zero-shot image classification, image and text retrieval, among others. ## Downstream Use Image classification and other image task fine-tuning, linear probe image classification, image generation guiding and conditioning, among others. ## Out-of-Scope Use As per the OpenAI models, **any** deployed use case of the model - whether commercial or not - is currently out of scope. Non-deployed use cases such as image search in a constrained environment are also not recommended unless there is thorough in-domain testing of the model with a specific, fixed class taxonomy. This is because our safety assessment demonstrated a high need for task-specific testing, especially given the variability of CLIP’s performance with different class taxonomies. This makes untested and unconstrained deployment of the model in any use case currently potentially harmful. Certain use cases which would fall under the domain of surveillance and facial recognition are always out-of-scope regardless of performance of the model. This is because the use of artificial intelligence for tasks such as these can be premature currently given the lack of testing norms and checks to ensure its fair use. Since the model has not been purposefully trained in or evaluated on any languages other than English, its use should be limited to English language use cases. Beyond the notice above, the LAION-5B dataset used to train these models has additional considerations; see below. # Training Details ## Training Data This model was trained with the 2 Billion sample English subset of LAION-5B (https://laion.ai/blog/laion-5b/). Fine-tuning was also partially done on LAION-A, a 900M subset of LAION-2B filtered with aesthetic V2 4.5+ and phash deduplicated.
**IMPORTANT NOTE:** The motivation behind dataset creation is to democratize research and experimentation around large-scale multi-modal model training and handling of uncurated, large-scale datasets crawled from the publicly available internet. Our recommendation is therefore to use the dataset for research purposes. Be aware that this large-scale dataset is uncurated. Keep in mind that the uncurated nature of the dataset means that collected links may lead to strongly discomforting and disturbing content for a human viewer. Therefore, please use the demo links with caution and at your own risk. It is possible to extract a “safe” subset by filtering out samples based on the safety tags (using a custom-trained NSFW classifier that we built). While this strongly reduces the chance of encountering potentially harmful content when viewing, we cannot entirely exclude the possibility of harmful content still being present in safe mode, so the warning applies there as well. We think that providing the dataset openly to broad research and other interested communities will allow for transparent investigation of benefits that come along with training large-scale models as well as pitfalls and dangers that may stay unreported or unnoticed when working with closed large datasets that remain restricted to a small community. While we provide the dataset openly, we do not recommend using it for creating ready-to-go industrial products, as the basic research about general properties and safety of such large-scale models, which we would like to encourage with this release, is still in progress. ## Training Procedure The training procedure will soon be discussed in a blog post on laion.ai. # Evaluation Evaluation done with code in the [LAION CLIP Benchmark suite](https://github.com/LAION-AI/CLIP_benchmark). ## Testing Data, Factors & Metrics ### Testing Data The testing is performed with VTAB+ (a combination of VTAB (https://arxiv.org/abs/1910.04867) w/ additional robustness datasets) for classification and COCO and Flickr for retrieval. **TODO** - more detail ## Results The model achieves an 80.1 zero-shot top-1 accuracy on ImageNet-1k. An initial round of benchmarks has been performed on a wider range of datasets, and will soon be visible at https://github.com/LAION-AI/CLIP_benchmark/blob/main/benchmark/results.ipynb **TODO** - create table for just this model's metrics. # Acknowledgements Acknowledging [stability.ai](https://stability.ai/) for the compute used to train this model. # Citation **BibTeX:** LAION-5B ```bibtex @inproceedings{schuhmann2022laionb, title={{LAION}-5B: An open large-scale dataset for training next generation image-text models}, author={Christoph Schuhmann and Romain Beaumont and Richard Vencu and Cade W Gordon and Ross Wightman and Mehdi Cherti and Theo Coombes and Aarush Katta and Clayton Mullis and Mitchell Wortsman and Patrick Schramowski and Srivatsa R Kundurthy and Katherine Crowson and Ludwig Schmidt and Robert Kaczmarczyk and Jenia Jitsev}, booktitle={Thirty-sixth Conference on Neural Information Processing Systems Datasets and Benchmarks Track}, year={2022}, url={https://openreview.net/forum?id=M3Y74vmsMcY} } ``` OpenAI CLIP paper ``` @inproceedings{Radford2021LearningTV, title={Learning Transferable Visual Models From Natural Language Supervision}, author={Alec Radford and Jong Wook Kim and Chris Hallacy and A.
Ramesh and Gabriel Goh and Sandhini Agarwal and Girish Sastry and Amanda Askell and Pamela Mishkin and Jack Clark and Gretchen Krueger and Ilya Sutskever}, booktitle={ICML}, year={2021} } ``` OpenCLIP software ``` @software{ilharco_gabriel_2021_5143773, author = {Ilharco, Gabriel and Wortsman, Mitchell and Wightman, Ross and Gordon, Cade and Carlini, Nicholas and Taori, Rohan and Dave, Achal and Shankar, Vaishaal and Namkoong, Hongseok and Miller, John and Hajishirzi, Hannaneh and Farhadi, Ali and Schmidt, Ludwig}, title = {OpenCLIP}, month = jul, year = 2021, note = {If you use this software, please cite it as below.}, publisher = {Zenodo}, version = {0.1}, doi = {10.5281/zenodo.5143773}, url = {https://doi.org/10.5281/zenodo.5143773} } ``` Scaling OpenCLIP paper ``` @article{cherti2022reproducible, title={Reproducible scaling laws for contrastive language-image learning}, author={Cherti, Mehdi and Beaumont, Romain and Wightman, Ross and Wortsman, Mitchell and Ilharco, Gabriel and Gordon, Cade and Schuhmann, Christoph and Schmidt, Ludwig and Jitsev, Jenia}, journal={arXiv preprint arXiv:2212.07143}, year={2022} } ``` # How to Get Started with the Model Use the code below to get started with the model. ** TODO ** - Hugging Face transformers, OpenCLIP, and timm getting started snippets
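In the meantime, here is a minimal OpenCLIP zero-shot classification sketch. The model/pretrained tag pair below (`ViT-bigG-14` / `laion2b_s39b_b160k`) is an assumption based on OpenCLIP's naming scheme for this release and is not confirmed by the card, so verify it against the hub listing.

```python
# Hedged zero-shot classification sketch using OpenCLIP.
# Assumed names: model 'ViT-bigG-14', pretrained tag 'laion2b_s39b_b160k'.
import torch
from PIL import Image
import open_clip

model, _, preprocess = open_clip.create_model_and_transforms(
    "ViT-bigG-14", pretrained="laion2b_s39b_b160k"
)
tokenizer = open_clip.get_tokenizer("ViT-bigG-14")
model.eval()

# Any local image path works here; 'cat-dog-music.png' mirrors the widget example.
image = preprocess(Image.open("cat-dog-music.png")).unsqueeze(0)
text = tokenizer(["playing music", "playing sports"])

with torch.no_grad():
    image_features = model.encode_image(image)
    text_features = model.encode_text(text)
    # L2-normalize so the dot product below is a cosine similarity.
    image_features /= image_features.norm(dim=-1, keepdim=True)
    text_features /= text_features.norm(dim=-1, keepdim=True)
    probs = (100.0 * image_features @ text_features.T).softmax(dim=-1)

print(probs)  # probabilities over the candidate labels
```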
Angy309/swinv2-tiny-patch4-window8-256-Lego-v2-3ep
Angy309
2024-05-13T21:00:15Z
150
0
transformers
[ "transformers", "tensorboard", "safetensors", "swinv2", "image-classification", "generated_from_trainer", "dataset:imagefolder", "base_model:microsoft/swinv2-tiny-patch4-window8-256", "base_model:finetune:microsoft/swinv2-tiny-patch4-window8-256", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-classification
2024-05-13T20:55:46Z
--- license: apache-2.0 base_model: microsoft/swinv2-tiny-patch4-window8-256 tags: - generated_from_trainer datasets: - imagefolder metrics: - accuracy model-index: - name: swinv2-tiny-patch4-window8-256-Lego-v2-3ep results: - task: name: Image Classification type: image-classification dataset: name: imagefolder type: imagefolder config: default split: train args: default metrics: - name: Accuracy type: accuracy value: 0.9196428571428571 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # swinv2-tiny-patch4-window8-256-Lego-v2-3ep This model is a fine-tuned version of [microsoft/swinv2-tiny-patch4-window8-256](https://huggingface.co/microsoft/swinv2-tiny-patch4-window8-256) on the imagefolder dataset. It achieves the following results on the evaluation set: - Loss: 0.2129 - Accuracy: 0.9196 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 128 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 1.3415 | 1.0 | 49 | 0.6599 | 0.7857 | | 0.7005 | 2.0 | 98 | 0.3036 | 0.8929 | | 0.5615 | 3.0 | 147 | 0.2129 | 0.9196 | ### Framework versions - Transformers 4.40.2 - Pytorch 2.2.1+cu121 - Datasets 2.19.1 - Tokenizers 0.19.1
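The auto-generated card stops short of a usage snippet; below is a minimal inference sketch, assuming the checkpoint ships its image-processor config, as Trainer-pushed checkpoints normally do.

```python
# Minimal inference sketch for the fine-tuned SwinV2 Lego classifier.
from transformers import pipeline

classifier = pipeline(
    "image-classification",
    model="Angy309/swinv2-tiny-patch4-window8-256-Lego-v2-3ep",
)
# 'lego_brick.jpg' is a hypothetical local image path.
print(classifier("lego_brick.jpg"))
```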
zfan3/Q2TMC_round2_checkpoint-13155
zfan3
2024-05-13T20:59:45Z
104
1
transformers
[ "transformers", "safetensors", "esm", "token-classification", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2024-05-13T20:59:30Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
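The card above is an empty template; going only by the repo tags (ESM backbone, token classification), here is a hedged loading sketch. The label set and intended inputs are undocumented, so the protein sequence below is purely illustrative.

```python
# Hedged sketch inferred from the repo tags (ESM + token classification).
import torch
from transformers import AutoTokenizer, AutoModelForTokenClassification

repo = "zfan3/Q2TMC_round2_checkpoint-13155"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForTokenClassification.from_pretrained(repo)

sequence = "MKTAYIAKQRQISFVKSHFSRQLEERLGLIEVQ"  # hypothetical protein sequence
inputs = tokenizer(sequence, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
print(logits.argmax(dim=-1))  # one predicted class id per token
```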
JanJacobsen/llama3_8b_instruct_f16
JanJacobsen
2024-05-13T20:59:19Z
8
0
transformers
[ "transformers", "gguf", "llama", "text-generation-inference", "unsloth", "en", "base_model:meta-llama/Meta-Llama-3-8B-Instruct", "base_model:quantized:meta-llama/Meta-Llama-3-8B-Instruct", "license:apache-2.0", "endpoints_compatible", "region:us", "conversational" ]
null
2024-05-13T20:47:47Z
--- language: - en license: apache-2.0 tags: - text-generation-inference - transformers - unsloth - llama - gguf base_model: meta-llama/Meta-Llama-3-8B-Instruct --- # Uploaded model - **Developed by:** JanJacobsen - **License:** apache-2.0 - **Finetuned from model :** meta-llama/Meta-Llama-3-8B-Instruct This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
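Since this repo ships GGUF weights, llama-cpp-python can load them straight from the Hub. A hedged sketch follows; the exact .gguf filename is not stated in the card, so a wildcard pattern is assumed.

```python
# Hedged sketch: pull the GGUF file from the Hub with llama-cpp-python.
# The "*.gguf" filename pattern is an assumption about this repo's contents.
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="JanJacobsen/llama3_8b_instruct_f16",
    filename="*.gguf",
    n_ctx=2048,
)
out = llm("Q: What does Unsloth speed up?\nA:", max_tokens=64)
print(out["choices"][0]["text"])
```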
ehristoforu/llama3-12b-instruct
ehristoforu
2024-05-13T20:58:43Z
4
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "mergekit", "merge", "conversational", "base_model:OpenBuddy/openbuddy-llama3-8b-v21.1-8k", "base_model:merge:OpenBuddy/openbuddy-llama3-8b-v21.1-8k", "base_model:gradientai/Llama-3-8B-Instruct-Gradient-4194k", "base_model:merge:gradientai/Llama-3-8B-Instruct-Gradient-4194k", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-05-13T20:54:45Z
--- base_model: - gradientai/Llama-3-8B-Instruct-Gradient-4194k - OpenBuddy/openbuddy-llama3-8b-v21.1-8k library_name: transformers tags: - mergekit - merge --- # merge This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit). ## Merge Details ### Merge Method This model was merged using the passthrough merge method. ### Models Merged The following models were included in the merge: * [gradientai/Llama-3-8B-Instruct-Gradient-4194k](https://huggingface.co/gradientai/Llama-3-8B-Instruct-Gradient-4194k) * [OpenBuddy/openbuddy-llama3-8b-v21.1-8k](https://huggingface.co/OpenBuddy/openbuddy-llama3-8b-v21.1-8k) ### Configuration The following YAML configuration was used to produce this model: ```yaml slices: - sources: - model: gradientai/Llama-3-8B-Instruct-Gradient-4194k layer_range: [0, 24] - sources: - model: OpenBuddy/openbuddy-llama3-8b-v21.1-8k layer_range: [8, 32] merge_method: passthrough dtype: bfloat16 ```
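To reproduce the merge from the YAML above, the usual mergekit CLI invocation looks like this sketch (the output directory name is arbitrary):

```bash
# Save the configuration above as config.yaml, then run mergekit.
pip install mergekit
mergekit-yaml config.yaml ./llama3-12b-instruct --copy-tokenizer
```

Note that the passthrough method simply stacks the selected layer ranges - layers 0-23 of one 8B model followed by layers 8-31 of the other - which is how two 8B checkpoints yield the roughly 12B parameter count in the repo name.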
ethankasa/phi-2-trivia
ethankasa
2024-05-13T20:57:27Z
0
0
peft
[ "peft", "safetensors", "llama-factory", "lora", "generated_from_trainer", "base_model:microsoft/phi-2", "base_model:adapter:microsoft/phi-2", "license:other", "region:us" ]
null
2024-05-13T19:38:28Z
--- license: other library_name: peft tags: - llama-factory - lora - generated_from_trainer base_model: microsoft/phi-2 model-index: - name: glaiveNew results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # glaiveNew This model is a fine-tuned version of [microsoft/phi-2](https://huggingface.co/microsoft/phi-2) on the trivia dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 2 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 8 - total_train_batch_size: 16 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - num_epochs: 3.0 ### Training results ### Framework versions - PEFT 0.10.0 - Transformers 4.40.2 - Pytorch 2.3.0 - Datasets 2.19.1 - Tokenizers 0.19.1
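Because this repo holds a LoRA adapter rather than full weights, loading goes through PEFT on top of the microsoft/phi-2 base. A minimal sketch follows; the prompt template is a guess, since the trivia dataset's format isn't documented.

```python
# Minimal PEFT loading sketch: base microsoft/phi-2 plus this LoRA adapter.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base = AutoModelForCausalLM.from_pretrained("microsoft/phi-2", torch_dtype="auto")
model = PeftModel.from_pretrained(base, "ethankasa/phi-2-trivia")
tokenizer = AutoTokenizer.from_pretrained("microsoft/phi-2")

# Hypothetical prompt format; adjust to whatever template was used in training.
inputs = tokenizer("Q: Which planet is known as the Red Planet?\nA:", return_tensors="pt")
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=20)[0]))
```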
Amna100/fold_5
Amna100
2024-05-13T20:50:58Z
106
0
transformers
[ "transformers", "safetensors", "deberta", "token-classification", "generated_from_trainer", "base_model:Amna100/PreTraining-MLM", "base_model:finetune:Amna100/PreTraining-MLM", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2024-05-13T20:02:56Z
--- license: mit base_model: Amna100/PreTraining-MLM tags: - generated_from_trainer metrics: - precision - recall - f1 - accuracy model-index: - name: fold_5 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> [<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="200" height="32"/>](https://wandb.ai/amnasaeed100/FineTuning-ADE-Repeatedfold/runs/lvieenf2) [<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="200" height="32"/>](https://wandb.ai/amnasaeed100/FineTuning-ADE-Repeatedfold/runs/fgis28rc) [<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="200" height="32"/>](https://wandb.ai/amnasaeed100/FineTuning-ADE-Repeatedfold/runs/9tw0vsla) [<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="200" height="32"/>](https://wandb.ai/amnasaeed100/FineTuning-ADE-Repeatedfold/runs/ccjl3n87) [<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="200" height="32"/>](https://wandb.ai/amnasaeed100/FineTuning-ADE-Repeatedfold/runs/geyuezlx) [<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="200" height="32"/>](https://wandb.ai/amnasaeed100/FineTuning-ADE-Repeatedfold/runs/sv9tcfx8) # fold_5 This model is a fine-tuned version of [Amna100/PreTraining-MLM](https://huggingface.co/Amna100/PreTraining-MLM) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.0105 - Precision: 0.7304 - Recall: 0.6036 - F1: 0.6610 - Accuracy: 0.9993 - Roc Auc: 0.9957 - Pr Auc: 0.9999 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 5 - eval_batch_size: 5 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 10 ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | Roc Auc | Pr Auc | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|:-------:|:------:| | 0.0252 | 1.0 | 632 | 0.0112 | 0.5890 | 0.5570 | 0.5726 | 0.9991 | 0.9975 | 0.9999 | | 0.0105 | 2.0 | 1264 | 0.0105 | 0.7304 | 0.6036 | 0.6610 | 0.9993 | 0.9957 | 0.9999 | | 0.0061 | 3.0 | 1896 | 0.0107 | 0.6488 | 0.7228 | 0.6838 | 0.9993 | 0.9956 | 0.9999 | | 0.0027 | 4.0 | 2528 | 0.0114 | 0.7023 | 0.6969 | 0.6996 | 0.9993 | 0.9967 | 0.9999 | | 0.0012 | 5.0 | 3160 | 0.0136 | 0.7841 | 0.6399 | 0.7047 | 0.9994 | 0.9945 | 0.9999 | ### Framework versions - Transformers 4.41.0.dev0 - Pytorch 2.2.1+cu121 - Datasets 2.19.1 - Tokenizers 0.19.1
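For inference, the standard token-classification pipeline applies; a sketch is below. The entity labels come from the checkpoint's config, and the example sentence assumes the adverse-drug-event focus suggested by the linked W&B project name, which the card itself doesn't confirm.

```python
# Inference sketch; label names are read from the checkpoint's config.
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="Amna100/fold_5",
    aggregation_strategy="simple",  # merge subword pieces into whole spans
)
print(ner("The patient developed severe nausea after starting amoxicillin."))
```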
snigdhachandan/ganeet-V4
snigdhachandan
2024-05-13T20:50:45Z
4
0
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "merge", "mergekit", "lazymergekit", "WizardLMTeam/WizardMath-7B-V1.1", "microsoft/rho-math-7b-interpreter-v0.1", "meta-math/MetaMath-Mistral-7B", "base_model:WizardLMTeam/WizardMath-7B-V1.1", "base_model:merge:WizardLMTeam/WizardMath-7B-V1.1", "base_model:meta-math/MetaMath-Mistral-7B", "base_model:merge:meta-math/MetaMath-Mistral-7B", "base_model:microsoft/rho-math-7b-interpreter-v0.1", "base_model:merge:microsoft/rho-math-7b-interpreter-v0.1", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-05-13T19:47:58Z
--- tags: - merge - mergekit - lazymergekit - WizardLMTeam/WizardMath-7B-V1.1 - microsoft/rho-math-7b-interpreter-v0.1 - meta-math/MetaMath-Mistral-7B base_model: - WizardLMTeam/WizardMath-7B-V1.1 - microsoft/rho-math-7b-interpreter-v0.1 - meta-math/MetaMath-Mistral-7B --- # ganeet-V4 ganeet-V4 is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing): * [WizardLMTeam/WizardMath-7B-V1.1](https://huggingface.co/WizardLMTeam/WizardMath-7B-V1.1) * [microsoft/rho-math-7b-interpreter-v0.1](https://huggingface.co/microsoft/rho-math-7b-interpreter-v0.1) * [meta-math/MetaMath-Mistral-7B](https://huggingface.co/meta-math/MetaMath-Mistral-7B) ## 🧩 Configuration ```yaml models: - model: WizardLMTeam/WizardMath-7B-V1.1 parameters: density: 0.5 # fraction of weights in differences from the base model to retain weight: # weight gradient - filter: mlp value: 0.5 - value: 0 - model: upaya07/Arithmo2-Mistral-7B - model: microsoft/rho-math-7b-interpreter-v0.1 parameters: density: 0.5 weight: 0.5 - model: meta-math/MetaMath-Mistral-7B parameters: density: 0.5 weight: 0.5 merge_method: ties base_model: upaya07/Arithmo2-Mistral-7B parameters: normalize: true int8_mask: true dtype: float16 ``` ## 💻 Usage ```python !pip install -qU transformers accelerate from transformers import AutoTokenizer import transformers import torch model = "snigdhachandan/ganeet-V4" messages = [{"role": "user", "content": "What is a large language model?"}] tokenizer = AutoTokenizer.from_pretrained(model) prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True) pipeline = transformers.pipeline( "text-generation", model=model, torch_dtype=torch.float16, device_map="auto", ) outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95) print(outputs[0]["generated_text"]) ```
statking/zephyr-7b-sft-full
statking
2024-05-13T20:47:11Z
4
0
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "alignment-handbook", "trl", "sft", "generated_from_trainer", "conversational", "dataset:HuggingFaceH4/ultrachat_200k", "base_model:mistralai/Mistral-7B-v0.1", "base_model:finetune:mistralai/Mistral-7B-v0.1", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-05-13T16:36:37Z
--- license: apache-2.0 base_model: mistralai/Mistral-7B-v0.1 tags: - alignment-handbook - trl - sft - generated_from_trainer - trl - sft - generated_from_trainer datasets: - HuggingFaceH4/ultrachat_200k model-index: - name: zephyr-7b-sft-full results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # zephyr-7b-sft-full This model is a fine-tuned version of [mistralai/Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1) on the HuggingFaceH4/ultrachat_200k dataset. It achieves the following results on the evaluation set: - Loss: 0.9366 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - distributed_type: multi-GPU - num_devices: 8 - total_train_batch_size: 128 - total_eval_batch_size: 64 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 1 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 0.9201 | 1.0 | 1090 | 0.9366 | ### Framework versions - Transformers 4.40.2 - Pytorch 2.1.0a0+32f93b1 - Datasets 2.19.1 - Tokenizers 0.19.1
bpalacios/Phi-3-Finetuned
bpalacios
2024-05-13T20:46:17Z
106
0
transformers
[ "transformers", "safetensors", "phi3", "text-generation", "trl", "sft", "conversational", "custom_code", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-05-09T14:08:46Z
---
library_name: transformers
tags:
- trl
- sft
---

# Model Card for Model ID

<!-- Provide a quick summary of what the model is/does. -->

```python
# Option A: load the base Phi-3 model and attach this repo's PEFT adapter.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline
from peft import PeftConfig, PeftModel

torch.random.manual_seed(0)

config = PeftConfig.from_pretrained("bpalacios/Phi-3-Finetuned")  # adapter config
model = AutoModelForCausalLM.from_pretrained(
    "microsoft/Phi-3-mini-4k-instruct",
    device_map="auto",
    torch_dtype="auto",
    trust_remote_code=True)
tokenizer = AutoTokenizer.from_pretrained("bpalacios/Phi-3-Finetuned")
model = PeftModel.from_pretrained(model, "bpalacios/Phi-3-Finetuned")
```

```python
# Option B: load directly from this repo.
model = AutoModelForCausalLM.from_pretrained(
    "bpalacios/Phi-3-Finetuned",
    device_map="auto",
    torch_dtype="auto",
    trust_remote_code=True)
tokenizer = AutoTokenizer.from_pretrained("bpalacios/Phi-3-Finetuned")
```

```python
# Build a text-generation pipeline around the loaded model.
pipe = pipeline(
    "text-generation",
    model=model,
    tokenizer=tokenizer,
)

generation_args = {
    "max_new_tokens": 500,
    "return_full_text": False,
    "temperature": 0.3,  # note: ignored while do_sample is False (greedy decoding)
    "do_sample": False,
}
```

(An example invocation of `pipe` follows this card.)

## Model Details

### Model Description

<!-- Provide a longer summary of what this model is. -->

This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.

- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]

### Model Sources [optional]

<!-- Provide the basic links for the model. -->

- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]

## Uses

<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->

### Direct Use

<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->

[More Information Needed]

### Downstream Use [optional]

<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->

[More Information Needed]

### Out-of-Scope Use

<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->

[More Information Needed]

## Bias, Risks, and Limitations

<!-- This section is meant to convey both technical and sociotechnical limitations. -->

[More Information Needed]

### Recommendations

<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->

Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.

## How to Get Started with the Model

Use the code below to get started with the model.

[More Information Needed]

## Training Details

### Training Data

<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->

[More Information Needed]

### Training Procedure

<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->

#### Preprocessing [optional]

[More Information Needed]

#### Training Hyperparameters

- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->

#### Speeds, Sizes, Times [optional]

<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->

[More Information Needed]

## Evaluation

<!-- This section describes the evaluation protocols and provides the results. -->

### Testing Data, Factors & Metrics

#### Testing Data

<!-- This should link to a Dataset Card if possible. -->

[More Information Needed]

#### Factors

<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->

[More Information Needed]

#### Metrics

<!-- These are the evaluation metrics being used, ideally with a description of why. -->

[More Information Needed]

### Results

[More Information Needed]

#### Summary

## Model Examination [optional]

<!-- Relevant interpretability work for the model goes here -->

[More Information Needed]

## Environmental Impact

<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->

Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).

- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]

## Technical Specifications [optional]

### Model Architecture and Objective

[More Information Needed]

### Compute Infrastructure

[More Information Needed]

#### Hardware

[More Information Needed]

#### Software

[More Information Needed]

## Citation [optional]

<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->

**BibTeX:**

[More Information Needed]

**APA:**

[More Information Needed]

## Glossary [optional]

<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->

[More Information Needed]

## More Information [optional]

[More Information Needed]

## Model Card Authors [optional]

[More Information Needed]

## Model Card Contact

[More Information Needed]
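The snippets in this card construct `pipe` and `generation_args` but never call them; here is a hedged completion, relying on the pipeline's built-in chat templating for Phi-3.

```python
# Hedged completion of the card's snippets: actually invoking the pipeline.
messages = [
    {"role": "user", "content": "Summarize what fine-tuning changes in a model."},
]
output = pipe(messages, **generation_args)
# return_full_text=False means only the newly generated reply is returned.
print(output[0]["generated_text"])
```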
MSBATeam4/Team4_ADR_Detector
MSBATeam4
2024-05-13T20:41:44Z
108
0
transformers
[ "transformers", "safetensors", "roberta", "text-classification", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2024-05-13T20:40:52Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
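Another empty template; judging only from the tags (RoBERTa, text classification) and the "ADR" in the repo name - which presumably stands for adverse drug reaction, though the card doesn't say - here is a hedged inference sketch:

```python
# Hedged sketch based on the repo tags; the label meanings are undocumented.
from transformers import pipeline

detector = pipeline("text-classification", model="MSBATeam4/Team4_ADR_Detector")
print(detector("Started the new medication and got a rash within two days."))
```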
YCHuang2112/model
YCHuang2112
2024-05-13T20:41:18Z
1
0
diffusers
[ "diffusers", "tensorboard", "safetensors", "stable-diffusion", "stable-diffusion-diffusers", "text-to-image", "controlnet", "diffusers-training", "base_model:runwayml/stable-diffusion-v1-5", "base_model:adapter:runwayml/stable-diffusion-v1-5", "license:creativeml-openrail-m", "region:us" ]
text-to-image
2024-05-13T15:11:03Z
---
license: creativeml-openrail-m
library_name: diffusers
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- controlnet
- diffusers-training
base_model: runwayml/stable-diffusion-v1-5
inference: true
---

<!-- This model card has been generated automatically according to the information the training script had access to. You should probably proofread and complete it, then remove this comment. -->

# controlnet-YCHuang2112/model

These are ControlNet weights trained on runwayml/stable-diffusion-v1-5 with a new type of conditioning. You can find some example images below.

prompt: red circle with blue background
![images_0](./images_0.png)
prompt: cyan circle with brown floral background
![images_1](./images_1.png)

## Intended uses & limitations

#### How to use

```python
# TODO: add an example code snippet for running this diffusion pipeline
```

(A hedged usage sketch is provided after this card.)

#### Limitations and bias

[TODO: provide examples of latent issues and potential remediations]

## Training details

[TODO: describe the data used to train the model]
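Until the TODO above is filled in, here is a hedged diffusers sketch. The card doesn't say what the "new type of conditioning" is, so the conditioning image below is a placeholder that must match whatever signal the ControlNet was trained on.

```python
# Hedged ControlNet inference sketch for these weights on SD 1.5.
import torch
from diffusers import ControlNetModel, StableDiffusionControlNetPipeline
from diffusers.utils import load_image

controlnet = ControlNetModel.from_pretrained("YCHuang2112/model", torch_dtype=torch.float16)
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", controlnet=controlnet, torch_dtype=torch.float16
).to("cuda")

cond = load_image("conditioning.png")  # placeholder conditioning image
image = pipe("red circle with blue background", image=cond, num_inference_steps=30).images[0]
image.save("out.png")
```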
stablediffusionapi/epicrealism-xl-v7
stablediffusionapi
2024-05-13T20:38:54Z
30
0
diffusers
[ "diffusers", "modelslab.com", "stable-diffusion-api", "text-to-image", "ultra-realistic", "license:creativeml-openrail-m", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionXLPipeline", "region:us" ]
text-to-image
2024-05-13T20:35:43Z
---
license: creativeml-openrail-m
tags:
- modelslab.com
- stable-diffusion-api
- text-to-image
- ultra-realistic
pinned: true
---

# API Inference

![generated from modelslab.com](https://cdn2.stablediffusionapi.com/generations/bf190b5a-fe19-437c-ba05-82f29cb1f7ad-0.png)

## Get API Key

Get an API key from [ModelsLab API](http://modelslab.com); no payment needed. Replace the key in the code below and set **model_id** to "epicrealism-xl-v7".

Coding in PHP/Node/Java etc? Have a look at the docs for more code examples: [View docs](https://docs.modelslab.com)

Try the model for free: [Generate Images](https://modelslab.com/models/epicrealism-xl-v7)

Model link: [View model](https://modelslab.com/models/epicrealism-xl-v7)

View all models: [View Models](https://modelslab.com/models)

```python
import requests
import json

url = "https://modelslab.com/api/v6/images/text2img"

payload = json.dumps({
    "key": "your_api_key",
    "model_id": "epicrealism-xl-v7",
    "prompt": "ultra realistic close up portrait ((beautiful pale cyberpunk female with heavy black eyeliner)), blue eyes, shaved side haircut, hyper detail, cinematic lighting, magic neon, dark red city, Canon EOS R3, nikon, f/1.4, ISO 200, 1/160s, 8K, RAW, unedited, symmetrical balance, in-frame, 8K",
    "negative_prompt": "painting, extra fingers, mutated hands, poorly drawn hands, poorly drawn face, deformed, ugly, blurry, bad anatomy, bad proportions, extra limbs, cloned face, skinny, glitchy, double torso, extra arms, extra hands, mangled fingers, missing lips, ugly face, distorted face, extra legs, anime",
    "width": "512",
    "height": "512",
    "samples": "1",
    "num_inference_steps": "30",
    "safety_checker": "no",
    "enhance_prompt": "yes",
    "seed": None,
    "guidance_scale": 7.5,
    "multi_lingual": "no",
    "panorama": "no",
    "self_attention": "no",
    "upscale": "no",
    "embeddings": "embeddings_model_id",
    "lora": "lora_model_id",
    "webhook": None,
    "track_id": None
})

headers = {
    'Content-Type': 'application/json'
}

response = requests.request("POST", url, headers=headers, data=payload)
print(response.text)
```

> Use this coupon code to get 25% off **DMGG0RBN**
aiqtech/Meta-Llama-3-70B-Instruct-Q4_K_M-GGUF
aiqtech
2024-05-13T20:38:27Z
17
0
null
[ "gguf", "facebook", "meta", "pytorch", "llama", "llama-3", "llama-cpp", "gguf-my-repo", "text-generation", "en", "license:llama3", "endpoints_compatible", "region:us", "conversational" ]
text-generation
2024-05-13T20:32:03Z
--- language: - en license: llama3 tags: - facebook - meta - pytorch - llama - llama-3 - llama-cpp - gguf-my-repo pipeline_tag: text-generation extra_gated_prompt: "### META LLAMA 3 COMMUNITY LICENSE AGREEMENT\nMeta Llama 3 Version\ \ Release Date: April 18, 2024\n\"Agreement\" means the terms and conditions for\ \ use, reproduction, distribution and modification of the Llama Materials set forth\ \ herein.\n\"Documentation\" means the specifications, manuals and documentation\ \ accompanying Meta Llama 3 distributed by Meta at https://llama.meta.com/get-started/.\n\ \"Licensee\" or \"you\" means you, or your employer or any other person or entity\ \ (if you are entering into this Agreement on such person or entity’s behalf), of\ \ the age required under applicable laws, rules or regulations to provide legal\ \ consent and that has legal authority to bind your employer or such other person\ \ or entity if you are entering in this Agreement on their behalf.\n\"Meta Llama\ \ 3\" means the foundational large language models and software and algorithms,\ \ including machine-learning model code, trained model weights, inference-enabling\ \ code, training-enabling code, fine-tuning enabling code and other elements of\ \ the foregoing distributed by Meta at https://llama.meta.com/llama-downloads.\n\ \"Llama Materials\" means, collectively, Meta’s proprietary Meta Llama 3 and Documentation\ \ (and any portion thereof) made available under this Agreement.\n\"Meta\" or \"\ we\" means Meta Platforms Ireland Limited (if you are located in or, if you are\ \ an entity, your principal place of business is in the EEA or Switzerland) and\ \ Meta Platforms, Inc. (if you are located outside of the EEA or Switzerland).\n\ \ \n1. License Rights and Redistribution.\na. Grant of Rights. You are granted\ \ a non-exclusive, worldwide, non-transferable and royalty-free limited license\ \ under Meta’s intellectual property or other rights owned by Meta embodied in the\ \ Llama Materials to use, reproduce, distribute, copy, create derivative works of,\ \ and make modifications to the Llama Materials.\nb. Redistribution and Use.\ni.\ \ If you distribute or make available the Llama Materials (or any derivative works\ \ thereof), or a product or service that uses any of them, including another AI\ \ model, you shall (A) provide a copy of this Agreement with any such Llama Materials;\ \ and (B) prominently display “Built with Meta Llama 3” on a related website, user\ \ interface, blogpost, about page, or product documentation. If you use the Llama\ \ Materials to create, train, fine tune, or otherwise improve an AI model, which\ \ is distributed or made available, you shall also include “Llama 3” at the beginning\ \ of any such AI model name.\nii. If you receive Llama Materials, or any derivative\ \ works thereof, from a Licensee as part of an integrated end user product, then\ \ Section 2 of this Agreement will not apply to you.\niii. You must retain in all\ \ copies of the Llama Materials that you distribute the following attribution notice\ \ within a “Notice” text file distributed as a part of such copies: “Meta Llama\ \ 3 is licensed under the Meta Llama 3 Community License, Copyright © Meta Platforms,\ \ Inc. All Rights Reserved.”\niv. 
Your use of the Llama Materials must comply with\ \ applicable laws and regulations (including trade compliance laws and regulations)\ \ and adhere to the Acceptable Use Policy for the Llama Materials (available at\ \ https://llama.meta.com/llama3/use-policy), which is hereby incorporated by reference\ \ into this Agreement.\nv. You will not use the Llama Materials or any output or\ \ results of the Llama Materials to improve any other large language model (excluding\ \ Meta Llama 3 or derivative works thereof).\n2. Additional Commercial Terms. If,\ \ on the Meta Llama 3 version release date, the monthly active users of the products\ \ or services made available by or for Licensee, or Licensee’s affiliates, is greater\ \ than 700 million monthly active users in the preceding calendar month, you must\ \ request a license from Meta, which Meta may grant to you in its sole discretion,\ \ and you are not authorized to exercise any of the rights under this Agreement\ \ unless or until Meta otherwise expressly grants you such rights.\n3. Disclaimer\ \ of Warranty. UNLESS REQUIRED BY APPLICABLE LAW, THE LLAMA MATERIALS AND ANY OUTPUT\ \ AND RESULTS THEREFROM ARE PROVIDED ON AN “AS IS” BASIS, WITHOUT WARRANTIES OF\ \ ANY KIND, AND META DISCLAIMS ALL WARRANTIES OF ANY KIND, BOTH EXPRESS AND IMPLIED,\ \ INCLUDING, WITHOUT LIMITATION, ANY WARRANTIES OF TITLE, NON-INFRINGEMENT, MERCHANTABILITY,\ \ OR FITNESS FOR A PARTICULAR PURPOSE. YOU ARE SOLELY RESPONSIBLE FOR DETERMINING\ \ THE APPROPRIATENESS OF USING OR REDISTRIBUTING THE LLAMA MATERIALS AND ASSUME\ \ ANY RISKS ASSOCIATED WITH YOUR USE OF THE LLAMA MATERIALS AND ANY OUTPUT AND RESULTS.\n\ 4. Limitation of Liability. IN NO EVENT WILL META OR ITS AFFILIATES BE LIABLE UNDER\ \ ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, TORT, NEGLIGENCE, PRODUCTS LIABILITY,\ \ OR OTHERWISE, ARISING OUT OF THIS AGREEMENT, FOR ANY LOST PROFITS OR ANY INDIRECT,\ \ SPECIAL, CONSEQUENTIAL, INCIDENTAL, EXEMPLARY OR PUNITIVE DAMAGES, EVEN IF META\ \ OR ITS AFFILIATES HAVE BEEN ADVISED OF THE POSSIBILITY OF ANY OF THE FOREGOING.\n\ 5. Intellectual Property.\na. No trademark licenses are granted under this Agreement,\ \ and in connection with the Llama Materials, neither Meta nor Licensee may use\ \ any name or mark owned by or associated with the other or any of its affiliates,\ \ except as required for reasonable and customary use in describing and redistributing\ \ the Llama Materials or as set forth in this Section 5(a). Meta hereby grants you\ \ a license to use “Llama 3” (the “Mark”) solely as required to comply with the\ \ last sentence of Section 1.b.i. You will comply with Meta’s brand guidelines (currently\ \ accessible at https://about.meta.com/brand/resources/meta/company-brand/ ). All\ \ goodwill arising out of your use of the Mark will inure to the benefit of Meta.\n\ b. Subject to Meta’s ownership of Llama Materials and derivatives made by or for\ \ Meta, with respect to any derivative works and modifications of the Llama Materials\ \ that are made by you, as between you and Meta, you are and will be the owner of\ \ such derivative works and modifications.\nc. 
If you institute litigation or other\ \ proceedings against Meta or any entity (including a cross-claim or counterclaim\ \ in a lawsuit) alleging that the Llama Materials or Meta Llama 3 outputs or results,\ \ or any portion of any of the foregoing, constitutes infringement of intellectual\ \ property or other rights owned or licensable by you, then any licenses granted\ \ to you under this Agreement shall terminate as of the date such litigation or\ \ claim is filed or instituted. You will indemnify and hold harmless Meta from and\ \ against any claim by any third party arising out of or related to your use or\ \ distribution of the Llama Materials.\n6. Term and Termination. The term of this\ \ Agreement will commence upon your acceptance of this Agreement or access to the\ \ Llama Materials and will continue in full force and effect until terminated in\ \ accordance with the terms and conditions herein. Meta may terminate this Agreement\ \ if you are in breach of any term or condition of this Agreement. Upon termination\ \ of this Agreement, you shall delete and cease use of the Llama Materials. Sections\ \ 3, 4 and 7 shall survive the termination of this Agreement.\n7. Governing Law\ \ and Jurisdiction. This Agreement will be governed and construed under the laws\ \ of the State of California without regard to choice of law principles, and the\ \ UN Convention on Contracts for the International Sale of Goods does not apply\ \ to this Agreement. The courts of California shall have exclusive jurisdiction\ \ of any dispute arising out of this Agreement.\n### Meta Llama 3 Acceptable Use\ \ Policy\nMeta is committed to promoting safe and fair use of its tools and features,\ \ including Meta Llama 3. If you access or use Meta Llama 3, you agree to this Acceptable\ \ Use Policy (“Policy”). The most recent copy of this policy can be found at [https://llama.meta.com/llama3/use-policy](https://llama.meta.com/llama3/use-policy)\n\ #### Prohibited Uses\nWe want everyone to use Meta Llama 3 safely and responsibly.\ \ You agree you will not use, or allow others to use, Meta Llama 3 to: 1. Violate\ \ the law or others’ rights, including to:\n 1. Engage in, promote, generate,\ \ contribute to, encourage, plan, incite, or further illegal or unlawful activity\ \ or content, such as:\n 1. Violence or terrorism\n 2. Exploitation\ \ or harm to children, including the solicitation, creation, acquisition, or dissemination\ \ of child exploitative content or failure to report Child Sexual Abuse Material\n\ \ 3. Human trafficking, exploitation, and sexual violence\n 4. The\ \ illegal distribution of information or materials to minors, including obscene\ \ materials, or failure to employ legally required age-gating in connection with\ \ such information or materials.\n 5. Sexual solicitation\n 6. Any\ \ other criminal activity\n 2. Engage in, promote, incite, or facilitate the\ \ harassment, abuse, threatening, or bullying of individuals or groups of individuals\n\ \ 3. Engage in, promote, incite, or facilitate discrimination or other unlawful\ \ or harmful conduct in the provision of employment, employment benefits, credit,\ \ housing, other economic benefits, or other essential goods and services\n 4.\ \ Engage in the unauthorized or unlicensed practice of any profession including,\ \ but not limited to, financial, legal, medical/health, or related professional\ \ practices\n 5. 
Collect, process, disclose, generate, or infer health, demographic,\ \ or other sensitive personal or private information about individuals without rights\ \ and consents required by applicable laws\n 6. Engage in or facilitate any action\ \ or generate any content that infringes, misappropriates, or otherwise violates\ \ any third-party rights, including the outputs or results of any products or services\ \ using the Llama Materials\n 7. Create, generate, or facilitate the creation\ \ of malicious code, malware, computer viruses or do anything else that could disable,\ \ overburden, interfere with or impair the proper working, integrity, operation\ \ or appearance of a website or computer system\n2. Engage in, promote, incite,\ \ facilitate, or assist in the planning or development of activities that present\ \ a risk of death or bodily harm to individuals, including use of Meta Llama 3 related\ \ to the following:\n 1. Military, warfare, nuclear industries or applications,\ \ espionage, use for materials or activities that are subject to the International\ \ Traffic Arms Regulations (ITAR) maintained by the United States Department of\ \ State\n 2. Guns and illegal weapons (including weapon development)\n 3.\ \ Illegal drugs and regulated/controlled substances\n 4. Operation of critical\ \ infrastructure, transportation technologies, or heavy machinery\n 5. Self-harm\ \ or harm to others, including suicide, cutting, and eating disorders\n 6. Any\ \ content intended to incite or promote violence, abuse, or any infliction of bodily\ \ harm to an individual\n3. Intentionally deceive or mislead others, including use\ \ of Meta Llama 3 related to the following:\n 1. Generating, promoting, or furthering\ \ fraud or the creation or promotion of disinformation\n 2. Generating, promoting,\ \ or furthering defamatory content, including the creation of defamatory statements,\ \ images, or other content\n 3. Generating, promoting, or further distributing\ \ spam\n 4. Impersonating another individual without consent, authorization,\ \ or legal right\n 5. Representing that the use of Meta Llama 3 or outputs are\ \ human-generated\n 6. Generating or facilitating false online engagement, including\ \ fake reviews and other means of fake online engagement\n4. Fail to appropriately\ \ disclose to end users any known dangers of your AI system\nPlease report any violation\ \ of this Policy, software “bug,” or other problems that could lead to a violation\ \ of this Policy through one of the following means:\n * Reporting issues with\ \ the model: [https://github.com/meta-llama/llama3](https://github.com/meta-llama/llama3)\n\ \ * Reporting risky content generated by the model:\n developers.facebook.com/llama_output_feedback\n\ \ * Reporting bugs and security concerns: facebook.com/whitehat/info\n * Reporting\ \ violations of the Acceptable Use Policy or unlicensed uses of Meta Llama 3: LlamaUseReport@meta.com" extra_gated_fields: First Name: text Last Name: text Date of birth: date_picker Country: country Affiliation: text geo: ip_location ? By clicking Submit below I accept the terms of the license and acknowledge that the information I provide will be collected stored processed and shared in accordance with the Meta Privacy Policy : checkbox extra_gated_description: The information you provide will be collected, stored, processed and shared in accordance with the [Meta Privacy Policy](https://www.facebook.com/privacy/policy/). 
extra_gated_button_content: Submit widget: - example_title: Winter holidays messages: - role: system content: You are a helpful and honest assistant. Please, respond concisely and truthfully. - role: user content: Can you recommend a good destination for Winter holidays? - example_title: Programming assistant messages: - role: system content: You are a helpful and honest code and programming assistant. Please, respond concisely and truthfully. - role: user content: Write a function that computes the nth fibonacci number. inference: parameters: max_new_tokens: 300 stop: - <|end_of_text|> - <|eot_id|> --- # aiqtech/Meta-Llama-3-70B-Instruct-Q4_K_M-GGUF This model was converted to GGUF format from [`meta-llama/Meta-Llama-3-70B-Instruct`](https://huggingface.co/meta-llama/Meta-Llama-3-70B-Instruct) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space. Refer to the [original model card](https://huggingface.co/meta-llama/Meta-Llama-3-70B-Instruct) for more details on the model. ## Use with llama.cpp Install llama.cpp through brew. ```bash brew install ggerganov/ggerganov/llama.cpp ``` Invoke the llama.cpp server or the CLI. CLI: ```bash llama-cli --hf-repo aiqtech/Meta-Llama-3-70B-Instruct-Q4_K_M-GGUF --model meta-llama-3-70b-instruct.Q4_K_M.gguf -p "The meaning to life and the universe is" ``` Server: ```bash llama-server --hf-repo aiqtech/Meta-Llama-3-70B-Instruct-Q4_K_M-GGUF --model meta-llama-3-70b-instruct.Q4_K_M.gguf -c 2048 ``` Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well. ``` git clone https://github.com/ggerganov/llama.cpp && cd llama.cpp && make && ./main -m meta-llama-3-70b-instruct.Q4_K_M.gguf -n 128 ```
austindavis/gpt2-lichess-uci-201302b
austindavis
2024-05-13T20:32:09Z
150
0
transformers
[ "transformers", "safetensors", "gpt2", "text-generation", "generated_from_trainer", "base_model:austindavis/gpt2-lichess-uci-2016-01_11", "base_model:finetune:austindavis/gpt2-lichess-uci-2016-01_11", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-05-11T19:46:45Z
--- tags: - generated_from_trainer base_model: austindavis/gpt2-lichess-uci-2016-01_11 model-index: - name: gpt2-lichess-uci-201302b results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # gpt2-lichess-uci-201302b This model is a fine-tuned version of [austindavis/gpt2-lichess-uci-2016-01_11](https://huggingface.co/austindavis/gpt2-lichess-uci-2016-01_11) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 1.0454 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.002 - train_batch_size: 20 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - num_epochs: 1 ### Training results ### Framework versions - Transformers 4.40.1 - Pytorch 2.3.0 - Datasets 2.19.1 - Tokenizers 0.19.1
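As a usage sketch - assuming, from the repo name and its base model, that this model continues space-separated UCI move strings, which the card itself doesn't document:

```python
# Hedged sketch: continue a chess game given a UCI move prefix.
# Space-separated UCI input is an assumption based on the repo name.
from transformers import pipeline

chess_lm = pipeline("text-generation", model="austindavis/gpt2-lichess-uci-201302b")
opening = "e2e4 e7e5 g1f3 b8c6"
print(chess_lm(opening, max_new_tokens=12)[0]["generated_text"])
```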
quangtqv/bi_encoder_tool_learning_14_5_2024
quangtqv
2024-05-13T20:25:59Z
4
0
sentence-transformers
[ "sentence-transformers", "safetensors", "bert", "feature-extraction", "sentence-similarity", "autotrain_compatible", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
sentence-similarity
2024-05-13T20:25:35Z
--- library_name: sentence-transformers pipeline_tag: sentence-similarity tags: - sentence-transformers - feature-extraction - sentence-similarity --- # quangtqv/bi_encoder_tool_learning_14_5_2024 This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search. <!--- Describe your model here --> ## Usage (Sentence-Transformers) Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed: ``` pip install -U sentence-transformers ``` Then you can use the model like this: ```python from sentence_transformers import SentenceTransformer sentences = ["This is an example sentence", "Each sentence is converted"] model = SentenceTransformer('quangtqv/bi_encoder_tool_learning_14_5_2024') embeddings = model.encode(sentences) print(embeddings) ``` ## Evaluation Results <!--- Describe how your model was evaluated --> For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=quangtqv/bi_encoder_tool_learning_14_5_2024) ## Full Model Architecture ``` SentenceTransformer( (0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: BertModel (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True}) (2): Normalize() ) ``` ## Citing & Authors <!--- Describe where people can find more information -->
joyson/llama3_ted_talks
joyson
2024-05-13T20:17:28Z
0
0
transformers
[ "transformers", "safetensors", "text-generation-inference", "unsloth", "llama", "trl", "en", "base_model:unsloth/llama-3-8b-bnb-4bit", "base_model:finetune:unsloth/llama-3-8b-bnb-4bit", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2024-05-13T20:17:17Z
--- language: - en license: apache-2.0 tags: - text-generation-inference - transformers - unsloth - llama - trl base_model: unsloth/llama-3-8b-bnb-4bit --- # Uploaded model - **Developed by:** joyson - **License:** apache-2.0 - **Finetuned from model :** unsloth/llama-3-8b-bnb-4bit This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
gabbybilka/ppo-Huggy
gabbybilka
2024-05-13T20:16:50Z
4
0
ml-agents
[ "ml-agents", "tensorboard", "onnx", "Huggy", "deep-reinforcement-learning", "reinforcement-learning", "ML-Agents-Huggy", "region:us" ]
reinforcement-learning
2024-05-13T20:16:11Z
--- library_name: ml-agents tags: - Huggy - deep-reinforcement-learning - reinforcement-learning - ML-Agents-Huggy --- # **ppo** Agent playing **Huggy** This is a trained model of a **ppo** agent playing **Huggy** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents). ## Usage (with ML-Agents) The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/ We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub: - A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction - A *longer tutorial* to understand how ML-Agents works: https://huggingface.co/learn/deep-rl-course/unit5/introduction ### Resume the training ```bash mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume ``` ### Watch your Agent play You can watch your agent **playing directly in your browser**: 1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity 2. Find your model_id: gabbybilka/ppo-Huggy 3. Select your *.nn or *.onnx file 4. Click on Watch the agent play 👀
ShenaoZhang/0.001_zephyr_5551_4iters_bs256_iter_3
ShenaoZhang
2024-05-13T20:14:13Z
7
0
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "alignment-handbook", "trl", "dpo", "generated_from_trainer", "conversational", "dataset:updated", "dataset:original", "base_model:ShenaoZhang/0.001_zephyr_5551_4iters_bs256_iter_2", "base_model:finetune:ShenaoZhang/0.001_zephyr_5551_4iters_bs256_iter_2", "license:mit", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-05-13T19:26:32Z
--- license: mit base_model: ShenaoZhang/0.001_zephyr_5551_4iters_bs256_iter_2 tags: - alignment-handbook - trl - dpo - generated_from_trainer datasets: - updated - original model-index: - name: 0.001_zephyr_5551_4iters_bs256_iter_3 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # 0.001_zephyr_5551_4iters_bs256_iter_3 This model is a fine-tuned version of [ShenaoZhang/0.001_zephyr_5551_4iters_bs256_iter_2](https://huggingface.co/ShenaoZhang/0.001_zephyr_5551_4iters_bs256_iter_2) on the updated and the original datasets. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-07 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - distributed_type: multi-GPU - num_devices: 8 - gradient_accumulation_steps: 4 - total_train_batch_size: 256 - total_eval_batch_size: 64 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 1 ### Training results ### Framework versions - Transformers 4.40.2 - Pytorch 2.1.2+cu121 - Datasets 2.14.6 - Tokenizers 0.19.1
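A hedged usage sketch for this DPO-aligned chat model, assuming the repo ships a zephyr-style chat template with its tokenizer (the base lineage suggests so, but the card does not say).

```python
# Hedged sketch: chat-template availability is an assumption (see above).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "ShenaoZhang/0.001_zephyr_5551_4iters_bs256_iter_3"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

messages = [{"role": "user", "content": "Explain DPO in one sentence."}]
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```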
snigdhachandan/ganeet-V5
snigdhachandan
2024-05-13T20:11:06Z
8
0
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "merge", "mergekit", "lazymergekit", "WizardLMTeam/WizardMath-7B-V1.1", "microsoft/rho-math-7b-interpreter-v0.1", "base_model:WizardLMTeam/WizardMath-7B-V1.1", "base_model:merge:WizardLMTeam/WizardMath-7B-V1.1", "base_model:microsoft/rho-math-7b-interpreter-v0.1", "base_model:merge:microsoft/rho-math-7b-interpreter-v0.1", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-05-13T19:57:06Z
--- tags: - merge - mergekit - lazymergekit - WizardLMTeam/WizardMath-7B-V1.1 - microsoft/rho-math-7b-interpreter-v0.1 base_model: - WizardLMTeam/WizardMath-7B-V1.1 - microsoft/rho-math-7b-interpreter-v0.1 --- # ganeet-V5 ganeet-V5 is a TIES merge of the following models, built with [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing) on top of the [upaya07/Arithmo2-Mistral-7B](https://huggingface.co/upaya07/Arithmo2-Mistral-7B) base model: * [WizardLMTeam/WizardMath-7B-V1.1](https://huggingface.co/WizardLMTeam/WizardMath-7B-V1.1) * [microsoft/rho-math-7b-interpreter-v0.1](https://huggingface.co/microsoft/rho-math-7b-interpreter-v0.1) ## 🧩 Configuration ```yaml models: - model: WizardLMTeam/WizardMath-7B-V1.1 parameters: density: 0.5 # fraction of weights in differences from the base model to retain weight: # weight gradient - filter: mlp value: 0.5 - value: 0 - model: upaya07/Arithmo2-Mistral-7B - model: microsoft/rho-math-7b-interpreter-v0.1 parameters: density: 0.5 weight: 0.5 merge_method: ties base_model: upaya07/Arithmo2-Mistral-7B parameters: normalize: true int8_mask: true dtype: float16 ``` ## 💻 Usage ```python !pip install -qU transformers accelerate from transformers import AutoTokenizer import transformers import torch model = "snigdhachandan/ganeet-V5" messages = [{"role": "user", "content": "What is a large language model?"}] tokenizer = AutoTokenizer.from_pretrained(model) prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True) pipeline = transformers.pipeline( "text-generation", model=model, torch_dtype=torch.float16, device_map="auto", ) outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95) print(outputs[0]["generated_text"]) ```
ulasfiliz954/rl_course_vizdoom_health_gathering_supreme
ulasfiliz954
2024-05-13T20:03:10Z
0
0
sample-factory
[ "sample-factory", "tensorboard", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2024-05-13T19:58:08Z
--- library_name: sample-factory tags: - deep-reinforcement-learning - reinforcement-learning - sample-factory model-index: - name: APPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: doom_health_gathering_supreme type: doom_health_gathering_supreme metrics: - type: mean_reward value: 11.11 +/- 4.10 name: mean_reward verified: false --- An **APPO** model trained on the **doom_health_gathering_supreme** environment. This model was trained using Sample-Factory 2.0: https://github.com/alex-petrenko/sample-factory. Documentation for how to use Sample-Factory can be found at https://www.samplefactory.dev/ ## Downloading the model After installing Sample-Factory, download the model with: ``` python -m sample_factory.huggingface.load_from_hub -r ulasfiliz954/rl_course_vizdoom_health_gathering_supreme ``` ## Using the model To run the model after download, use the `enjoy` script corresponding to this environment: ``` python -m sf_examples.vizdoom.enjoy_vizdoom --algo=APPO --env=doom_health_gathering_supreme --train_dir=./train_dir --experiment=rl_course_vizdoom_health_gathering_supreme ``` You can also upload models to the Hugging Face Hub using the same script with the `--push_to_hub` flag. See https://www.samplefactory.dev/10-huggingface/huggingface/ for more details. ## Training with this model To continue training with this model, use the `train` script corresponding to this environment: ``` python -m sf_examples.vizdoom.train_vizdoom --algo=APPO --env=doom_health_gathering_supreme --train_dir=./train_dir --experiment=rl_course_vizdoom_health_gathering_supreme --restart_behavior=resume --train_for_env_steps=10000000000 ``` Note that you may have to adjust `--train_for_env_steps` to a suitably high number, as the experiment will resume at the number of steps it concluded at.
kaith/yor_asr_awr
kaith
2024-05-13T19:59:06Z
1
0
peft
[ "peft", "safetensors", "arxiv:1910.09700", "base_model:openai/whisper-large-v3", "base_model:adapter:openai/whisper-large-v3", "region:us" ]
null
2024-05-13T19:54:26Z
--- library_name: peft base_model: openai/whisper-large-v3 --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ### Framework versions - PEFT 0.10.1.dev0
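The "How to Get Started" section above is empty; the following loading sketch is inferred from the metadata alone (a PEFT adapter for `openai/whisper-large-v3`). The ASR task, input audio format, and generation settings are assumptions.

```python
# Hedged sketch: adapter/base pairing comes from the repo metadata;
# everything about preprocessing and generation is an assumption.
from peft import PeftModel
from transformers import WhisperForConditionalGeneration, WhisperProcessor

base_id = "openai/whisper-large-v3"
processor = WhisperProcessor.from_pretrained(base_id)
base = WhisperForConditionalGeneration.from_pretrained(base_id)
model = PeftModel.from_pretrained(base, "kaith/yor_asr_awr")  # attach the adapter

# `audio` should be a 16 kHz mono waveform (e.g. loaded via librosa/torchaudio):
# inputs = processor(audio, sampling_rate=16000, return_tensors="pt")
# ids = model.generate(input_features=inputs.input_features)
# print(processor.batch_decode(ids, skip_special_tokens=True))
```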
FortiaVila/fine_tune_test
FortiaVila
2024-05-13T19:57:44Z
4
0
sentence-transformers
[ "sentence-transformers", "safetensors", "bert", "feature-extraction", "sentence-similarity", "autotrain_compatible", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
sentence-similarity
2024-05-13T19:57:30Z
--- library_name: sentence-transformers pipeline_tag: sentence-similarity tags: - sentence-transformers - feature-extraction - sentence-similarity --- # FortiaVila/fine_tune_test This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 384 dimensional dense vector space and can be used for tasks like clustering or semantic search. <!--- Describe your model here --> ## Usage (Sentence-Transformers) Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed: ``` pip install -U sentence-transformers ``` Then you can use the model like this: ```python from sentence_transformers import SentenceTransformer sentences = ["This is an example sentence", "Each sentence is converted"] model = SentenceTransformer('FortiaVila/fine_tune_test') embeddings = model.encode(sentences) print(embeddings) ``` ## Evaluation Results <!--- Describe how your model was evaluated --> For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=FortiaVila/fine_tune_test) ## Training The model was trained with the parameters: **DataLoader**: `torch.utils.data.dataloader.DataLoader` of length 51 with parameters: ``` {'batch_size': 16, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'} ``` **Loss**: `sentence_transformers.losses.TripletLoss.TripletLoss` with parameters: ``` {'distance_metric': 'TripletDistanceMetric.EUCLIDEAN', 'triplet_margin': 5} ``` Parameters of the fit()-Method: ``` { "epochs": 10, "evaluation_steps": 0, "evaluator": "NoneType", "max_grad_norm": 1, "optimizer_class": "<class 'torch.optim.adamw.AdamW'>", "optimizer_params": { "lr": 2e-05 }, "scheduler": "WarmupLinear", "steps_per_epoch": null, "warmup_steps": 10000, "weight_decay": 0.01 } ``` ## Full Model Architecture ``` SentenceTransformer( (0): Transformer({'max_seq_length': 256, 'do_lower_case': False}) with Transformer model: BertModel (1): Pooling({'word_embedding_dimension': 384, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True}) (2): Normalize() ) ``` ## Citing & Authors <!--- Describe where people can find more information -->
kanaluvu/bloomz-1b1-prompted-finetuned
kanaluvu
2024-05-13T19:56:51Z
0
0
transformers
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2024-05-13T19:56:03Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
dannys160/dqn-SpaceInvadersNoFrameskip-v4
dannys160
2024-05-13T19:50:54Z
0
0
stable-baselines3
[ "stable-baselines3", "SpaceInvadersNoFrameskip-v4", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2024-05-13T19:50:26Z
--- library_name: stable-baselines3 tags: - SpaceInvadersNoFrameskip-v4 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: DQN results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: SpaceInvadersNoFrameskip-v4 type: SpaceInvadersNoFrameskip-v4 metrics: - type: mean_reward value: 211.50 +/- 2.29 name: mean_reward verified: false --- # **DQN** Agent playing **SpaceInvadersNoFrameskip-v4** This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3) and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo). The RL Zoo is a training framework for Stable Baselines3 reinforcement learning agents, with hyperparameter optimization and pre-trained agents included. ## Usage (with SB3 RL Zoo) RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/> SB3: https://github.com/DLR-RM/stable-baselines3<br/> SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib Install the RL Zoo (with SB3 and SB3-Contrib): ```bash pip install rl_zoo3 ``` ``` # Download model and save it into the logs/ folder python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga dannys160 -f logs/ python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ ``` If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do: ``` python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga dannys160 -f logs/ python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ ``` ## Training (with the RL Zoo) ``` python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ # Upload the model and generate video (when possible) python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga dannys160 ``` ## Hyperparameters ```python OrderedDict([('batch_size', 32), ('buffer_size', 100000), ('env_wrapper', ['stable_baselines3.common.atari_wrappers.AtariWrapper']), ('exploration_final_eps', 0.01), ('exploration_fraction', 0.1), ('frame_stack', 4), ('gradient_steps', 1), ('learning_rate', 0.0001), ('learning_starts', 100000), ('n_timesteps', 100000.0), ('optimize_memory_usage', False), ('policy', 'CnnPolicy'), ('target_update_interval', 1000), ('train_freq', 4), ('normalize', False)]) ``` # Environment Arguments ```python {'render_mode': 'rgb_array'} ```
Akazi/bert-base-multilingual-uncased
Akazi
2024-05-13T19:47:06Z
153
0
transformers
[ "transformers", "pytorch", "bert", "text-classification", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2024-05-13T19:22:26Z
--- license: mit --- # Multilingual Sentiment Analysis Model This repository contains a fine-tuned sentiment analysis model based on the BERT base multilingual model. The model is trained on a dataset with texts labeled as 'negative', 'neutral', or 'positive'. ## Model Description The model is built using the Hugging Face Transformers library and the 'bert-base-multilingual-uncased' pre-trained model. It is fine-tuned on a sentiment analysis task, allowing it to classify input texts into one of three sentiment categories: negative, neutral, or positive. ## Usage To use this model, you can load it from the Hugging Face Model Hub using the following code: ```python from transformers import AutoModelForSequenceClassification, AutoTokenizer, pipeline model_name = "Akazi/bert-base-multilingual-uncased" tokenizer = AutoTokenizer.from_pretrained(model_name) model = AutoModelForSequenceClassification.from_pretrained(model_name) classifier = pipeline("text-classification", model=model, tokenizer=tokenizer) texts = ["This is a great product!", "I'm not sure about this."] predictions = classifier(texts) print(predictions) ``` This code will load the model and tokenizer from the Hugging Face Model Hub, create a text classification pipeline, and use it to classify the provided texts. The output will be a list of dictionaries, where each dictionary contains the predicted label and score for the corresponding input text. ## Model Performance The model was evaluated on a held-out test dataset, and the following metrics were obtained: - Accuracy: 0.7061 - Precision: 0.7038 - Recall: 0.7040 - F1-score: 0.7038 Please note that the performance may vary depending on the input data and domain. ## Training Data The model was trained on a dataset consisting of text samples labeled as 'negative', 'neutral', or 'positive'. The dataset was preprocessed by mapping the textual labels to numeric values (0 for 'negative', 1 for 'neutral', and 2 for 'positive'). ## License This model is licensed under the [MIT License](https://opensource.org/licenses/MIT). ## Credits This model was developed by Abdullah Kazi using the Hugging Face Transformers library and the BERT base multilingual model.
dbalasub/check-qa
dbalasub
2024-05-13T19:42:39Z
105
0
transformers
[ "transformers", "safetensors", "t5", "text2text-generation", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text2text-generation
2024-05-13T19:42:25Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
snigdhachandan/ganeet-V3
snigdhachandan
2024-05-13T19:35:24Z
4
0
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "merge", "mergekit", "lazymergekit", "microsoft/rho-math-7b-interpreter-v0.1", "WizardLMTeam/WizardMath-7B-V1.1", "base_model:WizardLMTeam/WizardMath-7B-V1.1", "base_model:merge:WizardLMTeam/WizardMath-7B-V1.1", "base_model:microsoft/rho-math-7b-interpreter-v0.1", "base_model:merge:microsoft/rho-math-7b-interpreter-v0.1", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-05-13T19:01:27Z
--- tags: - merge - mergekit - lazymergekit - microsoft/rho-math-7b-interpreter-v0.1 - WizardLMTeam/WizardMath-7B-V1.1 base_model: - microsoft/rho-math-7b-interpreter-v0.1 - WizardLMTeam/WizardMath-7B-V1.1 --- # ganeet-V3 ganeet-V3 is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing): * [microsoft/rho-math-7b-interpreter-v0.1](https://huggingface.co/microsoft/rho-math-7b-interpreter-v0.1) * [WizardLMTeam/WizardMath-7B-V1.1](https://huggingface.co/WizardLMTeam/WizardMath-7B-V1.1) ## 🧩 Configuration ```yaml slices: - sources: - model: microsoft/rho-math-7b-interpreter-v0.1 layer_range: [0, 32] - model: WizardLMTeam/WizardMath-7B-V1.1 layer_range: [0, 32] merge_method: slerp base_model: microsoft/rho-math-7b-interpreter-v0.1 parameters: t: - filter: self_attn value: [0, 0.5, 0.3, 0.7, 1] - filter: mlp value: [1, 0.5, 0.7, 0.3, 0] - value: 0.4 dtype: bfloat16 ``` ## 💻 Usage ```python !pip install -qU transformers accelerate from transformers import AutoTokenizer import transformers import torch model = "snigdhachandan/ganeet-V3" messages = [{"role": "user", "content": "What is a large language model?"}] tokenizer = AutoTokenizer.from_pretrained(model) prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True) pipeline = transformers.pipeline( "text-generation", model=model, torch_dtype=torch.float16, device_map="auto", ) outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95) print(outputs[0]["generated_text"]) ```
Sristi222/corgy_dog_LoRA
Sristi222
2024-05-13T19:32:07Z
0
1
diffusers
[ "diffusers", "tensorboard", "text-to-image", "diffusers-training", "lora", "template:sd-lora", "stable-diffusion-xl", "stable-diffusion-xl-diffusers", "base_model:stabilityai/stable-diffusion-xl-base-1.0", "base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0", "license:openrail++", "region:us" ]
text-to-image
2024-05-03T21:42:17Z
--- license: openrail++ library_name: diffusers tags: - text-to-image - diffusers-training - diffusers - lora - template:sd-lora - stable-diffusion-xl - stable-diffusion-xl-diffusers base_model: stabilityai/stable-diffusion-xl-base-1.0 instance_prompt: a photo of TOK dog widget: [] --- <!-- This model card has been generated automatically according to the information the training script had access to. You should probably proofread and complete it, then remove this comment. --> # SDXL LoRA DreamBooth - Sristi222/corgy_dog_LoRA <Gallery /> ## Model description These are Sristi222/corgy_dog_LoRA LoRA adaptation weights for stabilityai/stable-diffusion-xl-base-1.0. The weights were trained using [DreamBooth](https://dreambooth.github.io/). LoRA for the text encoder was enabled: False. Special VAE used for training: madebyollin/sdxl-vae-fp16-fix. ## Trigger words You should use `a photo of TOK dog` to trigger the image generation. ## Download model Weights for this model are available in Safetensors format. [Download](Sristi222/corgy_dog_LoRA/tree/main) them in the Files & versions tab. ## Intended uses & limitations #### How to use ```python # TODO: add an example code snippet for running this diffusion pipeline ``` #### Limitations and bias [TODO: provide examples of latent issues and potential remediations] ## Training details [TODO: describe the data used to train the model]
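The "How to use" section above is still a TODO; until it is filled in, here is a hedged sketch using the standard diffusers SDXL + LoRA loading path. Step count and prompt are illustrative choices, not values prescribed by this card.

```python
# Hedged sketch for the TODO above: standard SDXL base + LoRA loading.
import torch
from diffusers import DiffusionPipeline

pipe = DiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")
pipe.load_lora_weights("Sristi222/corgy_dog_LoRA")

image = pipe("a photo of TOK dog sitting on a beach", num_inference_steps=30).images[0]
image.save("tok_dog.png")
```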
Mag0g/Ezekiel26_12
Mag0g
2024-05-13T19:26:36Z
128
0
transformers
[ "transformers", "safetensors", "stablelm", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2024-05-13T19:25:16Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
glazzova/body_complexion
glazzova
2024-05-13T19:16:12Z
290
4
transformers
[ "transformers", "pytorch", "resnet", "image-classification", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-classification
2024-05-13T18:52:14Z
--- license: apache-2.0 library_name: transformers pipeline_tag: image-classification --- # body_complexion This is a fine-tuned [microsoft/resnet-50](https://huggingface.co/microsoft/resnet-50) model. A dataset of men with different body complexions was used for fine-tuning. ### Intended Use: The model is intended for image classification tasks specifically related to men's body types. It is designed to classify images into four categories based on body complexion: skinny, ordinary, overweight, and very muscular. The model can be utilized in applications such as: - Health and fitness platforms for body type analysis - Clothing recommendation systems tailored for different body types - Visual content moderation systems to filter images based on body type ### Launch ```python import torch from PIL import Image from transformers import ResNetForImageClassification, AutoImageProcessor processor = AutoImageProcessor.from_pretrained('glazzova/body_complexion') model = ResNetForImageClassification.from_pretrained('glazzova/body_complexion') image = Image.open('your_pic.jpeg') inputs = processor(image, return_tensors="pt") with torch.no_grad(): logits = model(**inputs).logits # model predicts one of the 4 classes predicted_label = logits.argmax(-1).item() print(model.config.id2label[predicted_label]) ```
ws11yrin/dqn-SpaceInvadersNoFrameskip-v4
ws11yrin
2024-05-13T19:13:45Z
1
0
stable-baselines3
[ "stable-baselines3", "SpaceInvadersNoFrameskip-v4", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2024-05-13T19:05:24Z
--- library_name: stable-baselines3 tags: - SpaceInvadersNoFrameskip-v4 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: DQN results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: SpaceInvadersNoFrameskip-v4 type: SpaceInvadersNoFrameskip-v4 metrics: - type: mean_reward value: 832.00 +/- 383.77 name: mean_reward verified: false --- # **DQN** Agent playing **SpaceInvadersNoFrameskip-v4** This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3) and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo). The RL Zoo is a training framework for Stable Baselines3 reinforcement learning agents, with hyperparameter optimization and pre-trained agents included. ## Usage (with SB3 RL Zoo) RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/> SB3: https://github.com/DLR-RM/stable-baselines3<br/> SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib Install the RL Zoo (with SB3 and SB3-Contrib): ```bash pip install rl_zoo3 ``` ``` # Download model and save it into the logs/ folder python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga ws11yrin -f logs/ python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ ``` If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do: ``` python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga ws11yrin -f logs/ python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ ``` ## Training (with the RL Zoo) ``` python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ # Upload the model and generate video (when possible) python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga ws11yrin ``` ## Hyperparameters ```python OrderedDict([('batch_size', 32), ('buffer_size', 100000), ('env_wrapper', ['stable_baselines3.common.atari_wrappers.AtariWrapper']), ('exploration_final_eps', 0.01), ('exploration_fraction', 0.1), ('frame_stack', 4), ('gradient_steps', 1), ('learning_rate', 0.0001), ('learning_starts', 100000), ('n_timesteps', 10000000.0), ('optimize_memory_usage', False), ('policy', 'CnnPolicy'), ('target_update_interval', 1000), ('train_freq', 4), ('normalize', False)]) ``` # Environment Arguments ```python {'render_mode': 'rgb_array'} ```
fahad0071/Therapist-2
fahad0071
2024-05-13T19:11:38Z
0
0
peft
[ "peft", "safetensors", "trl", "sft", "generated_from_trainer", "dataset:generator", "base_model:meta-llama/Llama-2-7b-chat-hf", "base_model:adapter:meta-llama/Llama-2-7b-chat-hf", "license:llama2", "region:us" ]
null
2024-05-13T19:02:18Z
--- license: llama2 library_name: peft tags: - trl - sft - generated_from_trainer base_model: meta-llama/Llama-2-7b-chat-hf datasets: - generator model-index: - name: Therapist-2 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Therapist-2 This model is a fine-tuned version of [meta-llama/Llama-2-7b-chat-hf](https://huggingface.co/meta-llama/Llama-2-7b-chat-hf) on the generator dataset. It achieves the following results on the evaluation set: - Loss: 1.2547 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 4 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: constant - lr_scheduler_warmup_ratio: 0.03 - num_epochs: 1 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 1.188 | 1.0 | 272 | 1.2547 | ### Framework versions - PEFT 0.10.0 - Transformers 4.39.3 - Pytorch 2.1.2 - Datasets 2.18.0 - Tokenizers 0.15.2
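The card gives no inference snippet; a minimal sketch follows, assuming this repo stores only the PEFT adapter for `meta-llama/Llama-2-7b-chat-hf` (a gated base model, so access must be granted first). The prompt format mirrors Llama-2 chat but is illustrative.

```python
# Hedged sketch: loads base + adapter via peft's AutoPeftModelForCausalLM.
import torch
from peft import AutoPeftModelForCausalLM
from transformers import AutoTokenizer

adapter_id = "fahad0071/Therapist-2"
tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-2-7b-chat-hf")
model = AutoPeftModelForCausalLM.from_pretrained(
    adapter_id, torch_dtype=torch.float16, device_map="auto"
)

prompt = "[INST] I have been feeling anxious lately. What can I do? [/INST]"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```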
kyl23/hw3_RTE_lora_1e-4
kyl23
2024-05-13T19:10:05Z
0
0
transformers
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2024-05-13T19:09:59Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
Sonatafyai/scibert-finetuned_ADEs_SonatafyAI
Sonatafyai
2024-05-13T19:07:26Z
222
0
transformers
[ "transformers", "tensorboard", "safetensors", "bert", "token-classification", "generated_from_trainer", "base_model:jsylee/scibert_scivocab_uncased-finetuned-ner", "base_model:finetune:jsylee/scibert_scivocab_uncased-finetuned-ner", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2024-05-13T19:00:12Z
--- base_model: jsylee/scibert_scivocab_uncased-finetuned-ner tags: - generated_from_trainer metrics: - precision - recall - f1 - accuracy model-index: - name: scibert-finetuned_ADEs_SonatafyAI results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # scibert-finetuned_ADEs_SonatafyAI This model is a fine-tuned version of [jsylee/scibert_scivocab_uncased-finetuned-ner](https://huggingface.co/jsylee/scibert_scivocab_uncased-finetuned-ner) on an unspecified dataset. It achieves the following results on the evaluation set: - Loss: 0.2004 - Precision: 0.6454 - Recall: 0.6962 - F1: 0.6698 - Accuracy: 0.9095 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-07 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:| | 0.2918 | 1.0 | 640 | 0.2240 | 0.6095 | 0.7148 | 0.6579 | 0.9029 | | 0.2305 | 2.0 | 1280 | 0.2064 | 0.6354 | 0.6896 | 0.6614 | 0.9079 | | 0.2223 | 3.0 | 1920 | 0.2031 | 0.636 | 0.6951 | 0.6642 | 0.9082 | | 0.2145 | 4.0 | 2560 | 0.2010 | 0.6419 | 0.6973 | 0.6684 | 0.9089 | | 0.2081 | 5.0 | 3200 | 0.2004 | 0.6454 | 0.6962 | 0.6698 | 0.9095 | ### Framework versions - Transformers 4.40.2 - Pytorch 2.2.1+cu121 - Datasets 2.19.1 - Tokenizers 0.19.1
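A hedged usage sketch for this token-classification model; the example sentence is illustrative, and the entity labels come from the model's own config rather than this card.

```python
# Simple NER pipeline over the fine-tuned checkpoint.
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="Sonatafyai/scibert-finetuned_ADEs_SonatafyAI",
    aggregation_strategy="simple",  # merge word pieces into entity spans
)

text = "The patient developed severe nausea after starting metformin."
for entity in ner(text):
    print(entity["entity_group"], entity["word"], round(float(entity["score"]), 3))
```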
pristinawang/1715626778
pristinawang
2024-05-13T19:05:18Z
77
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "4-bit", "bitsandbytes", "region:us" ]
text-generation
2024-05-13T19:03:20Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
gruhit-patel/q-FrozenLake-v1-4x4-noSlippery
gruhit-patel
2024-05-13T19:04:52Z
0
0
null
[ "FrozenLake-v1-4x4-no_slippery", "q-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
reinforcement-learning
2024-05-13T19:04:50Z
---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
  results:
  - task:
      type: reinforcement-learning
      name: reinforcement-learning
    dataset:
      name: FrozenLake-v1-4x4-no_slippery
      type: FrozenLake-v1-4x4-no_slippery
    metrics:
    - type: mean_reward
      value: 1.00 +/- 0.00
      name: mean_reward
      verified: false
---

# **Q-Learning** Agent playing **FrozenLake-v1**

This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.

## Usage

```python
import gymnasium as gym

# `load_from_hub` is the helper defined in the Deep RL Course notebooks;
# a self-contained alternative is sketched below.
model = load_from_hub(repo_id="gruhit-patel/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")

# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
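A self-contained variant of the snippet above, as a minimal sketch: it assumes the pickled dict exposes `env_id` and `qtable` keys, as in the Hugging Face Deep RL Course format.

```python
import pickle

import gymnasium as gym
import numpy as np
from huggingface_hub import hf_hub_download

# Download and unpickle the model dict (key names assumed from the course format).
path = hf_hub_download(repo_id="gruhit-patel/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
with open(path, "rb") as f:
    model = pickle.load(f)

# The metrics above were obtained on the non-slippery map.
env = gym.make(model["env_id"], is_slippery=False)

state, _ = env.reset()
done, total_reward = False, 0.0
while not done:
    action = int(np.argmax(model["qtable"][state]))  # greedy action from the Q-table
    state, reward, terminated, truncated, _ = env.step(action)
    total_reward += reward
    done = terminated or truncated
print(f"Episode return: {total_reward}")
```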
danyoung/leonardo
danyoung
2024-05-13T18:54:40Z
4
0
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-05-13T18:45:58Z
--- license: apache-2.0 ---
Amo/so-vits-svc-4.0_GA
Amo
2024-05-13T18:53:45Z
0
4
null
[ "audio-to-audio", "region:us" ]
audio-to-audio
2023-03-07T12:28:07Z
---
pipeline_tag: audio-to-audio
---

Models created with, and for use in, the Vul So-Vits-SVC 4.0 UI.

**List of models:**

**----Pony:**

CakeMrs_70<br>Derpy_100000<br>Spike_45000<br>TreeHugger_69k<br>ddm_DaringDo_100k (+ Lightning Dust and Moondancer)<br>djPon3_Dash_Mix_85000<br>Tirek_100k
<br>DJpon3_V3_120k<br>OctaviaBrit (trained on 15.ai TTS audio)

**----Non-Pony:**

DRD_60000<br>Dagoth_Ur_50k<br>Dagoth_Ur_80k<br>Glados_50k<br>Gwenpool_50000 (multilingual)<br>NamelessHero_eng<br>Saul_Goodman_80000

TF2_SaxtonHale_100k<br>TF2_demoman_75k<br>TF2_engineer_60k<br>TF2_heavy_100k<br>TF2_medic_100k<br>TF2_scout_60k<br>TF2_sniper_60k<br>TF2_soldier_60k<br>TF2_spy_60k

g1_Diego_PL_60000<br>Boss_MGS_80k<br>Gaunter ODimm<br>B1_BattleDroid<br>Frank Sinatra

**List of datasets:**

TF2_SaxtonHale_100k, TF2_demoman_75k, TF2_engineer_60k, TF2_heavy_100k, TF2_medic_100k, TF2_scout_60k, TF2_sniper_60k, TF2_soldier_60k, TF2_spy_60k

DRD_MLP<br>Daring_Do_multiple<br>DerpyWavExpanded<br>djPon3_Dash_Mix_(audio_dataset) (old; an updated version is in progress)<br>Dagoth Ur<br>Boss_MGS<br>Gaunter ODimm
<br><br>OctaviaBrit (15.ai TTS audio)

B1_BattleDroid<br>DJpon3_V3 (new dataset, better than the old one but still not perfect)<br>saul_goodman<br>Frank Sinatra<br>StarTrekComputer
Angy309/swin-tiny-patch4-window7-224-pueba1
Angy309
2024-05-13T18:47:06Z
217
0
transformers
[ "transformers", "tensorboard", "safetensors", "swin", "image-classification", "generated_from_trainer", "dataset:imagefolder", "base_model:microsoft/swin-tiny-patch4-window7-224", "base_model:finetune:microsoft/swin-tiny-patch4-window7-224", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-classification
2024-05-13T18:33:06Z
---
license: apache-2.0
base_model: microsoft/swin-tiny-patch4-window7-224
tags:
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
model-index:
- name: swin-tiny-patch4-window7-224-pueba1
  results:
  - task:
      name: Image Classification
      type: image-classification
    dataset:
      name: imagefolder
      type: imagefolder
      config: default
      split: train
      args: default
    metrics:
    - name: Accuracy
      type: accuracy
      value: 0.7948717948717948
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# swin-tiny-patch4-window7-224-pueba1

This model is a fine-tuned version of [microsoft/swin-tiny-patch4-window7-224](https://huggingface.co/microsoft/swin-tiny-patch4-window7-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4784
- Accuracy: 0.7949

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 3

### Training results

| Training Loss | Epoch  | Step | Validation Loss | Accuracy |
|:-------------:|:------:|:----:|:---------------:|:--------:|
| No log        | 0.9091 | 5    | 0.6238          | 0.6410   |
| 0.653         | 2.0    | 11   | 0.5156          | 0.7692   |
| 0.653         | 2.7273 | 15   | 0.4784          | 0.7949   |

### Framework versions

- Transformers 4.40.2
- Pytorch 2.2.1+cu121
- Datasets 2.19.1
- Tokenizers 0.19.1
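A minimal inference sketch (not part of the original card), assuming standard `transformers` image-classification usage; the image path is a placeholder and the label set depends on the undocumented `imagefolder` dataset:

```python
from transformers import pipeline

# Load the fine-tuned checkpoint from the Hub.
classifier = pipeline(
    "image-classification",
    model="Angy309/swin-tiny-patch4-window7-224-pueba1",
)

# "example.jpg" is a placeholder; pass any local path, URL, or PIL image.
for pred in classifier("example.jpg"):
    print(pred["label"], round(pred["score"], 3))
```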
mjrdbds/llama3-4b-classifierunsloth-130524
mjrdbds
2024-05-13T18:45:26Z
6
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "text-generation-inference", "unsloth", "trl", "sft", "en", "base_model:unsloth/llama-3-8b-bnb-4bit", "base_model:finetune:unsloth/llama-3-8b-bnb-4bit", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2024-05-13T16:58:37Z
--- language: - en license: apache-2.0 tags: - text-generation-inference - transformers - unsloth - llama - trl - sft base_model: unsloth/llama-3-8b-bnb-4bit --- # Uploaded model - **Developed by:** mjrdbds - **License:** apache-2.0 - **Finetuned from model :** unsloth/llama-3-8b-bnb-4bit This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
veronica-girolimetti/mistral_qt_finetuned_LoRA_04
veronica-girolimetti
2024-05-13T18:39:43Z
0
0
transformers
[ "transformers", "safetensors", "text-generation-inference", "unsloth", "mistral", "trl", "en", "base_model:unsloth/mistral-7b-instruct-v0.2-bnb-4bit", "base_model:finetune:unsloth/mistral-7b-instruct-v0.2-bnb-4bit", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2024-05-13T18:37:24Z
--- language: - en license: apache-2.0 tags: - text-generation-inference - transformers - unsloth - mistral - trl base_model: unsloth/mistral-7b-instruct-v0.2-bnb-4bit --- # Uploaded model - **Developed by:** veronica-girolimetti - **License:** apache-2.0 - **Finetuned from model :** unsloth/mistral-7b-instruct-v0.2-bnb-4bit This mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
Ziyu25/dqn-SpaceInvadersNoFrameskip-v4
Ziyu25
2024-05-13T18:37:59Z
0
0
stable-baselines3
[ "stable-baselines3", "SpaceInvadersNoFrameskip-v4", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2024-05-13T18:37:24Z
---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
  results:
  - task:
      type: reinforcement-learning
      name: reinforcement-learning
    dataset:
      name: SpaceInvadersNoFrameskip-v4
      type: SpaceInvadersNoFrameskip-v4
    metrics:
    - type: mean_reward
      value: 149.00 +/- 123.18
      name: mean_reward
      verified: false
---

# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**

This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3) and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).

The RL Zoo is a training framework for Stable Baselines3 reinforcement learning agents, with hyperparameter optimization and pre-trained agents included.

## Usage (with SB3 RL Zoo)

RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib

Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```

```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga Ziyu25 -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```

If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:

```
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga Ziyu25 -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```

## Training (with the RL Zoo)

```
python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga Ziyu25
```

## Hyperparameters

```python
OrderedDict([('batch_size', 32),
             ('buffer_size', 100000),
             ('env_wrapper', ['stable_baselines3.common.atari_wrappers.AtariWrapper']),
             ('exploration_final_eps', 0.01),
             ('exploration_fraction', 0.1),
             ('frame_stack', 4),
             ('gradient_steps', 1),
             ('learning_rate', 0.0001),
             ('learning_starts', 100000),
             ('n_timesteps', 100000.0),
             ('optimize_memory_usage', False),
             ('policy', 'CnnPolicy'),
             ('target_update_interval', 1000),
             ('train_freq', 4),
             ('normalize', False)])
```

# Environment Arguments

```python
{'render_mode': 'rgb_array'}
```
vwxyzjn/ppo_zephyr_vllm_warmup_1e-6_larger_bs
vwxyzjn
2024-05-13T18:35:50Z
4
0
transformers
[ "transformers", "tensorboard", "safetensors", "mistral", "text-generation", "generated_from_trainer", "conversational", "base_model:HuggingFaceH4/mistral-7b-sft-beta", "base_model:finetune:HuggingFaceH4/mistral-7b-sft-beta", "license:mit", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-05-13T18:34:48Z
--- license: mit base_model: HuggingFaceH4/mistral-7b-sft-beta tags: - generated_from_trainer model-index: - name: ppo_zephyr_vllm_warmup_1e-6_larger_bs results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # ppo_zephyr_vllm_warmup_1e-6_larger_bs This model is a fine-tuned version of [HuggingFaceH4/mistral-7b-sft-beta](https://huggingface.co/HuggingFaceH4/mistral-7b-sft-beta) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-06 - train_batch_size: 1 - eval_batch_size: 8 - seed: 42 - distributed_type: multi-GPU - num_devices: 7 - gradient_accumulation_steps: 64 - total_train_batch_size: 448 - total_eval_batch_size: 56 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.2 - num_epochs: 3.0 ### Framework versions - Transformers 4.40.1 - Pytorch 2.2.1+cu121 - Datasets 2.18.0 - Tokenizers 0.19.1
blueprintninja/dolphin-2.9.1-llama-3-8b-llamafile-nonAVX
blueprintninja
2024-05-13T18:35:37Z
4
1
null
[ "llamafile", "GGUF", "base_model:crusoeai/dolphin-2.9.1-llama-3-8b-GGUF", "base_model:finetune:crusoeai/dolphin-2.9.1-llama-3-8b-GGUF", "region:us" ]
null
2024-05-13T18:33:32Z
--- tags: - llamafile - GGUF base_model: crusoeai/dolphin-2.9.1-llama-3-8b-GGUF --- ## dolphin-2.9.1-llama-3-8b-llamafile-nonAVX llamafile lets you distribute and run LLMs with a single file. [announcement blog post](https://hacks.mozilla.org/2023/11/introducing-llamafile/) #### Downloads - [dolphin-2.9.1-llama-3-8b.Q4_0.llamafile](https://huggingface.co/blueprintninja/dolphin-2.9.1-llama-3-8b-llamafile-nonAVX/resolve/main/dolphin-2.9.1-llama-3-8b.Q4_0.llamafile) This repository was created using the [llamafile-builder](https://github.com/rabilrbl/llamafile-builder)
nem012/gemma2B-MHC
nem012
2024-05-13T18:34:13Z
5
0
transformers
[ "transformers", "safetensors", "gemma", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-03-30T16:15:57Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
sally9805/bert-base-uncased-finetuned-news-2003-2007
sally9805
2024-05-13T18:31:30Z
9
0
transformers
[ "transformers", "tensorboard", "safetensors", "bert", "fill-mask", "generated_from_trainer", "base_model:google-bert/bert-base-uncased", "base_model:finetune:google-bert/bert-base-uncased", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2024-05-13T05:22:39Z
---
license: apache-2.0
tags:
- generated_from_trainer
base_model: bert-base-uncased
model-index:
- name: bert-base-uncased-finetuned-news-2003-2007
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# bert-base-uncased-finetuned-news-2003-2007

This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 3.0218

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0

### Training results

| Training Loss | Epoch | Step  | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 3.3155        | 1.0   | 22331 | 3.0932          |
| 3.2358        | 2.0   | 44662 | 3.0404          |
| 3.2124        | 3.0   | 66993 | 3.0210          |

### Framework versions

- Transformers 4.40.2
- Pytorch 2.2.1+cu121
- Datasets 2.19.1
- Tokenizers 0.19.1
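A minimal inference sketch (not part of the original card), assuming standard `transformers` fill-mask usage; the example sentence is illustrative:

```python
from transformers import pipeline

# Load the fine-tuned checkpoint from the Hub.
unmasker = pipeline("fill-mask", model="sally9805/bert-base-uncased-finetuned-news-2003-2007")

# [MASK] is the mask token for bert-base-uncased vocabularies.
for pred in unmasker("The government announced a new [MASK] policy."):
    print(pred["token_str"], round(pred["score"], 3))
```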
rushdiodeh/Multi-dialect-Arabicrush
rushdiodeh
2024-05-13T18:24:55Z
0
0
null
[ "text-classification", "license:apache-2.0", "region:us" ]
text-classification
2024-04-13T20:27:24Z
---
license: apache-2.0
metrics:
- accuracy
pipeline_tag: text-classification
---

# Arabic-Dialects-Identification-Model

This repository outlines a complete Python workflow for Arabic dialect identification using MultinomialNB and Random Forest classifiers, evaluated with several performance metrics.

The workflow builds a dataframe `comparison_df` with three columns: `Actual`, `Multinomial NB`, and `Random Forest`. The `Actual` column contains the true labels for the test data, while the `Multinomial NB` and `Random Forest` columns contain the labels predicted for the test data by the corresponding classifier. Finally, the comparison dataframe is saved to an Excel file called `arabic_dialects_comparison.xlsx` using the `to_excel()` method with the `index=False` argument to exclude the row index from the Excel file.
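Since the card describes the workflow without including the code, here is a minimal sketch of the described pipeline. The dataset path, the `text` and `dialect` columns, the TF-IDF features, and all hyperparameters are illustrative assumptions, and the "various performance metrics" are represented here by accuracy only:

```python
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import MultinomialNB

# df is assumed to hold Arabic text samples and their dialect labels.
df = pd.read_csv("arabic_dialects.csv")  # placeholder dataset path
X_train, X_test, y_train, y_test = train_test_split(
    df["text"], df["dialect"], test_size=0.2, random_state=42
)

# Vectorize the raw text so both classifiers can consume it.
vectorizer = TfidfVectorizer()
X_train_vec = vectorizer.fit_transform(X_train)
X_test_vec = vectorizer.transform(X_test)

nb = MultinomialNB().fit(X_train_vec, y_train)
rf = RandomForestClassifier(n_estimators=100, random_state=42).fit(X_train_vec, y_train)

# Side-by-side comparison of true and predicted labels, as described above.
comparison_df = pd.DataFrame({
    "Actual": y_test.to_numpy(),
    "Multinomial NB": nb.predict(X_test_vec),
    "Random Forest": rf.predict(X_test_vec),
})
print("MultinomialNB accuracy:", accuracy_score(y_test, comparison_df["Multinomial NB"]))
print("Random Forest accuracy:", accuracy_score(y_test, comparison_df["Random Forest"]))

# Save without the row index, as noted above.
comparison_df.to_excel("arabic_dialects_comparison.xlsx", index=False)
```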
Litzy619/PHI30512HMAB1
Litzy619
2024-05-13T18:23:24Z
0
0
null
[ "safetensors", "generated_from_trainer", "base_model:microsoft/Phi-3-mini-4k-instruct", "base_model:finetune:microsoft/Phi-3-mini-4k-instruct", "license:mit", "region:us" ]
null
2024-05-13T17:26:46Z
---
license: mit
base_model: microsoft/Phi-3-mini-4k-instruct
tags:
- generated_from_trainer
model-index:
- name: PHI30512HMAB1
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# PHI30512HMAB1

This model is a fine-tuned version of [microsoft/Phi-3-mini-4k-instruct](https://huggingface.co/microsoft/Phi-3-mini-4k-instruct) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0723

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine_with_restarts
- lr_scheduler_warmup_steps: 80
- num_epochs: 3
- mixed_precision_training: Native AMP

### Training results

| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 5.4839        | 0.09  | 10   | 5.4238          |
| 5.3155        | 0.18  | 20   | 4.6879          |
| 3.6592        | 0.27  | 30   | 1.9615          |
| 0.9316        | 0.36  | 40   | 0.2201          |
| 0.1779        | 0.45  | 50   | 0.1492          |
| 0.1485        | 0.54  | 60   | 0.1212          |
| 0.108         | 0.63  | 70   | 0.0889          |
| 0.0902        | 0.73  | 80   | 0.0788          |
| 0.0657        | 0.82  | 90   | 0.0730          |
| 0.0695        | 0.91  | 100  | 0.0669          |
| 0.0716        | 1.0   | 110  | 0.0673          |
| 0.0557        | 1.09  | 120  | 0.0651          |
| 0.0525        | 1.18  | 130  | 0.0684          |
| 0.0614        | 1.27  | 140  | 0.0674          |
| 0.0523        | 1.36  | 150  | 0.0651          |
| 0.0572        | 1.45  | 160  | 0.0622          |
| 0.0563        | 1.54  | 170  | 0.0620          |
| 0.0522        | 1.63  | 180  | 0.0622          |
| 0.0544        | 1.72  | 190  | 0.0619          |
| 0.0576        | 1.81  | 200  | 0.0590          |
| 0.045         | 1.9   | 210  | 0.0609          |
| 0.053         | 1.99  | 220  | 0.0611          |
| 0.0353        | 2.08  | 230  | 0.0628          |
| 0.0384        | 2.18  | 240  | 0.0687          |
| 0.0308        | 2.27  | 250  | 0.0724          |
| 0.0305        | 2.36  | 260  | 0.0746          |
| 0.0334        | 2.45  | 270  | 0.0742          |
| 0.0278        | 2.54  | 280  | 0.0742          |
| 0.0307        | 2.63  | 290  | 0.0737          |
| 0.0325        | 2.72  | 300  | 0.0732          |
| 0.037         | 2.81  | 310  | 0.0725          |
| 0.0331        | 2.9   | 320  | 0.0723          |
| 0.0295        | 2.99  | 330  | 0.0723          |

### Framework versions

- Transformers 4.36.0.dev0
- Pytorch 2.1.2+cu121
- Datasets 2.14.6
- Tokenizers 0.14.0
yashpalsharma/sft-gemma-2b-it-finetuned-ys
yashpalsharma
2024-05-13T18:22:29Z
0
0
peft
[ "peft", "tensorboard", "safetensors", "trl", "sft", "generated_from_trainer", "base_model:google/gemma-2b-it", "base_model:adapter:google/gemma-2b-it", "license:gemma", "region:us" ]
null
2024-05-13T18:20:01Z
--- license: gemma library_name: peft tags: - trl - sft - generated_from_trainer base_model: google/gemma-2b-it model-index: - name: sft-gemma-2b-it-finetuned-ys results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # sft-gemma-2b-it-finetuned-ys This model is a fine-tuned version of [google/gemma-2b-it](https://huggingface.co/google/gemma-2b-it) on the None dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 1 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - training_steps: 250 - mixed_precision_training: Native AMP ### Training results ### Framework versions - PEFT 0.10.0 - Transformers 4.38.0 - Pytorch 2.2.1+cu121 - Datasets 2.19.1 - Tokenizers 0.15.2
adarshheg/sft_finetuned_adapters_olib
adarshheg
2024-05-13T18:19:30Z
0
0
peft
[ "peft", "region:us" ]
null
2024-05-13T18:19:29Z
---
library_name: peft
---

## Training procedure

The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float16

### Framework versions

- PEFT 0.4.0
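A minimal loading sketch that mirrors the quantization settings listed above; the base checkpoint is not named in this card, so `base_id` is purely a placeholder:

```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

# Mirrors the bitsandbytes settings recorded above.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=False,
    bnb_4bit_compute_dtype=torch.float16,
)

base_id = "your-base-model"  # placeholder: the card does not state the base model
base = AutoModelForCausalLM.from_pretrained(base_id, quantization_config=bnb_config, device_map="auto")
tokenizer = AutoTokenizer.from_pretrained(base_id)

# Attach the SFT adapters from this repository.
model = PeftModel.from_pretrained(base, "adarshheg/sft_finetuned_adapters_olib")
```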
DavidVarona/review
DavidVarona
2024-05-13T18:19:07Z
0
0
fastai
[ "fastai", "region:us" ]
null
2024-05-13T18:18:51Z
--- tags: - fastai --- # Amazing! 🥳 Congratulations on hosting your fastai model on the Hugging Face Hub! # Some next steps 1. Fill out this model card with more information (see the template below and the [documentation here](https://huggingface.co/docs/hub/model-repos))! 2. Create a demo in Gradio or Streamlit using 🤗 Spaces ([documentation here](https://huggingface.co/docs/hub/spaces)). 3. Join the fastai community on the [Fastai Discord](https://discord.com/invite/YKrxeNn)! Greetings fellow fastlearner 🤝! Don't forget to delete this content from your model card. --- # Model card ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed
kanaluvu/bloomz-560m-prompted-finetuned
kanaluvu
2024-05-13T18:18:53Z
0
0
transformers
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2024-05-13T18:18:16Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
WbjuSrceu/llama3-8b-LoRA
WbjuSrceu
2024-05-13T18:16:00Z
0
0
transformers
[ "transformers", "safetensors", "text-generation-inference", "unsloth", "llama", "trl", "en", "base_model:unsloth/llama-3-8b-bnb-4bit", "base_model:finetune:unsloth/llama-3-8b-bnb-4bit", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2024-05-13T18:15:41Z
--- language: - en license: apache-2.0 tags: - text-generation-inference - transformers - unsloth - llama - trl base_model: unsloth/llama-3-8b-bnb-4bit --- # Uploaded model - **Developed by:** WbjuSrceu - **License:** apache-2.0 - **Finetuned from model :** unsloth/llama-3-8b-bnb-4bit This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
Mag0g/Ezekiel26_11
Mag0g
2024-05-13T18:13:00Z
127
0
transformers
[ "transformers", "safetensors", "stablelm", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2024-05-13T18:11:45Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
npvinHnivqn/rag_healthcare_vietnamese_vinallama_xattn_extractor_v08
npvinHnivqn
2024-05-13T18:12:35Z
49
0
transformers
[ "transformers", "safetensors", "blip_2_qformer", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2024-05-13T18:12:08Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
MariamElnour/llama-2-7b-restrec
MariamElnour
2024-05-13T18:11:21Z
5
1
transformers
[ "transformers", "pytorch", "llama", "text-generation", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-05-10T23:18:20Z
# Model Card for Model ID

This model is a demo for our restaurant recommendation system.

## Model Details

Fine-tuned from LLAMA2-7B-chat.
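A minimal inference sketch (not part of the original card), assuming standard `transformers` text-generation usage; the prompt is illustrative, and the model may expect Llama-2 chat formatting:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline

model_id = "MariamElnour/llama-2-7b-restrec"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

generator = pipeline("text-generation", model=model, tokenizer=tokenizer)

# Illustrative prompt for the restaurant-recommendation use case.
prompt = "Recommend a casual Italian restaurant for a family dinner."
print(generator(prompt, max_new_tokens=128)[0]["generated_text"])
```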
RichardErkhov/TeeZee_-_Bielik-SOLAR-LIKE-10.7B-Instruct-v0.1-4bits
RichardErkhov
2024-05-13T18:11:19Z
77
0
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "conversational", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "4-bit", "bitsandbytes", "region:us" ]
text-generation
2024-05-13T18:04:53Z
Quantization made by Richard Erkhov.

[Github](https://github.com/RichardErkhov)

[Discord](https://discord.gg/pvy7H8DZMG)

[Request more models](https://github.com/RichardErkhov/quant_request)

Bielik-SOLAR-LIKE-10.7B-Instruct-v0.1 - bnb 4bits
- Model creator: https://huggingface.co/TeeZee/
- Original model: https://huggingface.co/TeeZee/Bielik-SOLAR-LIKE-10.7B-Instruct-v0.1/

Original model description:
---
license: cc-by-nc-4.0
---

### TeeZee/Bielik-SOLAR-LIKE-10.7B-Instruct-v0.1

The precise recipe used by Upstage to create [SOLAR](https://huggingface.co/upstage/SOLAR-10.7B-v1.0) was applied to https://huggingface.co/speakleash/Bielik-7B-Instruct-v0.1 *(just a merge, no finetuning)*.

### Results

- the model is still coherent in Polish, even without finetuning after the merge
- instruct mode works in ooba without issues
- the model is censored and aligned
- this model seems to score highest among all versions of the original Bielik models; further finetuning should improve results even more.

![image/png](https://huggingface.co/TeeZee/Bielik-SOLAR-LIKE-10.7B-Instruct-v0.1/resolve/main/OpenLLMLeaderboard_results.png)

- on leaderboards dedicated to Polish-speaking LLMs it is 2nd, just behind the instruct version used for this merge, and that's to be expected when applying a DUS merge - very small quality loss. [Polish LLMs leaderboards](https://huggingface.co/spaces/speakleash/open_pl_llm_leaderboard)
- overall it seems like a good base for further finetuning in Polish.
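For reference, SOLAR-style depth up-scaling is typically expressed as a mergekit `passthrough` config like the sketch below. The slicing shown follows SOLAR's published recipe (duplicate layers 0-24 and 8-32 of the 32-layer base); the exact config used for this model is an assumption, not published here.

```yaml
# Hypothetical SOLAR-like DUS merge of Bielik-7B-Instruct (mergekit syntax).
slices:
  - sources:
      - model: speakleash/Bielik-7B-Instruct-v0.1
        layer_range: [0, 24]
  - sources:
      - model: speakleash/Bielik-7B-Instruct-v0.1
        layer_range: [8, 32]
merge_method: passthrough
dtype: bfloat16
```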
jama2002/clasificador-muchocine
jama2002
2024-05-13T18:10:03Z
104
0
transformers
[ "transformers", "safetensors", "electra", "text-classification", "classification", "generated_from_trainer", "base_model:mrm8488/electricidad-base-discriminator", "base_model:finetune:mrm8488/electricidad-base-discriminator", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2024-05-13T18:09:45Z
---
base_model: mrm8488/electricidad-base-discriminator
tags:
- classification
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: clasificador-muchocine
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# clasificador-muchocine

This model is a fine-tuned version of [mrm8488/electricidad-base-discriminator](https://huggingface.co/mrm8488/electricidad-base-discriminator) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.4140
- Accuracy: 0.4477

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log        | 1.0   | 388  | 1.3447          | 0.3948   |
| 1.4031        | 2.0   | 776  | 1.2922          | 0.4219   |
| 1.0011        | 3.0   | 1164 | 1.4140          | 0.4477   |

### Framework versions

- Transformers 4.40.2
- Pytorch 2.2.1+cu121
- Datasets 2.19.1
- Tokenizers 0.19.1
ludocomito/Minerva-MoE-3x3B
ludocomito
2024-05-13T18:00:08Z
2,802
1
transformers
[ "transformers", "safetensors", "mixtral", "text-generation", "moe", "frankenmoe", "merge", "mergekit", "lazymergekit", "sapienzanlp/Minerva-3B-base-v1.0", "DeepMount00/Minerva-3B-base-RAG", "FairMind/Minerva-3B-Instruct-v1.0", "base_model:DeepMount00/Minerva-3B-base-RAG", "base_model:merge:DeepMount00/Minerva-3B-base-RAG", "base_model:FairMind/Minerva-3B-Instruct-v1.0", "base_model:merge:FairMind/Minerva-3B-Instruct-v1.0", "base_model:sapienzanlp/Minerva-3B-base-v1.0", "base_model:merge:sapienzanlp/Minerva-3B-base-v1.0", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-05-13T17:46:02Z
---
license: apache-2.0
tags:
- moe
- frankenmoe
- merge
- mergekit
- lazymergekit
- sapienzanlp/Minerva-3B-base-v1.0
- DeepMount00/Minerva-3B-base-RAG
- FairMind/Minerva-3B-Instruct-v1.0
base_model:
- sapienzanlp/Minerva-3B-base-v1.0
- DeepMount00/Minerva-3B-base-RAG
- FairMind/Minerva-3B-Instruct-v1.0
---

# Minerva-MoE-3x3B

Minerva-MoE-3x3B is a Mixture of Experts (MoE) made with the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):
* [sapienzanlp/Minerva-3B-base-v1.0](https://huggingface.co/sapienzanlp/Minerva-3B-base-v1.0)
* [DeepMount00/Minerva-3B-base-RAG](https://huggingface.co/DeepMount00/Minerva-3B-base-RAG)
* [FairMind/Minerva-3B-Instruct-v1.0](https://huggingface.co/FairMind/Minerva-3B-Instruct-v1.0)

## 🧩 Configuration

```yaml
base_model: sapienzanlp/Minerva-3B-base-v1.0
experts:
  - source_model: sapienzanlp/Minerva-3B-base-v1.0
    positive_prompts:
      - "ciao"
      - "chat"
      - "parlare"
  - source_model: DeepMount00/Minerva-3B-base-RAG
    positive_prompts:
      - "rispondi a domande"
      - "cosa è"
      - "chi è"
      - "dove è"
      - "come si"
      - "spiegami"
      - "definisci"
  - source_model: FairMind/Minerva-3B-Instruct-v1.0
    positive_prompts:
      - "istruzione"
      - "input"
      - "risposta"
      - "scrivi"
      - "sequenza"
      - "istruzioni"
dtype: bfloat16
```

## 💻 Usage

```python
!pip install -qU transformers bitsandbytes accelerate

from transformers import AutoTokenizer
import transformers
import torch

model = "ludocomito/Minerva-MoE-3x3B"

tokenizer = AutoTokenizer.from_pretrained(model)
pipeline = transformers.pipeline(
    "text-generation",
    model=model,
    model_kwargs={"torch_dtype": torch.float16, "load_in_4bit": True},
)

messages = [{"role": "user", "content": "Explain what a Mixture of Experts is in less than 100 words."}]
prompt = pipeline.tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```
RichardErkhov/Qwen_-_Qwen-72B-gguf
RichardErkhov
2024-05-13T17:57:40Z
18
0
null
[ "gguf", "arxiv:2309.16609", "endpoints_compatible", "region:us" ]
null
2024-05-12T23:01:02Z
Quantization made by Richard Erkhov. [Github](https://github.com/RichardErkhov) [Discord](https://discord.gg/pvy7H8DZMG) [Request more models](https://github.com/RichardErkhov/quant_request) Qwen-72B - GGUF - Model creator: https://huggingface.co/Qwen/ - Original model: https://huggingface.co/Qwen/Qwen-72B/ | Name | Quant method | Size | | ---- | ---- | ---- | | [Qwen-72B.Q2_K.gguf](https://huggingface.co/RichardErkhov/Qwen_-_Qwen-72B-gguf/blob/main/Qwen-72B.Q2_K.gguf) | Q2_K | 24.71GB | | [Qwen-72B.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/Qwen_-_Qwen-72B-gguf/blob/main/Qwen-72B.IQ3_XS.gguf) | IQ3_XS | 28.34GB | | [Qwen-72B.IQ3_S.gguf](https://huggingface.co/RichardErkhov/Qwen_-_Qwen-72B-gguf/blob/main/Qwen-72B.IQ3_S.gguf) | IQ3_S | 29.4GB | | [Qwen-72B.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/Qwen_-_Qwen-72B-gguf/blob/main/Qwen-72B.Q3_K_S.gguf) | Q3_K_S | 29.4GB | | [Qwen-72B.IQ3_M.gguf](https://huggingface.co/RichardErkhov/Qwen_-_Qwen-72B-gguf/blob/main/Qwen-72B.IQ3_M.gguf) | IQ3_M | 32.3GB | | [Qwen-72B.Q3_K.gguf](https://huggingface.co/RichardErkhov/Qwen_-_Qwen-72B-gguf/blob/main/Qwen-72B.Q3_K.gguf) | Q3_K | 34.16GB | | [Qwen-72B.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/Qwen_-_Qwen-72B-gguf/blob/main/Qwen-72B.Q3_K_M.gguf) | Q3_K_M | 34.16GB | | [Qwen-72B.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/Qwen_-_Qwen-72B-gguf/blob/main/Qwen-72B.Q3_K_L.gguf) | Q3_K_L | 36.55GB | | [Qwen-72B.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/Qwen_-_Qwen-72B-gguf/blob/main/Qwen-72B.IQ4_XS.gguf) | IQ4_XS | 36.41GB | | [Qwen-72B.Q4_0.gguf](https://huggingface.co/RichardErkhov/Qwen_-_Qwen-72B-gguf/tree/main/) | Q4_0 | 38.18GB | | [Qwen-72B.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/Qwen_-_Qwen-72B-gguf/tree/main/) | IQ4_NL | 38.42GB | | [Qwen-72B.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/Qwen_-_Qwen-72B-gguf/tree/main/) | Q4_K_S | 38.42GB | | [Qwen-72B.Q4_K.gguf](https://huggingface.co/RichardErkhov/Qwen_-_Qwen-72B-gguf/tree/main/) | Q4_K | 41.99GB | | [Qwen-72B.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/Qwen_-_Qwen-72B-gguf/tree/main/) | Q4_K_M | 41.99GB | | [Qwen-72B.Q4_1.gguf](https://huggingface.co/RichardErkhov/Qwen_-_Qwen-72B-gguf/tree/main/) | Q4_1 | 42.32GB | | [Qwen-72B.Q5_0.gguf](https://huggingface.co/RichardErkhov/Qwen_-_Qwen-72B-gguf/tree/main/) | Q5_0 | 46.45GB | | [Qwen-72B.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/Qwen_-_Qwen-72B-gguf/tree/main/) | Q5_K_S | 46.45GB | | [Qwen-72B.Q5_K.gguf](https://huggingface.co/RichardErkhov/Qwen_-_Qwen-72B-gguf/tree/main/) | Q5_K | 49.44GB | | [Qwen-72B.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/Qwen_-_Qwen-72B-gguf/tree/main/) | Q5_K_M | 49.44GB | | [Qwen-72B.Q5_1.gguf](https://huggingface.co/RichardErkhov/Qwen_-_Qwen-72B-gguf/tree/main/) | Q5_1 | 50.59GB | | [Qwen-72B.Q6_K.gguf](https://huggingface.co/RichardErkhov/Qwen_-_Qwen-72B-gguf/tree/main/) | Q6_K | 55.24GB | | [Qwen-72B.Q8_0.gguf](https://huggingface.co/RichardErkhov/Qwen_-_Qwen-72B-gguf/tree/main/) | Q8_0 | 71.55GB | Original model description: --- language: - zh - en tags: - qwen pipeline_tag: text-generation inference: false license: other license_name: tongyi-qianwen-license-agreement license_link: https://github.com/QwenLM/Qwen/blob/main/Tongyi%20Qianwen%20LICENSE%20AGREEMENT --- # Qwen-72B <p align="center"> <img src="https://qianwen-res.oss-cn-beijing.aliyuncs.com/logo_qwen.jpg" width="400"/> <p> <br> <p align="center"> 🤗 <a href="https://huggingface.co/Qwen">Hugging Face</a>&nbsp&nbsp | &nbsp&nbsp🤖 <a 
href="https://modelscope.cn/organization/qwen">ModelScope</a>&nbsp&nbsp | &nbsp&nbsp 📑 <a href="https://arxiv.org/abs/2309.16609">Paper</a> &nbsp&nbsp | &nbsp&nbsp🖥️ <a href="https://modelscope.cn/studios/qwen/Qwen-72B-Chat-Demo/summary">Demo</a> <br> <a href="https://github.com/QwenLM/Qwen/blob/main/assets/wechat.png">WeChat (微信)</a>&nbsp&nbsp | &nbsp&nbsp<a href="https://discord.gg/z3GAxXZ9Ce">Discord</a>&nbsp&nbsp | &nbsp&nbsp<a href="https://dashscope.aliyun.com">API</a> </p> <br> ## 介绍 (Introduction) **通义千问-72B**(**Qwen-72B**)是阿里云研发的通义千问大模型系列的720亿参数规模的模型。Qwen-72B是基于Transformer的大语言模型, 在超大规模的预训练数据上进行训练得到。预训练数据类型多样,覆盖广泛,包括大量网络文本、专业书籍、代码等。同时,在Qwen-72B的基础上,我们使用对齐机制打造了基于大语言模型的AI助手Qwen-72B-Chat。本仓库为Qwen-72B的仓库。 通义千问-72B(Qwen-72B)主要有以下特点: 1. **大规模高质量训练语料**:使用超过3万亿tokens的数据进行预训练,包含高质量中、英、多语言、代码、数学等数据,涵盖通用及专业领域的训练语料。通过大量对比实验对预训练语料分布进行了优化。 2. **强大的性能**:Qwen-72B在多个中英文下游评测任务上(涵盖常识推理、代码、数学、翻译等),效果显著超越现有的开源模型。具体评测结果请详见下文。 3. **覆盖更全面的词表**:相比目前以中英词表为主的开源模型,Qwen-72B使用了约15万大小的词表。该词表对多语言更加友好,方便用户在不扩展词表的情况下对部分语种进行能力增强和扩展。 4. **较长的上下文支持**:Qwen-72B支持32k的上下文长度。 如果您想了解更多关于通义千问72B开源模型的细节,我们建议您参阅[GitHub代码库](https://github.com/QwenLM/Qwen)。 **Qwen-72B** is the 72B-parameter version of the large language model series, Qwen (abbr. Tongyi Qianwen), proposed by Alibaba Cloud. Qwen-72B is a Transformer-based large language model, which is pretrained on a large volume of data, including web texts, books, codes, etc. Additionally, based on the pretrained Qwen-72B, we release Qwen-72B-Chat, a large-model-based AI assistant, which is trained with alignment techniques. This repository is the one for Qwen-72B. The features of Qwen-72B include: 1. **Large-scale high-quality training corpora**: It is pretrained on over 3 trillion tokens, including Chinese, English, multilingual texts, code, and mathematics, covering general and professional fields. The distribution of the pre-training corpus has been optimized through a large number of ablation experiments. 2. **Competitive performance**: It significantly surpasses existing open-source models on multiple Chinese and English downstream evaluation tasks (including commonsense, reasoning, code, mathematics, etc.). See below for specific evaluation results. 3. **More comprehensive vocabulary coverage**: Compared with other open-source models based on Chinese and English vocabularies, Qwen-72B uses a vocabulary of over 150K tokens. This vocabulary is more friendly to multiple languages, enabling users to directly further enhance the capability for certain languages without expanding the vocabulary. 4. **Longer context support**: Qwen-72B supports 32k context length. For more details about the open-source model of Qwen-72B, please refer to the [GitHub](https://github.com/QwenLM/Qwen) code repository. <br> ## 要求(Requirements) * python 3.8及以上版本 * pytorch 1.12及以上版本,推荐2.0及以上版本 * 建议使用CUDA 11.4及以上(GPU用户、flash-attention用户等需考虑此选项) * **运行BF16或FP16模型需要多卡至少144GB显存(例如2xA100-80G或5xV100-32G);运行Int4模型至少需要48GB显存(例如1xA100-80G或2xV100-32G)。** * python 3.8 and above * pytorch 1.12 and above, 2.0 and above are recommended * CUDA 11.4 and above are recommended (this is for GPU users, flash-attention users, etc.) **To run Qwen-72B-Chat in bf16/fp16, at least 144GB GPU memory is required (e.g., 2xA100-80G or 5xV100-32G). 
To run it in int4, at least 48GB GPU memory is requred (e.g., 1xA100-80G or 2xV100-32G).** <br> ## 依赖项 (Dependency) 运行Qwen-72B,请确保满足上述要求,再执行以下pip命令安装依赖库 To run Qwen-72B, please make sure you meet the above requirements, and then execute the following pip commands to install the dependent libraries. ```bash pip install transformers==4.32.0 accelerate tiktoken einops scipy transformers_stream_generator==0.0.4 peft deepspeed ``` 另外,推荐安装`flash-attention`库(**当前已支持flash attention 2**),以实现更高的效率和更低的显存占用。 In addition, it is recommended to install the `flash-attention` library (**we support flash attention 2 now.**) for higher efficiency and lower memory usage. ```bash git clone https://github.com/Dao-AILab/flash-attention cd flash-attention && pip install . # 下方安装可选,安装可能比较缓慢。 # Below are optional. Installing them might be slow. # pip install csrc/layer_norm # 如果你的flash-attn版本高于2.1.1,下方不需要安装。 # If the version of flash-attn is higher than 2.1.1, the following is not needed. # pip install csrc/rotary ``` <br> ## 快速使用(Quickstart) 您可以通过以下代码轻松调用: You can easily call the model with the following code: ```python from transformers import AutoModelForCausalLM, AutoTokenizer from transformers.generation import GenerationConfig # Note: The default behavior now has injection attack prevention off. tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen-72B", trust_remote_code=True) # use bf16 # model = AutoModelForCausalLM.from_pretrained("Qwen/Qwen-72B", device_map="auto", trust_remote_code=True, bf16=True).eval() # use fp16 # model = AutoModelForCausalLM.from_pretrained("Qwen/Qwen-72B", device_map="auto", trust_remote_code=True, fp16=True).eval() # use cpu only # model = AutoModelForCausalLM.from_pretrained("Qwen/Qwen-72B", device_map="cpu", trust_remote_code=True).eval() # use auto mode, automatically select precision based on the device. model = AutoModelForCausalLM.from_pretrained("Qwen/Qwen-72B", device_map="auto", trust_remote_code=True).eval() # Specify hyperparameters for generation. But if you use transformers>=4.32.0, there is no need to do this. # model.generation_config = GenerationConfig.from_pretrained("Qwen/Qwen-72B", trust_remote_code=True) inputs = tokenizer('蒙古国的首都是乌兰巴托(Ulaanbaatar)\n冰岛的首都是雷克雅未克(Reykjavik)\n埃塞俄比亚的首都是', return_tensors='pt') inputs = inputs.to(model.device) pred = model.generate(**inputs) print(tokenizer.decode(pred.cpu()[0], skip_special_tokens=True)) # 蒙古国的首都是乌兰巴托(Ulaanbaatar)\n冰岛的首都是雷克雅未克(Reykjavik)\n埃塞俄比亚的首都是亚的斯亚贝巴(Addis Ababa)... ``` 关于更多的使用说明,请参考我们的[GitHub repo](https://github.com/QwenLM/Qwen)获取更多信息。 For more information, please refer to our [GitHub repo](https://github.com/QwenLM/Qwen) for more information. <br> ## Tokenizer > 注:作为术语的“tokenization”在中文中尚无共识的概念对应,本文档采用英文表达以利说明。 基于tiktoken的分词器有别于其他分词器,比如sentencepiece分词器。尤其在微调阶段,需要特别注意特殊token的使用。关于tokenizer的更多信息,以及微调时涉及的相关使用,请参阅[文档](https://github.com/QwenLM/Qwen/blob/main/tokenization_note_zh.md)。 Our tokenizer based on tiktoken is different from other tokenizers, e.g., sentencepiece tokenizer. You need to pay attention to special tokens, especially in finetuning. For more detailed information on the tokenizer and related use in fine-tuning, please refer to the [documentation](https://github.com/QwenLM/Qwen/blob/main/tokenization_note.md). 
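As a quick illustration of the point above, the sketch below loads and round-trips the tokenizer with `transformers` (same model ID and `trust_remote_code` flag as in the Quickstart; the comment about digit splitting reflects the behavior described later in this card):

```python
from transformers import AutoTokenizer

# Load the tiktoken-based Qwen tokenizer (same call as in the Quickstart above).
tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen-72B", trust_remote_code=True)

ids = tokenizer.encode("Qwen-72B supports 32k context.")
print(len(ids), ids)          # per the card, numbers such as "32" are split per digit
print(tokenizer.decode(ids))  # round-trips back to the original string
```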
<br>

## Model

The details of the model architecture of Qwen-72B are listed as follows:

| Hyperparameter  | Value  |
|:----------------|:-------|
| n_layers        | 80     |
| n_heads         | 64     |
| d_model         | 8192   |
| vocab size      | 151851 |
| sequence length | 32768  |

For position encoding, FFN activation function, and normalization methods, we adopt the prevalent practices, i.e., RoPE relative position encoding, SwiGLU for the activation function, and RMSNorm for normalization (with optional installation of flash-attention for acceleration).

For tokenization, compared to the current mainstream open-source models based on Chinese and English vocabularies, Qwen-72B uses a vocabulary of over 150K tokens. The vocabulary is built on the `cl100k_base` BPE vocabulary used by GPT-4, with optimizations for Chinese and other languages. It first considers efficient encoding of Chinese, English, and code data, and is also more friendly to multiple languages, enabling users to directly enhance the capability of some languages without expanding the vocabulary. It segments numbers by single digits and calls the [tiktoken](https://github.com/openai/tiktoken) tokenizer library for efficient tokenization.

We randomly selected a corpus of 1 million documents per language to test and compare the encoding compression rates of different models (with XLM-R, which supports 100 languages, as the base value 1; lower is better). The specific performance is shown in the figure below. As can be seen, while ensuring the efficient decoding of Chinese, English, and code, Qwen-72B also achieves a high compression rate for many widely used languages (such as th, he, ar, ko, vi, ja, tr, id, pl, ru, nl, pt, it, de, es, fr, etc.), equipping the model with strong scalability as well as high training and inference efficiency in these languages.

<p align="center">
    <img src="assets/tokenizer.png" style="width: 1200px"/>
<p>

For pre-training data, on the one hand, Qwen-72B uses part of the open-source generic corpus. On the other hand, it uses a massive amount of accumulated web corpus and high-quality text content. The scale of the corpus reaches over 3T tokens after deduplication and filtration, encompassing web text, encyclopedias, books, code, mathematics, and various vertical domains.
<br>

## Evaluation

We selected MMLU, C-Eval, GSM8K, MATH, HumanEval, MBPP, BBH, and CMMLU, which are currently popular benchmarks, to test the model's Chinese and English knowledge capabilities, translation, mathematical reasoning, coding, and other capabilities. Qwen-72B achieves the best results among open-source models on all of these benchmarks. From the following comprehensive evaluation results, we can see that the Qwen models outperform similarly sized open-source models on all tasks.
| Model              | Avg      | MMLU     | C-Eval   | GSM8K    | MATH     | HumanEval | MBPP     | BBH      | AGIEval  | GaokaoBench | CMMLU    |
|:-------------------|:--------:|:--------:|:--------:|:--------:|:--------:|:---------:|:--------:|:--------:|:--------:|:-----------:|:--------:|
|                    |          | 5-shot   | 5-shot   | 8-shot   | 4-shot   | 0-shot    | 3-shot   | 3-shot   | 0-shot   | 0-shot      | 5-shot   |
| LLaMA2-7B          | 24.4     | 46.8     | 32.5     | 16.7     | 3.3      | 12.8      | 20.8     | 38.2     | 21.8     | 18.9        | 31.8     |
| LLaMA2-13B         | 31.3     | 55.0     | 41.4     | 29.6     | 5.0      | 18.9      | 30.3     | 45.6     | 30.9     | 18.2        | 38.4     |
| LLaMA2-70B         | 45.7     | 69.7     | 50.1     | 63.5     | 12.0     | 26.2      | 39.6     | 64.9     | 54.2     | 23.3        | 53.6     |
| InternLM-20B       | 47.2     | 62.1     | 58.8     | 52.6     | 7.9      | 25.6      | 35.6     | 52.5     | 59.0     | 59.0        | 59.0     |
| Yi-34B             | 58.0     | 76.3     | 81.8     | 67.9     | 15.9     | 26.2      | 38.2     | 66.4     | 56.5     | 68.3        | 82.6     |
| XVERSE-65B         | -        | 70.8     | 68.6     | 60.3     | -        | 26.3      | -        | -        | -        | -           | -        |
| **Qwen-7B**        | 46.2     | 58.2     | 63.5     | 51.7     | 11.6     | 29.9      | 31.6     | 45.0     | 45.3     | 62.5        | 62.2     |
| **Qwen-14B**       | 52.7     | 66.3     | 72.1     | 61.3     | 24.8     | 32.3      | 40.8     | 53.4     | 51.9     | 52.7        | 71.0     |
| **Qwen-72B**       | **66.4** | **77.4** | **83.3** | **78.9** | **35.2** | **35.4**  | **52.2** | **67.7** | **62.5** | **87.6**    | **83.6** |

### Long-Context Evaluation

Qwen-72B is trained with an extended RoPE base and supports extrapolation to a 32k context length. We use arXiv data for language modeling evaluation. The PPL (lower is better) results are as follows:

<table>
    <tr>
        <th rowspan="2">Model</th><th colspan="3" align="center">Sequence Length</th>
    </tr>
    <tr>
        <th align="center">8192</th><th align="center">16384</th><th align="center">32768</th>
    </tr>
    <tr>
        <td>Qwen-72B</td><td align="center">2.828</td><td align="center">2.734</td><td align="center">2.717</td>
    </tr>
</table>

## Reproduction

We have provided evaluation scripts to reproduce the performance of our model; see this [link](https://github.com/QwenLM/Qwen/tree/main/eval) for details. Note: small fluctuations in reproduced results are normal, owing to rounding errors caused by hardware and frameworks.
<br>

## FAQ

If you meet problems, please refer to the [FAQ](https://github.com/QwenLM/Qwen/blob/main/FAQ.md) and existing issues to search for a solution before you launch a new issue.
<br>

## Citation

If you find our work helpful, feel free to give us a cite.
```
@article{qwen,
  title={Qwen Technical Report},
  author={Jinze Bai and Shuai Bai and Yunfei Chu and Zeyu Cui and Kai Dang and Xiaodong Deng and Yang Fan and Wenbin Ge and Yu Han and Fei Huang and Binyuan Hui and Luo Ji and Mei Li and Junyang Lin and Runji Lin and Dayiheng Liu and Gao Liu and Chengqiang Lu and Keming Lu and Jianxin Ma and Rui Men and Xingzhang Ren and Xuancheng Ren and Chuanqi Tan and Sinan Tan and Jianhong Tu and Peng Wang and Shijie Wang and Wei Wang and Shengguang Wu and Benfeng Xu and Jin Xu and An Yang and Hao Yang and Jian Yang and Shusheng Yang and Yang Yao and Bowen Yu and Hongyi Yuan and Zheng Yuan and Jianwei Zhang and Xingxuan Zhang and Yichang Zhang and Zhenru Zhang and Chang Zhou and Jingren Zhou and Xiaohuan Zhou and Tianhang Zhu},
  journal={arXiv preprint arXiv:2309.16609},
  year={2023}
}
```
<br>

## License Agreement

Our code and checkpoints are open for research purposes, and they are also allowed for commercial use. Check the [LICENSE](https://github.com/QwenLM/Qwen/blob/main/Tongyi%20Qianwen%20LICENSE%20AGREEMENT) for more details about the license. If you have requirements for commercial use, please fill out the [form](https://dashscope.console.aliyun.com/openModelApply/Qwen-72B-Chat) to apply.
<br>

## Contact Us

If you would like to leave a message for our research or product teams, join our Discord, WeChat, or DingTalk groups! You can also reach us by email at qianwen_opensource@alibabacloud.com.
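To try one of the GGUF quantizations listed in the table at the top of this entry, a minimal `llama-cpp-python` sketch follows. The file name, context size, and GPU-layer setting are assumptions; download the quant first, and note that the quants linked to folders rather than single files are stored in multiple parts that may need to be joined before loading.

```python
from llama_cpp import Llama  # pip install llama-cpp-python

# Assumed local path: any quant from the table above, downloaded beforehand.
llm = Llama(model_path="Qwen-72B.Q4_K_M.gguf", n_ctx=4096, n_gpu_layers=-1)

out = llm("蒙古国的首都是乌兰巴托(Ulaanbaatar)\n冰岛的首都是", max_tokens=32)
print(out["choices"][0]["text"])
```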
LoneStriker/falcon-11B-GGUF
LoneStriker
2024-05-13T17:51:35Z
14
3
null
[ "gguf", "en", "de", "es", "fr", "dataset:tiiuae/falcon-refinedweb", "arxiv:2005.14165", "arxiv:2104.09864", "arxiv:1911.02150", "arxiv:2307.08691", "arxiv:2311.16867", "region:us", "conversational" ]
null
2024-05-13T17:34:30Z
---
datasets:
- tiiuae/falcon-refinedweb
language:
- en
- de
- es
- fr
inference: false
---

# 🚀 Falcon2-11B

**Falcon2-11B is an 11B parameters causal decoder-only model built by [TII](https://www.tii.ae) and trained over 5,000B tokens of [RefinedWeb](https://huggingface.co/datasets/tiiuae/falcon-refinedweb) enhanced with curated corpora. The model is made available under the [TII Falcon License 2.0](https://falconllm-staging.tii.ae/falcon-2-terms-and-conditions.html), the permissive Apache 2.0-based software license which includes an [acceptable use policy](https://falconllm-staging.tii.ae/falcon-2-acceptable-use-policy.html) that promotes the responsible use of AI.**

*Paper coming soon 😊.*

🤗 To get started with Falcon (inference, finetuning, quantization, etc.), we recommend reading [this great blogpost from HF](https://huggingface.co/blog/falcon)!

⚠️ **This is a raw, pretrained model, which should be further finetuned for most use cases.**

```python
from transformers import AutoTokenizer, AutoModelForCausalLM
import transformers
import torch

model = "tiiuae/falcon-11B"

tokenizer = AutoTokenizer.from_pretrained(model)
pipeline = transformers.pipeline(
    "text-generation",
    model=model,
    tokenizer=tokenizer,
    torch_dtype=torch.bfloat16,
)
sequences = pipeline(
    "Can you explain the concepts of Quantum Computing?",
    max_length=200,
    do_sample=True,
    top_k=10,
    num_return_sequences=1,
    eos_token_id=tokenizer.eos_token_id,
)
for seq in sequences:
    print(f"Result: {seq['generated_text']}")
```

💥 **Falcon LLMs require PyTorch 2.0 for use with `transformers`!**

For fast inference with Falcon, check out [Text Generation Inference](https://github.com/huggingface/text-generation-inference)! Read more in this [blog post](https://huggingface.co/blog/falcon).

# Model Card for Falcon2-11B

## Model Details

### Model Description

- **Developed by:** [https://www.tii.ae](https://www.tii.ae)
- **Model type:** Causal decoder-only
- **Language(s) (NLP):** English, German, Spanish, French, Italian, Portuguese, Polish, Dutch, Romanian, Czech, Swedish
- **License:** [TII Falcon License 2.0](https://falconllm-staging.tii.ae/falcon-2-terms-and-conditions.html)

### Model Source

- **Paper:** *coming soon*.

## Uses

### Direct Use

Research on large language models; as a foundation for further specialization and finetuning for specific use cases (e.g., summarization, text generation, chatbot, etc.)

### Out-of-Scope Use

Production use without adequate assessment of risks and mitigation; any use cases which may be considered irresponsible or harmful.

## Bias, Risks, and Limitations

Falcon2-11B is trained mostly on English, but also German, Spanish, French, Italian, Portuguese, Polish, Dutch, Romanian, Czech, and Swedish. It will not generalize appropriately to other languages. Furthermore, as it is trained on large-scale corpora representative of the web, it will carry the stereotypes and biases commonly encountered online.

### Recommendations

We recommend users of Falcon2-11B to consider finetuning it for the specific set of tasks of interest, and for guardrails and appropriate precautions to be taken for any production use.
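As a concrete starting point for the finetuning recommended above, here is a minimal, hypothetical LoRA sketch using the `peft` library. The rank, alpha, and `target_modules` names are assumptions rather than TII's recipe; inspect `model.named_modules()` to confirm the attention projection names for this architecture.

```python
import torch
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForCausalLM

# Hypothetical LoRA setup; hyperparameters are illustrative only.
model = AutoModelForCausalLM.from_pretrained(
    "tiiuae/falcon-11B", torch_dtype=torch.bfloat16, device_map="auto"
)
lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    target_modules=["query_key_value"],  # assumed Falcon attention projection name
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # only the adapter weights remain trainable
```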
## How to Get Started with the Model

```python
from transformers import AutoTokenizer, AutoModelForCausalLM
import transformers
import torch

model = "tiiuae/falcon-11B"

tokenizer = AutoTokenizer.from_pretrained(model)
pipeline = transformers.pipeline(
    "text-generation",
    model=model,
    tokenizer=tokenizer,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)
sequences = pipeline(
    "Can you explain the concepts of Quantum Computing?",
    max_length=200,
    do_sample=True,
    top_k=10,
    num_return_sequences=1,
    eos_token_id=tokenizer.eos_token_id,
)
for seq in sequences:
    print(f"Result: {seq['generated_text']}")
```

## Training Details

### Training Data

Falcon2-11B was trained over 5,000B tokens of [RefinedWeb](https://huggingface.co/datasets/tiiuae/falcon-refinedweb), a high-quality filtered and deduplicated web dataset which we enhanced with curated corpora. It followed a four-stage training strategy. The first three stages were focused on increasing the context length, from 2048 to 4096 and finally to 8192 tokens. The last stage aimed to further enhance performance using only high-quality data.

Overall, the data sources included RefinedWeb-English, RefinedWeb-Europe (cs, de, es, fr, it, nl, pl, pt, ro, sv), high-quality technical data, code data, and conversational data extracted from public sources.

The training stages were as follows:

| **Stage** | **Context length** | **Tokens** |
|-----------|--------------------|------------|
| Stage 1   | 2048               | 4500 B     |
| Stage 2   | 4096               | 250 B      |
| Stage 3   | 8192               | 250 B      |
| Stage 4   | 8192               | 500 B      |

The data was tokenized with the Falcon-[7B](https://huggingface.co/tiiuae/falcon-7b)/[11B](https://huggingface.co/tiiuae/falcon-11B) tokenizer.

### Training Procedure

Falcon2-11B was trained on 1024 A100 40GB GPUs for the majority of the training, using a 3D parallelism strategy (TP=8, PP=1, DP=128) combined with ZeRO and Flash-Attention 2.

#### Training Hyperparameters

| **Hyperparameter** | **Value**  | **Comment**                                                                    |
|--------------------|------------|--------------------------------------------------------------------------------|
| Precision          | `bfloat16` |                                                                                |
| Optimizer          | AdamW      |                                                                                |
| Max learning rate  | 3.7e-4     | Following a linear warm-up, then cosine decay to 1.89e-5 across 4500 B tokens. |
| Weight decay       | 1e-1       |                                                                                |
| Z-loss             | 1e-4       |                                                                                |
| Batch size         | Variable   | Batch size was gradually increased during the training                         |

#### Speeds, Sizes, Times

The model training took roughly two months.

## Evaluation

| English Benchmark     | **Value** |
|-----------------------|-----------|
| ARC-Challenge-25shots | 59.73     |
| HellaSwag-10shots     | 82.91     |
| MMLU-5shots           | 58.37     |
| Winogrande-5shots     | 78.30     |
| TruthfulQA-0shot      | 52.56     |
| GSM8k-5shots          | 53.83     |
| ARC-Challenge-0shot   | 50.17     |
| ARC-Easy-0shot        | 77.78     |
| Hellaswag-0shot       | 82.07     |

We thank the leaderboard team from HuggingFace for providing an official evaluation of our model on the leaderboard tasks.

## Technical Specifications

### Model Architecture and Objective

Falcon2-11B is a causal decoder-only model trained on a causal language modeling task (i.e., predict the next token).

The architecture is broadly adapted from the GPT-3 paper ([Brown et al., 2020](https://arxiv.org/abs/2005.14165)), with the following differences:

* **Positional embeddings:** rotary ([Su et al., 2021](https://arxiv.org/abs/2104.09864));
* **Attention:** multiquery ([Shazeer et al., 2019](https://arxiv.org/abs/1911.02150)) and FlashAttention-2 ([Dao, 2023](https://arxiv.org/abs/2307.08691));
* **Decoder-block:** parallel attention/MLP.
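To make the multiquery design above concrete, the toy sketch below shows the tensor shapes involved (sizes are made up and this is not TII's implementation): all query heads attend against a single shared key/value head, which shrinks the KV cache during generation.

```python
import torch

# Toy multi-query attention shapes: 32 query heads, one shared KV head.
batch, seq, n_q_heads, head_dim = 1, 16, 32, 128  # illustrative sizes
q = torch.randn(batch, n_q_heads, seq, head_dim)
k = torch.randn(batch, 1, seq, head_dim)  # single KV head, broadcast over query heads
v = torch.randn(batch, 1, seq, head_dim)

scores = q @ k.transpose(-2, -1) / head_dim ** 0.5  # (1, 32, 16, 16) via broadcasting
out = torch.softmax(scores, dim=-1) @ v             # (1, 32, 16, 128)
print(out.shape)
```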
| **Hyperparameter** | **Value** | **Comment**           |
|--------------------|-----------|-----------------------|
| Layers             | 60        |                       |
| `d_model`          | 4096      |                       |
| `head_dim`         | 128       |                       |
| Vocabulary         | 65024     |                       |
| Sequence length    | 8192      | During stages 3 and 4 |

### Compute Infrastructure

#### Hardware

Falcon2-11B was trained on AWS SageMaker, using on average 1024 A100 40GB GPUs in 128 p4d instances.

#### Software

Falcon2-11B was trained on a custom distributed training codebase, Gigatron. It uses a 3D parallelism approach combined with ZeRO, high-performance Triton kernels, and FlashAttention-2. More details about the distributed training strategy can be found in [Almazrouei et al.](https://arxiv.org/abs/2311.16867).

## Citation

*Paper coming soon* 😊.

## License

Falcon2-11B is licensed under the [TII Falcon License 2.0](https://falconllm-staging.tii.ae/falcon-2-terms-and-conditions.html), the permissive Apache 2.0-based software license which includes an [acceptable use policy](https://falconllm-staging.tii.ae/falcon-2-acceptable-use-policy.html) that promotes the responsible use of AI.

## Contact

falconllm@tii.ae
Jumane555/Rr
Jumane555
2024-05-13T17:43:50Z
0
0
null
[ "license:apache-2.0", "region:us" ]
null
2024-05-13T17:43:50Z
--- license: apache-2.0 ---
Litzy619/PHI30511HMA15H
Litzy619
2024-05-13T17:42:20Z
0
0
null
[ "safetensors", "generated_from_trainer", "base_model:microsoft/Phi-3-mini-4k-instruct", "base_model:finetune:microsoft/Phi-3-mini-4k-instruct", "license:mit", "region:us" ]
null
2024-05-13T07:18:10Z
--- license: mit base_model: microsoft/Phi-3-mini-4k-instruct tags: - generated_from_trainer model-index: - name: PHI30511HMA15H results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # PHI30511HMA15H This model is a fine-tuned version of [microsoft/Phi-3-mini-4k-instruct](https://huggingface.co/microsoft/Phi-3-mini-4k-instruct) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.0823 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 16 - total_train_batch_size: 128 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine_with_restarts - lr_scheduler_warmup_steps: 100 - num_epochs: 3 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 2.6437 | 0.09 | 10 | 0.2765 | | 0.1835 | 0.18 | 20 | 0.1451 | | 0.1607 | 0.27 | 30 | 0.1438 | | 0.139 | 0.36 | 40 | 0.1311 | | 0.1248 | 0.45 | 50 | 0.1177 | | 0.1233 | 0.54 | 60 | 0.1068 | | 0.0966 | 0.63 | 70 | 0.0814 | | 0.0851 | 0.73 | 80 | 0.0705 | | 0.0809 | 0.82 | 90 | 0.0802 | | 0.0744 | 0.91 | 100 | 0.0700 | | 0.0788 | 1.0 | 110 | 0.0774 | | 0.0466 | 1.09 | 120 | 0.0858 | | 0.0576 | 1.18 | 130 | 0.0824 | | 0.0586 | 1.27 | 140 | 0.0736 | | 0.0619 | 1.36 | 150 | 0.0723 | | 0.0588 | 1.45 | 160 | 0.0713 | | 0.0524 | 1.54 | 170 | 0.0810 | | 0.0569 | 1.63 | 180 | 0.0759 | | 0.0502 | 1.72 | 190 | 0.0779 | | 0.0569 | 1.81 | 200 | 0.0679 | | 0.0517 | 1.9 | 210 | 0.0700 | | 0.0466 | 1.99 | 220 | 0.0682 | | 0.0213 | 2.08 | 230 | 0.0821 | | 0.0166 | 2.18 | 240 | 0.1070 | | 0.0177 | 2.27 | 250 | 0.1156 | | 0.02 | 2.36 | 260 | 0.0961 | | 0.0263 | 2.45 | 270 | 0.0826 | | 0.0126 | 2.54 | 280 | 0.0851 | | 0.0181 | 2.63 | 290 | 0.0858 | | 0.0233 | 2.72 | 300 | 0.0839 | | 0.0196 | 2.81 | 310 | 0.0827 | | 0.0153 | 2.9 | 320 | 0.0823 | | 0.0192 | 2.99 | 330 | 0.0823 | ### Framework versions - Transformers 4.36.0.dev0 - Pytorch 2.1.2+cu121 - Datasets 2.18.0 - Tokenizers 0.14.1
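For readers who want to set up a comparable run, here is a hedged sketch of how the hyperparameters listed above might map onto `transformers.TrainingArguments`. The actual training script is not published, so everything beyond the listed values (output directory, precision flag) is an assumption; the Adam betas and epsilon above match the `TrainingArguments` defaults.

```python
from transformers import TrainingArguments

# Hypothetical reconstruction of the listed hyperparameters; the real script is unpublished.
training_args = TrainingArguments(
    output_dir="PHI30511HMA15H",           # assumed output directory
    learning_rate=3e-4,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    gradient_accumulation_steps=16,        # 8 x 16 = 128 total train batch size
    seed=42,
    lr_scheduler_type="cosine_with_restarts",
    warmup_steps=100,
    num_train_epochs=3,
    fp16=True,                             # "Native AMP" mixed precision
)
```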
Litzy619/PHI30511HMA14H
Litzy619
2024-05-13T17:39:53Z
0
0
null
[ "safetensors", "generated_from_trainer", "base_model:microsoft/Phi-3-mini-4k-instruct", "base_model:finetune:microsoft/Phi-3-mini-4k-instruct", "license:mit", "region:us" ]
null
2024-05-13T07:16:53Z
--- license: mit base_model: microsoft/Phi-3-mini-4k-instruct tags: - generated_from_trainer model-index: - name: PHI30511HMA14H results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # PHI30511HMA14H This model is a fine-tuned version of [microsoft/Phi-3-mini-4k-instruct](https://huggingface.co/microsoft/Phi-3-mini-4k-instruct) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.0823 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 16 - total_train_batch_size: 128 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine_with_restarts - lr_scheduler_warmup_steps: 100 - num_epochs: 3 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 2.6437 | 0.09 | 10 | 0.2765 | | 0.1835 | 0.18 | 20 | 0.1451 | | 0.1607 | 0.27 | 30 | 0.1438 | | 0.139 | 0.36 | 40 | 0.1311 | | 0.1248 | 0.45 | 50 | 0.1177 | | 0.1233 | 0.54 | 60 | 0.1068 | | 0.0966 | 0.63 | 70 | 0.0814 | | 0.0851 | 0.73 | 80 | 0.0705 | | 0.0809 | 0.82 | 90 | 0.0802 | | 0.0744 | 0.91 | 100 | 0.0700 | | 0.0788 | 1.0 | 110 | 0.0774 | | 0.0466 | 1.09 | 120 | 0.0858 | | 0.0576 | 1.18 | 130 | 0.0824 | | 0.0586 | 1.27 | 140 | 0.0736 | | 0.0619 | 1.36 | 150 | 0.0723 | | 0.0588 | 1.45 | 160 | 0.0713 | | 0.0524 | 1.54 | 170 | 0.0810 | | 0.0569 | 1.63 | 180 | 0.0759 | | 0.0502 | 1.72 | 190 | 0.0779 | | 0.0569 | 1.81 | 200 | 0.0679 | | 0.0517 | 1.9 | 210 | 0.0700 | | 0.0466 | 1.99 | 220 | 0.0682 | | 0.0213 | 2.08 | 230 | 0.0821 | | 0.0166 | 2.18 | 240 | 0.1070 | | 0.0177 | 2.27 | 250 | 0.1156 | | 0.02 | 2.36 | 260 | 0.0961 | | 0.0263 | 2.45 | 270 | 0.0826 | | 0.0126 | 2.54 | 280 | 0.0851 | | 0.0181 | 2.63 | 290 | 0.0858 | | 0.0233 | 2.72 | 300 | 0.0839 | | 0.0196 | 2.81 | 310 | 0.0827 | | 0.0153 | 2.9 | 320 | 0.0823 | | 0.0192 | 2.99 | 330 | 0.0823 | ### Framework versions - Transformers 4.36.0.dev0 - Pytorch 2.1.2+cu121 - Datasets 2.18.0 - Tokenizers 0.14.1