Dataset schema (column name, type, and observed range):

| Column | Type | Range / values |
|:--|:--|:--|
| modelId | string | length 5 to 139 |
| author | string | length 2 to 42 |
| last_modified | timestamp[us, tz=UTC] | 2020-02-15 11:33:14 to 2025-09-13 12:31:59 |
| downloads | int64 | 0 to 223M |
| likes | int64 | 0 to 11.7k |
| library_name | string | 556 distinct values |
| tags | list | length 1 to 4.05k |
| pipeline_tag | string | 55 distinct values |
| createdAt | timestamp[us, tz=UTC] | 2022-03-02 23:29:04 to 2025-09-13 12:26:40 |
| card | string | length 11 to 1.01M |
awesomedeba10/rl-course-lunarLander
awesomedeba10
2024-03-09T14:09:27Z
0
0
stable-baselines3
[ "stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2024-03-09T14:09:10Z
--- library_name: stable-baselines3 tags: - LunarLander-v2 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: PPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 metrics: - type: mean_reward value: 253.44 +/- 16.48 name: mean_reward verified: false --- # **PPO** Agent playing **LunarLander-v2** This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code ```python from stable_baselines3 import ... from huggingface_sb3 import load_from_hub ... ```
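The usage section of this card is left as a TODO. A minimal sketch of loading and evaluating the checkpoint with huggingface_sb3 and Stable-Baselines3 follows; the filename `ppo-LunarLander-v2.zip` is an assumption (the course default), not confirmed by the repo:

```python
import gymnasium as gym
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO
from stable_baselines3.common.evaluation import evaluate_policy

# Download the checkpoint from the Hub; the filename is an assumption.
checkpoint = load_from_hub(
    repo_id="awesomedeba10/rl-course-lunarLander",
    filename="ppo-LunarLander-v2.zip",
)
model = PPO.load(checkpoint)

# Evaluate over a few episodes to approximate the reported mean_reward.
env = gym.make("LunarLander-v2")
mean_reward, std_reward = evaluate_policy(model, env, n_eval_episodes=10)
print(f"mean_reward={mean_reward:.2f} +/- {std_reward:.2f}")
```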
HosseinAmir/q_frozenLake-v2
HosseinAmir
2024-03-09T14:04:03Z
0
0
null
[ "FrozenLake-v1-4x4-no_slippery", "q-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
reinforcement-learning
2024-03-09T13:39:01Z
--- tags: - FrozenLake-v1-4x4-no_slippery - q-learning - reinforcement-learning - custom-implementation model-index: - name: q_frozenLake-v2 results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: FrozenLake-v1-4x4-no_slippery type: FrozenLake-v1-4x4-no_slippery metrics: - type: mean_reward value: 1.00 +/- 0.00 name: mean_reward verified: false --- # **Q-Learning** Agent playing **FrozenLake-v1** This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**. ## Usage ```python model = load_from_hub(repo_id="HosseinAmir/q_frozenLake-v2", filename="q-learning.pkl") # Don't forget to check if you need to add additional attributes (is_slippery=False etc) env = gym.make(model["env_id"]) ```
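The snippet above calls `load_from_hub` without importing or defining it; in the Deep RL course these Q-learning models are plain pickled dicts, so a helper along the following lines makes the usage self-contained (a sketch assuming the course's pickle layout, not confirmed by this repo; it applies verbatim to the other Q-learning cards in this batch):

```python
import pickle

import gymnasium as gym
from huggingface_hub import hf_hub_download

def load_from_hub(repo_id: str, filename: str) -> dict:
    """Download a pickled Q-learning model (Q-table plus metadata) from the Hub."""
    local_path = hf_hub_download(repo_id=repo_id, filename=filename)
    with open(local_path, "rb") as f:
        return pickle.load(f)

model = load_from_hub(repo_id="HosseinAmir/q_frozenLake-v2", filename="q-learning.pkl")
# As the card warns, FrozenLake's extra attributes must be restated here.
env = gym.make(model["env_id"], is_slippery=False)
```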
LAP0109/taxi-test
LAP0109
2024-03-09T14:03:59Z
0
0
null
[ "Taxi-v3", "q-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
reinforcement-learning
2024-03-09T13:57:42Z
--- tags: - Taxi-v3 - q-learning - reinforcement-learning - custom-implementation model-index: - name: taxi-test results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: Taxi-v3 type: Taxi-v3 metrics: - type: mean_reward value: 7.54 +/- 2.71 name: mean_reward verified: false --- # **Q-Learning** Agent playing **Taxi-v3** This is a trained model of a **Q-Learning** agent playing **Taxi-v3**. ## Usage ```python model = load_from_hub(repo_id="LAP0109/taxi-test", filename="q-learning.pkl") # Don't forget to check if you need to add additional attributes (is_slippery=False etc) env = gym.make(model["env_id"]) ```
mn2110859012/frozen-test
mn2110859012
2024-03-09T13:56:56Z
0
0
null
[ "FrozenLake-v1-4x4-no_slippery", "q-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
reinforcement-learning
2024-03-09T13:56:54Z
--- tags: - FrozenLake-v1-4x4-no_slippery - q-learning - reinforcement-learning - custom-implementation model-index: - name: frozen-test results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: FrozenLake-v1-4x4-no_slippery type: FrozenLake-v1-4x4-no_slippery metrics: - type: mean_reward value: 1.00 +/- 0.00 name: mean_reward verified: false --- # **Q-Learning** Agent playing **FrozenLake-v1** This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**. ## Usage ```python model = load_from_hub(repo_id="mn2110859012/frozen-test", filename="q-learning.pkl") # Don't forget to check if you need to add additional attributes (is_slippery=False etc) env = gym.make(model["env_id"]) ```
OwOOwO/eacc_adhoc1
OwOOwO
2024-03-09T13:56:40Z
91
0
transformers
[ "transformers", "safetensors", "gemma", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-03-09T13:54:37Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
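The "How to Get Started" section of this auto-generated card (and of the identical template cards below) is empty; for a gemma text-generation checkpoint like this one, loading would presumably follow the standard transformers pattern. A sketch, not author-provided:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "OwOOwO/eacc_adhoc1"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

inputs = tokenizer("Hello, world!", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```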
Owhslp/nous_researcher_tuning_2_11
Owhslp
2024-03-09T13:56:16Z
92
0
transformers
[ "transformers", "safetensors", "gemma", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-03-09T13:36:10Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
Josephgflowers/Tinyllama-1.3B-Cinder-Reason-Test-2
Josephgflowers
2024-03-09T13:55:48Z
199
0
transformers
[ "transformers", "safetensors", "gguf", "llama", "text-generation", "license:mit", "model-index", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-02-04T16:31:30Z
--- license: mit model-index: - name: Tinyllama-1.3B-Cinder-Reason-Test-2 results: - task: type: text-generation name: Text Generation dataset: name: AI2 Reasoning Challenge (25-Shot) type: ai2_arc config: ARC-Challenge split: test args: num_few_shot: 25 metrics: - type: acc_norm value: 32.76 name: normalized accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Josephgflowers/Tinyllama-1.3B-Cinder-Reason-Test-2 name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: HellaSwag (10-Shot) type: hellaswag split: validation args: num_few_shot: 10 metrics: - type: acc_norm value: 57.92 name: normalized accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Josephgflowers/Tinyllama-1.3B-Cinder-Reason-Test-2 name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: MMLU (5-Shot) type: cais/mmlu config: all split: test args: num_few_shot: 5 metrics: - type: acc value: 25.42 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Josephgflowers/Tinyllama-1.3B-Cinder-Reason-Test-2 name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: TruthfulQA (0-shot) type: truthful_qa config: multiple_choice split: validation args: num_few_shot: 0 metrics: - type: mc2 value: 37.26 source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Josephgflowers/Tinyllama-1.3B-Cinder-Reason-Test-2 name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: Winogrande (5-shot) type: winogrande config: winogrande_xl split: validation args: num_few_shot: 5 metrics: - type: acc value: 64.8 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Josephgflowers/Tinyllama-1.3B-Cinder-Reason-Test-2 name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: GSM8k (5-shot) type: gsm8k config: main split: test args: num_few_shot: 5 metrics: - type: acc value: 2.81 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Josephgflowers/Tinyllama-1.3B-Cinder-Reason-Test-2 name: Open LLM Leaderboard --- 1.3B test of two Cinder models merged layers 1-22 and 18-22, trained on math and step by step reasoning. Model Overview Cinder is an AI chatbot tailored for engaging users in scientific and educational conversations, offering companionship, and sparking imaginative exploration. It is built on the TinyLlama 1.1B parameter model and trained on a unique combination of datasets. Testing on Reason-with-cinder dataset. # [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard) Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_Josephgflowers__Tinyllama-1.3B-Cinder-Reason-Test-2) | Metric |Value| |---------------------------------|----:| |Avg. |36.83| |AI2 Reasoning Challenge (25-Shot)|32.76| |HellaSwag (10-Shot) |57.92| |MMLU (5-Shot) |25.42| |TruthfulQA (0-shot) |37.26| |Winogrande (5-shot) |64.80| |GSM8k (5-shot) | 2.81|
Josephgflowers/Tinyllama-Cinder-1.3B-Reason-Test
Josephgflowers
2024-03-09T13:54:50Z
172
4
transformers
[ "transformers", "safetensors", "llama", "text-generation", "license:mit", "model-index", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-01-27T13:56:19Z
--- license: mit widget: - text: '<|system|> You are a helpful assistant</s> <|user|> Can you explain to me how quantum computing works?</s> <|assistant|> ' model-index: - name: Tinyllama-Cinder-1.3B-Reason-Test results: - task: type: text-generation name: Text Generation dataset: name: AI2 Reasoning Challenge (25-Shot) type: ai2_arc config: ARC-Challenge split: test args: num_few_shot: 25 metrics: - type: acc_norm value: 34.56 name: normalized accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Josephgflowers/Tinyllama-Cinder-1.3B-Reason-Test name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: HellaSwag (10-Shot) type: hellaswag split: validation args: num_few_shot: 10 metrics: - type: acc_norm value: 58.24 name: normalized accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Josephgflowers/Tinyllama-Cinder-1.3B-Reason-Test name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: MMLU (5-Shot) type: cais/mmlu config: all split: test args: num_few_shot: 5 metrics: - type: acc value: 25.79 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Josephgflowers/Tinyllama-Cinder-1.3B-Reason-Test name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: TruthfulQA (0-shot) type: truthful_qa config: multiple_choice split: validation args: num_few_shot: 0 metrics: - type: mc2 value: 39.93 source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Josephgflowers/Tinyllama-Cinder-1.3B-Reason-Test name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: Winogrande (5-shot) type: winogrande config: winogrande_xl split: validation args: num_few_shot: 5 metrics: - type: acc value: 63.93 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Josephgflowers/Tinyllama-Cinder-1.3B-Reason-Test name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: GSM8k (5-shot) type: gsm8k config: main split: test args: num_few_shot: 5 metrics: - type: acc value: 4.85 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Josephgflowers/Tinyllama-Cinder-1.3B-Reason-Test name: Open LLM Leaderboard --- 1.3B test of two Cinder models merged layers 1-22 and 18-22, trained on math and step by step reasoning. Model Overview Cinder is an AI chatbot tailored for engaging users in scientific and educational conversations, offering companionship, and sparking imaginative exploration. It is built on the TinyLlama 1.1B parameter model and trained on a unique combination of datasets. Testing on Reason-with-cinder dataset. ![image/png](https://cdn-uploads.huggingface.co/production/uploads/6328952f798f8d122ce62a44/obCyZSvfUefEWrOXaeB3o.png) # [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard) Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_Josephgflowers__Tinyllama-Cinder-1.3B-Reason-Test) | Metric |Value| |---------------------------------|----:| |Avg. |37.88| |AI2 Reasoning Challenge (25-Shot)|34.56| |HellaSwag (10-Shot) |58.24| |MMLU (5-Shot) |25.79| |TruthfulQA (0-shot) |39.93| |Winogrande (5-shot) |63.93| |GSM8k (5-shot) | 4.85|
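The widget entry in this card's front matter fixes the prompt format. A generation sketch built on that template (standard transformers calls; nothing here is author-provided beyond the prompt layout, and the newline placement is assumed from the flattened widget text):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Josephgflowers/Tinyllama-Cinder-1.3B-Reason-Test"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# Prompt layout taken from the widget text in the card's front matter.
prompt = (
    "<|system|>\nYou are a helpful assistant</s>\n"
    "<|user|>\nCan you explain to me how quantum computing works?</s>\n"
    "<|assistant|>\n"
)
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```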
nyunai/OpenHathi-7B-Hi-v0.1-Base-LoRA-samvaad-hi-v1-chat-format
nyunai
2024-03-09T13:54:44Z
13
1
transformers
[ "transformers", "safetensors", "llama", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-03-09T13:51:44Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
Josephgflowers/3BigReasonCinder
Josephgflowers
2024-03-09T13:53:43Z
115
0
transformers
[ "transformers", "safetensors", "gguf", "llama", "text-generation", "license:mit", "model-index", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-02-05T14:04:50Z
--- license: mit model-index: - name: 3BigReasonCinder results: - task: type: text-generation name: Text Generation dataset: name: AI2 Reasoning Challenge (25-Shot) type: ai2_arc config: ARC-Challenge split: test args: num_few_shot: 25 metrics: - type: acc_norm value: 41.72 name: normalized accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Josephgflowers/3BigReasonCinder name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: HellaSwag (10-Shot) type: hellaswag split: validation args: num_few_shot: 10 metrics: - type: acc_norm value: 65.16 name: normalized accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Josephgflowers/3BigReasonCinder name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: MMLU (5-Shot) type: cais/mmlu config: all split: test args: num_few_shot: 5 metrics: - type: acc value: 44.79 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Josephgflowers/3BigReasonCinder name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: TruthfulQA (0-shot) type: truthful_qa config: multiple_choice split: validation args: num_few_shot: 0 metrics: - type: mc2 value: 44.76 source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Josephgflowers/3BigReasonCinder name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: Winogrande (5-shot) type: winogrande config: winogrande_xl split: validation args: num_few_shot: 5 metrics: - type: acc value: 64.96 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Josephgflowers/3BigReasonCinder name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: GSM8k (5-shot) type: gsm8k config: main split: test args: num_few_shot: 5 metrics: - type: acc value: 27.6 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Josephgflowers/3BigReasonCinder name: Open LLM Leaderboard --- Not working on Hugging Face for some reason; still looking into it. Downloaded files are working as expected. GGUF files are working and are being re-uploaded. Overview: Cinder is an AI chatbot tailored for engaging users in scientific and educational conversations, offering companionship, and sparking imaginative exploration. It is built on the MiniChat 3B parameter model and trained on a unique combination of datasets. # [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard) Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_Josephgflowers__3BigReasonCinder) | Metric |Value| |---------------------------------|----:| |Avg. |48.16| |AI2 Reasoning Challenge (25-Shot)|41.72| |HellaSwag (10-Shot) |65.16| |MMLU (5-Shot) |44.79| |TruthfulQA (0-shot) |44.76| |Winogrande (5-shot) |64.96| |GSM8k (5-shot) |27.60|
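Since the card flags problems with the Hub-hosted weights but says the GGUF files work, a local llama-cpp-python run may be the most reliable path. A sketch; the .gguf filename is a placeholder, so check the repo's file list:

```python
from llama_cpp import Llama

# Filename is hypothetical; substitute an actual .gguf file from the repo.
llm = Llama(model_path="./3BigReasonCinder.Q4_K_M.gguf", n_ctx=2048)
out = llm("Explain why the sky is blue in two sentences.", max_tokens=128)
print(out["choices"][0]["text"])
```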
yooshijay/qwen1.5-14B_psychat
yooshijay
2024-03-09T13:52:15Z
2
0
peft
[ "peft", "safetensors", "arxiv:1910.09700", "base_model:Qwen/Qwen1.5-14B-Chat", "base_model:adapter:Qwen/Qwen1.5-14B-Chat", "region:us" ]
null
2024-03-09T13:40:59Z
--- library_name: peft base_model: Qwen/Qwen1.5-14B-Chat --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ### Framework versions - PEFT 0.9.1.dev0
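The front matter names Qwen/Qwen1.5-14B-Chat as the base model, so loading this PEFT adapter would presumably follow the standard pattern. A sketch, not taken from the card itself:

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "Qwen/Qwen1.5-14B-Chat"  # from the card's base_model field
tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id, device_map="auto")
model = PeftModel.from_pretrained(base, "yooshijay/qwen1.5-14B_psychat")
```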
Sourabh1407/bert-finetuned-squad
Sourabh1407
2024-03-09T13:50:23Z
91
0
transformers
[ "transformers", "safetensors", "bert", "question-answering", "generated_from_trainer", "base_model:google-bert/bert-base-cased", "base_model:finetune:google-bert/bert-base-cased", "license:apache-2.0", "endpoints_compatible", "region:us" ]
question-answering
2024-03-09T09:51:22Z
--- license: apache-2.0 base_model: bert-base-cased tags: - generated_from_trainer model-index: - name: bert-finetuned-squad results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bert-finetuned-squad This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 - mixed_precision_training: Native AMP ### Training results ### Framework versions - Transformers 4.37.2 - Pytorch 2.2.1 - Datasets 2.17.1 - Tokenizers 0.15.1
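The card gives no usage snippet; for a BERT model fine-tuned for extractive question answering, the pipeline API is the shortest route. A sketch using standard transformers calls:

```python
from transformers import pipeline

qa = pipeline("question-answering", model="Sourabh1407/bert-finetuned-squad")
result = qa(
    question="What was the model fine-tuned from?",
    context="bert-finetuned-squad is a fine-tuned version of bert-base-cased.",
)
print(result["answer"], result["score"])
```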
KlausFreitag/frozen-test
KlausFreitag
2024-03-09T13:50:09Z
0
0
null
[ "FrozenLake-v1-4x4-no_slippery", "q-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
reinforcement-learning
2024-03-09T13:50:07Z
--- tags: - FrozenLake-v1-4x4-no_slippery - q-learning - reinforcement-learning - custom-implementation model-index: - name: frozen-test results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: FrozenLake-v1-4x4-no_slippery type: FrozenLake-v1-4x4-no_slippery metrics: - type: mean_reward value: 1.00 +/- 0.00 name: mean_reward verified: false --- # **Q-Learning** Agent playing **FrozenLake-v1** This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**. ## Usage ```python model = load_from_hub(repo_id="KlausFreitag/frozen-test", filename="q-learning.pkl") # Don't forget to check if you need to add additional attributes (is_slippery=False etc) env = gym.make(model["env_id"]) ```
ArtoriasXV/results
ArtoriasXV
2024-03-09T13:49:03Z
174
0
transformers
[ "transformers", "pytorch", "gpt2", "text-classification", "generated_from_trainer", "dataset:glue", "license:mit", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-classification
2024-03-06T11:31:34Z
--- license: mit tags: - generated_from_trainer datasets: - glue model-index: - name: results results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # results This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on the glue dataset. It achieves the following results on the evaluation set: - Loss: 0.2365 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - num_epochs: 1 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 0.1764 | 1.0 | 2105 | 0.2365 | ### Framework versions - Transformers 4.16.2 - Pytorch 2.1.0+cu121 - Datasets 1.16.1 - Tokenizers 0.15.2
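The card reports only the GLUE validation loss; to probe the classifier directly, one could use the text-classification pipeline. A sketch; the GLUE subtask and label meanings are not stated in the card:

```python
from transformers import pipeline

clf = pipeline("text-classification", model="ArtoriasXV/results")
# Label semantics depend on which GLUE subtask was used, which the card does not state.
print(clf("This sentence is a quick smoke test."))
```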
frankmorales2020/OpenMath-Mistral-7B-v0.1-hf-dialogsum-test-flash-attention-2
frankmorales2020
2024-03-09T13:45:54Z
2
0
peft
[ "peft", "tensorboard", "safetensors", "trl", "sft", "generated_from_trainer", "dataset:generator", "base_model:nvidia/OpenMath-Mistral-7B-v0.1-hf", "base_model:adapter:nvidia/OpenMath-Mistral-7B-v0.1-hf", "license:apache-2.0", "region:us" ]
null
2024-03-09T05:38:46Z
--- license: apache-2.0 library_name: peft tags: - trl - sft - generated_from_trainer datasets: - generator base_model: nvidia/OpenMath-Mistral-7B-v0.1-hf model-index: - name: OpenMath-Mistral-7B-v0.1-hf-dialogsum-test-flash-attention-2 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # OpenMath-Mistral-7B-v0.1-hf-dialogsum-test-flash-attention-2 This model is a fine-tuned version of [nvidia/OpenMath-Mistral-7B-v0.1-hf](https://huggingface.co/nvidia/OpenMath-Mistral-7B-v0.1-hf) on the generator dataset, using the dataset and setup below: # Load dataset from the hub from datasets import load_dataset huggingface_dataset_name = "neil-code/dialogsum-test" dataset = load_dataset(huggingface_dataset_name) # print(dataset) import torch from transformers import AutoTokenizer, AutoModelForCausalLM, BitsAndBytesConfig from trl import setup_chat_format # Hugging Face model id model_id = "nvidia/OpenMath-Mistral-7B-v0.1-hf" # BitsAndBytesConfig int-4 config bnb_config = BitsAndBytesConfig( load_in_4bit=True, bnb_4bit_use_double_quant=True, bnb_4bit_quant_type="nf4", bnb_4bit_compute_dtype=torch.bfloat16 ) # Load model and tokenizer model = AutoModelForCausalLM.from_pretrained( model_id, device_map="auto", attn_implementation="flash_attention_2", torch_dtype=torch.bfloat16, quantization_config=bnb_config ) tokenizer = AutoTokenizer.from_pretrained(model_id, use_fast=True) tokenizer.padding_side = 'right' # to prevent warnings # We redefine the pad_token and pad_token_id with the out-of-vocabulary token (unk_token) tokenizer.pad_token = tokenizer.unk_token tokenizer.pad_token_id = tokenizer.unk_token_id # set chat template to OAI chatML; remove if you start from a fine-tuned model model, tokenizer = setup_chat_format(model, tokenizer) text = "What is the capital of India?" device = 'cuda' model_inputs = tokenizer(text, return_tensors="pt").to(model.device) generated_ids = model.generate(**model_inputs, temperature=0.1, top_k=1, top_p=1.0, repetition_penalty=1.4, min_new_tokens=16, max_new_tokens=128, do_sample=True) decoded = tokenizer.decode(generated_ids[0]) print(decoded) # MODEL GENERATION - AFTER TUNING index=10 dataset = dataset_dialogsum_test TUNE_model = model prompt = dataset[index]['dialogue'] summary = dataset[index]['summary'] formatted_prompt = f"Instruct: Summarize the following conversation.\n{prompt}\nOutput:\n" res = gen_after_tunning(TUNE_model, formatted_prompt, 1024) # print(res[0]) output = res[0].split('Output:\n')[1] dash_line = '-'.join('' for x in range(100)) print(dash_line) print(f'INPUT PROMPT:\n{formatted_prompt}') print(dash_line) print(f'BASELINE HUMAN SUMMARY:\n{summary}\n') print(dash_line) print(f'MODEL GENERATION - AFTER THE TUNING:\n{output}') --------------------------------------------------------------------------------------------------- # INPUT PROMPT: Instruct: Summarize the following conversation. #Person1#: Could you do me a favor? #Person2#: Sure. What is it? #Person1#: Could you run over to the store? We need a few things. #Person2#: All right. What do you want me to get? #Person1#: Well, could you pick up some sugar? #Person2#: Okay. How much? #Person1#: A small bag. I guess we also need a few oranges. #Person2#: How many? #Person1#: Oh, let's see. . . About six. #Person2#: Anything else? #Person1#: Yes. We're out of milk. #Person2#: Okay. How much do you want me to get? A gallon? #Person1#: No. I think a half gallon will be enough. #Person2#: Is that all? #Person1#: I think so. 
Have you got all that? #Person2#: Yes. That's small bag of sugar, four oranges, and a half gallon of milk. #Person1#: Do you have enough money? #Person2#: I think so. #Person1#: Thanks very much. I appreciate it. Output: --------------------------------------------------------------------------------------------------- # BASELINE HUMAN SUMMARY: #Person1# asks #Person2# to do a favor. #Person2# agrees and helps buy a small bag of sugar, six oranges, and a half-gallon of milk. --------------------------------------------------------------------------------------------------- # MODEL GENERATION - AFTER THE TUNING: #Person1# asks #Person2# to run over to the store to pick up some sugar, oranges, and milk. #Person2# thinks he has got all that. ### ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 3 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 6 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: constant - lr_scheduler_warmup_ratio: 0.03 - num_epochs: 3 ### Training results ### Framework versions - PEFT 0.9.0 - Transformers 4.38.2 - Pytorch 2.1.0+cu121 - Datasets 2.18.0 - Tokenizers 0.15.2
aixsatoshi/Mixtral-8x7B-Lora-cosmopedia-japanese20k
aixsatoshi
2024-03-09T13:39:55Z
0
1
null
[ "safetensors", "ja", "en", "dataset:aixsatoshi/cosmopedia-japanese-20k", "license:apache-2.0", "region:us" ]
null
2024-03-09T12:45:51Z
--- license: apache-2.0 datasets: - aixsatoshi/cosmopedia-japanese-20k language: - ja - en --- **Purpose** This is a LoRA for using the high-performance Mixtral 8x7B-Instruct in Japanese. It was trained on 20k examples of Cosmopedia translated into Japanese: https://huggingface.co/datasets/aixsatoshi/cosmopedia-japanese-20k Cosmopedia is data generated by Mixtral itself, so it condenses the knowledge and reasoning that Mixtral's English capability provides. Training on this data aims to surface the model's native performance in Japanese. **Performance** Subjectively, real-world performance feels better than the previous tuning on a calm2-generated synthetic dataset: https://huggingface.co/aixsatoshi/Mixtral-8x7B-ja-Lora-sft-ChatbotArenaJAcalm2 **Limitations** Being LoRA training, it has limits; unnatural Japanese can still appear. Continued pretraining in Japanese would likely enable higher-quality next-token prediction.
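A loading sketch for this adapter; the base checkpoint is assumed to be Mixtral-8x7B-Instruct-v0.1 from the card's description, so verify against the adapter config:

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

# Assumed base; the card only says "Mixtral 8x7B-Instruct".
base_id = "mistralai/Mixtral-8x7B-Instruct-v0.1"
tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id, device_map="auto", load_in_4bit=True)
model = PeftModel.from_pretrained(base, "aixsatoshi/Mixtral-8x7B-Lora-cosmopedia-japanese20k")
```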
JUL77/dancingcapybaras
JUL77
2024-03-09T13:38:45Z
0
0
null
[ "arxiv:1910.09700", "license:artistic-2.0", "region:us" ]
null
2024-03-09T13:37:44Z
--- license: artistic-2.0 --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> This modelcard aims to be a base template for new models. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/modelcard_template.md?plain=1). ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
LAP0109/frozen-test
LAP0109
2024-03-09T13:34:46Z
0
0
null
[ "FrozenLake-v1-4x4-no_slippery", "q-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
reinforcement-learning
2024-03-09T13:34:44Z
--- tags: - FrozenLake-v1-4x4-no_slippery - q-learning - reinforcement-learning - custom-implementation model-index: - name: frozen-test results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: FrozenLake-v1-4x4-no_slippery type: FrozenLake-v1-4x4-no_slippery metrics: - type: mean_reward value: 1.00 +/- 0.00 name: mean_reward verified: false --- # **Q-Learning** Agent playing **FrozenLake-v1** This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**. ## Usage ```python model = load_from_hub(repo_id="LAP0109/frozen-test", filename="q-learning.pkl") # Don't forget to check if you need to add additional attributes (is_slippery=False etc) env = gym.make(model["env_id"]) ```
gtostes/q-FrozenLake-v1-4x4-noSlippery
gtostes
2024-03-09T13:34:28Z
0
0
null
[ "FrozenLake-v1-4x4-no_slippery", "q-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
reinforcement-learning
2024-03-09T13:34:26Z
--- tags: - FrozenLake-v1-4x4-no_slippery - q-learning - reinforcement-learning - custom-implementation model-index: - name: q-FrozenLake-v1-4x4-noSlippery results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: FrozenLake-v1-4x4-no_slippery type: FrozenLake-v1-4x4-no_slippery metrics: - type: mean_reward value: 1.00 +/- 0.00 name: mean_reward verified: false --- # **Q-Learning** Agent playing **FrozenLake-v1** This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**. ## Usage ```python model = load_from_hub(repo_id="gtostes/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl") # Don't forget to check if you need to add additional attributes (is_slippery=False etc) env = gym.make(model["env_id"]) ```
hakankenar/gemma-2b-it-4bit-finetune-test
hakankenar
2024-03-09T13:28:23Z
91
0
transformers
[ "transformers", "safetensors", "gemma", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-03-09T13:21:49Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
Ashishkr/Gemma-7B-COT-LoRA-1024R
Ashishkr
2024-03-09T13:26:03Z
0
0
transformers
[ "transformers", "safetensors", "text-generation-inference", "unsloth", "gemma", "trl", "en", "base_model:unsloth/gemma-7b-bnb-4bit", "base_model:finetune:unsloth/gemma-7b-bnb-4bit", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2024-03-09T12:23:11Z
--- language: - en license: apache-2.0 tags: - text-generation-inference - transformers - unsloth - gemma - trl base_model: unsloth/gemma-7b-bnb-4bit --- Example inference with [unsloth](https://github.com/unslothai/unsloth); the `max_seq_length`, `dtype`, and `load_in_4bit` values and the Alpaca-style prompt template below are assumed defaults, so adjust them to your setup. ```python from unsloth import FastLanguageModel import torch alpaca_prompt = "Below is an instruction that describes a task, paired with an input that provides further context. Write a response that appropriately completes the request.\n\n### Instruction:\n{}\n\n### Input:\n{}\n\n### Response:\n{}" model, tokenizer = FastLanguageModel.from_pretrained( model_name = "Ashishkr/Gemma-7B-COT-LoRA-1024R", max_seq_length = 2048, dtype = None, load_in_4bit = True, ) FastLanguageModel.for_inference(model) inputs = tokenizer( [ alpaca_prompt.format( "given a number identify if it is an even number, if its even, square it and add 5 to it. if its odd , get me the square root of the number", # instruction "12", # input "", # output - leave this blank for generation! ) ], return_tensors = "pt").to("cuda") outputs = model.generate(**inputs, max_new_tokens = 2048, use_cache = True) tokenizer.batch_decode(outputs) ```
LostPersona/q-FrozenLake-v1-4x4-noSlippery
LostPersona
2024-03-09T13:25:49Z
0
0
null
[ "FrozenLake-v1-4x4-no_slippery", "q-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
reinforcement-learning
2024-03-09T13:19:36Z
--- tags: - FrozenLake-v1-4x4-no_slippery - q-learning - reinforcement-learning - custom-implementation model-index: - name: q-FrozenLake-v1-4x4-noSlippery results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: FrozenLake-v1-4x4-no_slippery type: FrozenLake-v1-4x4-no_slippery metrics: - type: mean_reward value: 1.00 +/- 0.00 name: mean_reward verified: false --- # **Q-Learning** Agent playing **FrozenLake-v1** This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
OwOOwO/eacc_yc1
OwOOwO
2024-03-09T13:20:42Z
91
0
transformers
[ "transformers", "safetensors", "gemma", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-03-09T13:18:30Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
arcee-ai/Hermes-Mistral-Legal-Slerp
arcee-ai
2024-03-09T13:10:30Z
15
0
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "merge", "mergekit", "NousResearch/Nous-Hermes-2-Mistral-7B-DPO", "mistralai/Mistral-7B-v0.1+predibase/legal", "conversational", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-03-07T16:46:00Z
--- license: apache-2.0 tags: - merge - mergekit - NousResearch/Nous-Hermes-2-Mistral-7B-DPO - mistralai/Mistral-7B-v0.1+predibase/legal --- # Hermes-Mistral-Legal-Slerp Hermes-Mistral-Legal-Slerp is a merge of the following models using [mergekit](https://github.com/cg123/mergekit): * [NousResearch/Nous-Hermes-2-Mistral-7B-DPO](https://huggingface.co/NousResearch/Nous-Hermes-2-Mistral-7B-DPO) * [mistralai/Mistral-7B-v0.1+predibase/legal](https://huggingface.co/mistralai/Mistral-7B-v0.1+predibase/legal) ## 🧩 Configuration ```yaml slices: - sources: - model: NousResearch/Nous-Hermes-2-Mistral-7B-DPO layer_range: [0, 32] - model: mistralai/Mistral-7B-v0.1+predibase/legal layer_range: [0, 32] merge_method: slerp base_model: NousResearch/Nous-Hermes-2-Mistral-7B-DPO parameters: t: - filter: self_attn value: [0, 0.5, 0.3, 0.7, 1] - filter: mlp value: [1, 0.5, 0.7, 0.3, 0] - value: 0.5 dtype: bfloat16 ```
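To try the merged checkpoint, a minimal transformers sketch (generation settings are illustrative, and `device_map="auto"` assumes `accelerate` is installed):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the merged checkpoint like any causal LM on the Hub
tokenizer = AutoTokenizer.from_pretrained("arcee-ai/Hermes-Mistral-Legal-Slerp")
model = AutoModelForCausalLM.from_pretrained(
    "arcee-ai/Hermes-Mistral-Legal-Slerp",
    torch_dtype="auto",  # picks up the checkpoint dtype (bfloat16 here)
    device_map="auto",   # requires `accelerate`
)

prompt = "Summarize the key obligations in a mutual NDA."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```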
naver-clova-ix/donut-base-finetuned-rvlcdip
naver-clova-ix
2024-03-09T13:02:19Z
3,716
13
transformers
[ "transformers", "pytorch", "vision-encoder-decoder", "image-text-to-text", "donut", "image-to-text", "vision", "arxiv:2111.15664", "license:mit", "endpoints_compatible", "region:us" ]
image-to-text
2022-07-19T13:53:57Z
--- license: mit tags: - donut - image-to-text - vision --- # Donut (base-sized model, fine-tuned on RVL-CDIP) Donut model fine-tuned on RVL-CDIP. It was introduced in the paper [OCR-free Document Understanding Transformer](https://arxiv.org/abs/2111.15664) by Geewook Kim et al. and first released in [this repository](https://github.com/clovaai/donut). Disclaimer: The team releasing Donut did not write a model card for this model, so this model card has been written by the Hugging Face team. ## Model description Donut consists of a vision encoder (Swin Transformer) and a text decoder (BART). Given an image, the encoder first encodes the image into a tensor of embeddings (of shape batch_size, seq_len, hidden_size), after which the decoder autoregressively generates text, conditioned on the encoding of the encoder. ![model image](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/model_doc/donut_architecture.jpg) ## Intended uses & limitations This model is fine-tuned on RVL-CDIP, a document image classification dataset. We refer to the [documentation](https://huggingface.co/docs/transformers/main/en/model_doc/donut) which includes code examples. ### BibTeX entry and citation info ```bibtex @article{DBLP:journals/corr/abs-2111-15664, author = {Geewook Kim and Teakgyu Hong and Moonbin Yim and Jinyoung Park and Jinyeong Yim and Wonseok Hwang and Sangdoo Yun and Dongyoon Han and Seunghyun Park}, title = {Donut: Document Understanding Transformer without {OCR}}, journal = {CoRR}, volume = {abs/2111.15664}, year = {2021}, url = {https://arxiv.org/abs/2111.15664}, eprinttype = {arXiv}, eprint = {2111.15664}, timestamp = {Thu, 02 Dec 2021 10:50:44 +0100}, biburl = {https://dblp.org/rec/journals/corr/abs-2111-15664.bib}, bibsource = {dblp computer science bibliography, https://dblp.org} } ```
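The linked documentation has a full example; a condensed document-classification sketch along those lines (the `<s_rvlcdip>` task prompt follows the documented RVL-CDIP usage, and the image path is a placeholder):

```python
import re
import torch
from PIL import Image
from transformers import DonutProcessor, VisionEncoderDecoderModel

processor = DonutProcessor.from_pretrained("naver-clova-ix/donut-base-finetuned-rvlcdip")
model = VisionEncoderDecoderModel.from_pretrained("naver-clova-ix/donut-base-finetuned-rvlcdip")
device = "cuda" if torch.cuda.is_available() else "cpu"
model.to(device)

image = Image.open("document.png").convert("RGB")  # any scanned-document image

# Encode the image; decoding starts from the RVL-CDIP task prompt
pixel_values = processor(image, return_tensors="pt").pixel_values
decoder_input_ids = processor.tokenizer(
    "<s_rvlcdip>", add_special_tokens=False, return_tensors="pt"
).input_ids

outputs = model.generate(
    pixel_values.to(device),
    decoder_input_ids=decoder_input_ids.to(device),
    max_length=model.decoder.config.max_position_embeddings,
    pad_token_id=processor.tokenizer.pad_token_id,
    eos_token_id=processor.tokenizer.eos_token_id,
    use_cache=True,
    bad_words_ids=[[processor.tokenizer.unk_token_id]],
    return_dict_in_generate=True,
)

sequence = processor.batch_decode(outputs.sequences)[0]
sequence = sequence.replace(processor.tokenizer.eos_token, "").replace(processor.tokenizer.pad_token, "")
sequence = re.sub(r"<.*?>", "", sequence, count=1).strip()  # drop the task start token
print(processor.token2json(sequence))  # e.g. {'class': 'advertisement'}
```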
JFEI/rl-test
JFEI
2024-03-09T13:00:01Z
0
0
stable-baselines3
[ "stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2024-03-09T10:42:45Z
--- library_name: stable-baselines3 tags: - LunarLander-v2 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: PPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 metrics: - type: mean_reward value: 235.52 +/- 62.81 name: mean_reward verified: false --- # **PPO** Agent playing **LunarLander-v2** This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) A minimal loading sketch; the checkpoint filename is an assumption (the course default), so check the repo's file list: ```python from stable_baselines3 import PPO from huggingface_sb3 import load_from_hub checkpoint = load_from_hub(repo_id="JFEI/rl-test", filename="ppo-LunarLander-v2.zip") model = PPO.load(checkpoint) ```
meetplace1/70b-eurfinetune-awq
meetplace1
2024-03-09T12:59:29Z
8
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "4-bit", "awq", "region:us" ]
text-generation
2024-03-09T12:50:33Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
XavierScor/q-Taxi-v3
XavierScor
2024-03-09T12:44:24Z
0
0
null
[ "Taxi-v3", "q-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
reinforcement-learning
2024-03-09T12:44:22Z
--- tags: - Taxi-v3 - q-learning - reinforcement-learning - custom-implementation model-index: - name: q-Taxi-v3 results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: Taxi-v3 type: Taxi-v3 metrics: - type: mean_reward value: 7.52 +/- 2.75 name: mean_reward verified: false --- # **Q-Learning** Agent playing **Taxi-v3** This is a trained model of a **Q-Learning** agent playing **Taxi-v3**. ## Usage ```python model = load_from_hub(repo_id="XavierScor/q-Taxi-v3", filename="q-learning.pkl") # Don't forget to check if you need to add additional attributes (is_slippery=False etc) env = gym.make(model["env_id"]) ```
XavierScor/q-FrozenLake-v1-4x4-noSlippery
XavierScor
2024-03-09T12:41:56Z
0
0
null
[ "FrozenLake-v1-4x4-no_slippery", "q-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
reinforcement-learning
2024-03-09T12:41:53Z
--- tags: - FrozenLake-v1-4x4-no_slippery - q-learning - reinforcement-learning - custom-implementation model-index: - name: q-FrozenLake-v1-4x4-noSlippery results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: FrozenLake-v1-4x4-no_slippery type: FrozenLake-v1-4x4-no_slippery metrics: - type: mean_reward value: 1.00 +/- 0.00 name: mean_reward verified: false --- # **Q-Learning** Agent playing **FrozenLake-v1** This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**. ## Usage ```python model = load_from_hub(repo_id="XavierScor/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl") # Don't forget to check if you need to add additional attributes (is_slippery=False etc) env = gym.make(model["env_id"]) ```
HosseinAmir/LunarLander-v2
HosseinAmir
2024-03-09T12:38:43Z
0
0
stable-baselines3
[ "stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2024-03-09T11:21:53Z
--- library_name: stable-baselines3 tags: - LunarLander-v2 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: PPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 metrics: - type: mean_reward value: 231.02 +/- 81.93 name: mean_reward verified: false --- # **PPO** Agent playing **LunarLander-v2** This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) A minimal loading sketch; the checkpoint filename is an assumption (the course default), so check the repo's file list: ```python from stable_baselines3 import PPO from huggingface_sb3 import load_from_hub checkpoint = load_from_hub(repo_id="HosseinAmir/LunarLander-v2", filename="ppo-LunarLander-v2.zip") model = PPO.load(checkpoint) ```
MrPio/q-Taxi-v3
MrPio
2024-03-09T12:36:49Z
0
0
null
[ "Taxi-v3", "q-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
reinforcement-learning
2024-03-09T12:36:47Z
--- tags: - Taxi-v3 - q-learning - reinforcement-learning - custom-implementation model-index: - name: q-Taxi-v3 results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: Taxi-v3 type: Taxi-v3 metrics: - type: mean_reward value: 7.52 +/- 2.67 name: mean_reward verified: false --- # **Q-Learning** Agent playing **Taxi-v3** This is a trained model of a **Q-Learning** agent playing **Taxi-v3**. ## Usage ```python model = load_from_hub(repo_id="MrPio/q-Taxi-v3", filename="q-learning.pkl") # Don't forget to check if you need to add additional attributes (is_slippery=False etc) env = gym.make(model["env_id"]) ```
MrPio/q-FrozenLake-v1-4x4-noSlippery
MrPio
2024-03-09T12:34:02Z
0
0
null
[ "FrozenLake-v1-4x4-no_slippery", "q-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
reinforcement-learning
2024-03-09T12:34:00Z
--- tags: - FrozenLake-v1-4x4-no_slippery - q-learning - reinforcement-learning - custom-implementation model-index: - name: q-FrozenLake-v1-4x4-noSlippery results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: FrozenLake-v1-4x4-no_slippery type: FrozenLake-v1-4x4-no_slippery metrics: - type: mean_reward value: 1.00 +/- 0.00 name: mean_reward verified: false --- # **Q-Learning** Agent playing **FrozenLake-v1** This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**. ## Usage ```python model = load_from_hub(repo_id="MrPio/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl") # Don't forget to check if you need to add additional attributes (is_slippery=False etc) env = gym.make(model["env_id"]) ```
mazsibazsi/Taxi-v3-ATT
mazsibazsi
2024-03-09T12:33:27Z
0
0
null
[ "Taxi-v3", "q-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
reinforcement-learning
2024-03-09T12:32:07Z
--- tags: - Taxi-v3 - q-learning - reinforcement-learning - custom-implementation model-index: - name: Taxi-v3-ATT results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: Taxi-v3 type: Taxi-v3 metrics: - type: mean_reward value: 7.52 +/- 2.76 name: mean_reward verified: false --- # **Q-Learning** Agent playing **Taxi-v3** This is a trained model of a **Q-Learning** agent playing **Taxi-v3**. ## Usage ```python model = load_from_hub(repo_id="mazsibazsi/Taxi-v3-ATT", filename="q-learning.pkl") # Don't forget to check if you need to add additional attributes (is_slippery=False etc) env = gym.make(model["env_id"]) ```
h4g3n/website-content-multirank
h4g3n
2024-03-09T12:33:22Z
25
0
transformers
[ "transformers", "safetensors", "sentence_transformer_multi_classification", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2024-03-09T12:19:18Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
Croolch/dqn-SpaceInvadersNoFrameskip-v4
Croolch
2024-03-09T12:31:51Z
6
0
stable-baselines3
[ "stable-baselines3", "SpaceInvadersNoFrameskip-v4", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2024-03-09T12:03:45Z
--- library_name: stable-baselines3 tags: - SpaceInvadersNoFrameskip-v4 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: DQN results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: SpaceInvadersNoFrameskip-v4 type: SpaceInvadersNoFrameskip-v4 metrics: - type: mean_reward value: 669.00 +/- 247.61 name: mean_reward verified: false --- # **DQN** Agent playing **SpaceInvadersNoFrameskip-v4** This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3) and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo). The RL Zoo is a training framework for Stable Baselines3 reinforcement learning agents, with hyperparameter optimization and pre-trained agents included. ## Usage (with SB3 RL Zoo) RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/> SB3: https://github.com/DLR-RM/stable-baselines3<br/> SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib Install the RL Zoo (with SB3 and SB3-Contrib): ```bash pip install rl_zoo3 ``` ``` # Download model and save it into the logs/ folder python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga Croolch -f logs/ python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ ``` If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do: ``` python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga Croolch -f logs/ python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ ``` ## Training (with the RL Zoo) ``` python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ # Upload the model and generate video (when possible) python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga Croolch ``` ## Hyperparameters ```python OrderedDict([('batch_size', 32), ('buffer_size', 100000), ('env_wrapper', ['stable_baselines3.common.atari_wrappers.AtariWrapper']), ('exploration_final_eps', 0.01), ('exploration_fraction', 0.1), ('frame_stack', 4), ('gradient_steps', 1), ('learning_rate', 0.0001), ('learning_starts', 100000), ('n_timesteps', 1000000.0), ('optimize_memory_usage', False), ('policy', 'CnnPolicy'), ('target_update_interval', 1000), ('train_freq', 4), ('normalize', False)]) ``` # Environment Arguments ```python {'render_mode': 'rgb_array'} ```
rensendata/distilbert-base-uncased-finetuned-sst2
rensendata
2024-03-09T12:20:29Z
92
0
transformers
[ "transformers", "pytorch", "tensorboard", "distilbert", "text-classification", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2024-03-08T19:46:26Z
--- license: apache-2.0 tags: - generated_from_trainer metrics: - accuracy - f1 model-index: - name: distilbert-base-uncased-finetuned-sst2 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-sst2 This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.2836 - Accuracy: 0.8962 - F1: 0.8970 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results ### Framework versions - Transformers 4.16.2 - Pytorch 2.1.0+cu121 - Datasets 1.16.1 - Tokenizers 0.15.2
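Not part of the auto-generated text above, but a quick way to try the classifier (a sketch; the label names depend on the exported config, so inspect the output):

```python
from transformers import pipeline

clf = pipeline("text-classification", model="rensendata/distilbert-base-uncased-finetuned-sst2")
print(clf("A touching and beautifully acted film."))
# e.g. [{'label': 'LABEL_1', 'score': 0.98}]; the label ids map to negative/positive
```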
mazsibazsi/q-FrozenLake-v1-4x4-noSlippery
mazsibazsi
2024-03-09T12:19:47Z
0
0
null
[ "FrozenLake-v1-4x4", "q-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
reinforcement-learning
2024-03-09T12:19:45Z
--- tags: - FrozenLake-v1-4x4 - q-learning - reinforcement-learning - custom-implementation model-index: - name: q-FrozenLake-v1-4x4-noSlippery results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: FrozenLake-v1-4x4 type: FrozenLake-v1-4x4 metrics: - type: mean_reward value: 0.44 +/- 0.50 name: mean_reward verified: false --- # **Q-Learning** Agent playing **FrozenLake-v1** This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**. ## Usage ```python model = load_from_hub(repo_id="mazsibazsi/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl") # Don't forget to check if you need to add additional attributes (is_slippery=False etc) env = gym.make(model["env_id"]) ```
AliRL/q-FrozenLake-v1-4x4-noSlippery
AliRL
2024-03-09T12:19:23Z
0
0
null
[ "FrozenLake-v1-4x4-no_slippery", "q-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
reinforcement-learning
2024-03-09T12:19:21Z
--- tags: - FrozenLake-v1-4x4-no_slippery - q-learning - reinforcement-learning - custom-implementation model-index: - name: q-FrozenLake-v1-4x4-noSlippery results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: FrozenLake-v1-4x4-no_slippery type: FrozenLake-v1-4x4-no_slippery metrics: - type: mean_reward value: 1.00 +/- 0.00 name: mean_reward verified: false --- # **Q-Learning** Agent playing **FrozenLake-v1** This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**. ## Usage ```python model = load_from_hub(repo_id="AliRL/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl") # Don't forget to check if you need to add additional attributes (is_slippery=False etc) env = gym.make(model["env_id"]) ```
alinerodrigues/wav2vec2-large-xlsr-mecita-coraa-portuguese-all-grade-3-5
alinerodrigues
2024-03-09T12:16:46Z
1
0
transformers
[ "transformers", "pytorch", "wav2vec2", "automatic-speech-recognition", "generated_from_trainer", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2024-03-09T08:31:46Z
--- license: apache-2.0 tags: - generated_from_trainer metrics: - wer model-index: - name: wav2vec2-large-xlsr-mecita-coraa-portuguese-all-grade-3-5 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # wav2vec2-large-xlsr-mecita-coraa-portuguese-all-grade-3-5 This model is a fine-tuned version of [Edresson/wav2vec2-large-xlsr-coraa-portuguese](https://huggingface.co/Edresson/wav2vec2-large-xlsr-coraa-portuguese) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 2.9813 - Wer: 0.9386 - Cer: 0.8406 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 3e-05 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 32 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 100 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | Cer | |:-------------:|:-----:|:----:|:---------------:|:------:|:------:| | 20.9482 | 0.99 | 38 | 5.3779 | 0.9701 | 0.9879 | | 20.9482 | 2.0 | 77 | 4.5150 | 0.9701 | 0.9879 | | 7.5255 | 2.99 | 115 | 3.7719 | 0.9532 | 0.9216 | | 7.5255 | 4.0 | 154 | 3.6784 | 0.9592 | 0.9660 | | 7.5255 | 4.99 | 192 | 3.3342 | 0.9645 | 0.9736 | | 3.9551 | 6.0 | 231 | 3.2764 | 0.9673 | 0.9817 | | 3.9551 | 6.99 | 269 | 3.2196 | 0.9661 | 0.9801 | | 3.4772 | 8.0 | 308 | 3.1564 | 0.9750 | 0.9856 | | 3.4772 | 8.99 | 346 | 3.1536 | 0.9737 | 0.9860 | | 3.4772 | 10.0 | 385 | 3.1686 | 0.9737 | 0.9865 | | 3.3505 | 10.99 | 423 | 3.1647 | 0.9725 | 0.9869 | | 3.3505 | 12.0 | 462 | 3.1106 | 0.9733 | 0.9868 | | 3.2975 | 12.99 | 500 | 3.0694 | 0.9713 | 0.9874 | | 3.2975 | 14.0 | 539 | 3.2333 | 0.9713 | 0.9875 | | 3.2975 | 14.99 | 577 | 3.0627 | 0.9729 | 0.9871 | | 3.2676 | 16.0 | 616 | 3.0921 | 0.9754 | 0.9861 | | 3.2676 | 16.99 | 654 | 3.0783 | 0.9701 | 0.9869 | | 3.2676 | 18.0 | 693 | 3.0998 | 0.9709 | 0.9873 | | 3.1962 | 18.99 | 731 | 3.0617 | 0.9701 | 0.9867 | | 3.1962 | 20.0 | 770 | 3.0974 | 0.9709 | 0.9873 | | 3.1405 | 20.99 | 808 | 3.1104 | 0.9717 | 0.9874 | | 3.1405 | 22.0 | 847 | 3.0902 | 0.9713 | 0.9874 | | 3.1405 | 22.99 | 885 | 3.0906 | 0.9693 | 0.9824 | | 3.1302 | 24.0 | 924 | 3.0745 | 0.9693 | 0.9865 | | 3.1302 | 24.99 | 962 | 3.1056 | 0.9701 | 0.9801 | | 3.1904 | 26.0 | 1001 | 3.0729 | 0.9681 | 0.9811 | | 3.1904 | 26.99 | 1039 | 3.0827 | 0.9689 | 0.9811 | | 3.1904 | 28.0 | 1078 | 3.0646 | 0.9770 | 0.9771 | | 3.0735 | 28.99 | 1116 | 3.1207 | 0.9729 | 0.9848 | | 3.0735 | 30.0 | 1155 | 3.0725 | 0.9717 | 0.9756 | | 3.0735 | 30.99 | 1193 | 3.0353 | 0.9701 | 0.9685 | | 3.0537 | 32.0 | 1232 | 3.0629 | 0.9717 | 0.9638 | | 3.0537 | 32.99 | 1270 | 3.0619 | 0.9673 | 0.9620 | | 2.9282 | 34.0 | 1309 | 3.0796 | 0.9742 | 0.9710 | | 2.9282 | 34.99 | 1347 | 3.0677 | 0.9729 | 0.9680 | | 2.9282 | 36.0 | 1386 | 3.0556 | 0.9729 | 0.9646 | | 2.9434 | 36.99 | 1424 | 3.0625 | 0.9721 | 0.9659 | | 2.9434 | 38.0 | 1463 | 3.0555 | 0.9657 | 0.9674 | | 2.8959 | 38.99 | 1501 | 3.0337 | 0.9661 | 0.9523 | | 2.8959 | 40.0 | 1540 | 3.0584 | 0.9681 | 0.9635 | | 2.8959 | 40.99 | 1578 | 3.0643 | 0.9632 | 0.9487 | | 2.8647 | 42.0 | 1617 | 3.0505 | 0.9624 | 
0.9541 | | 2.8647 | 42.99 | 1655 | 3.0589 | 0.9592 | 0.9462 | | 2.8647 | 44.0 | 1694 | 3.0365 | 0.9641 | 0.9528 | | 2.8642 | 44.99 | 1732 | 3.0274 | 0.9645 | 0.9451 | | 2.8642 | 46.0 | 1771 | 3.0716 | 0.9584 | 0.9405 | | 2.8135 | 46.99 | 1809 | 3.0764 | 0.9588 | 0.9364 | | 2.8135 | 48.0 | 1848 | 3.0602 | 0.9572 | 0.9355 | | 2.8135 | 48.99 | 1886 | 3.0298 | 0.9616 | 0.9436 | | 2.7982 | 50.0 | 1925 | 3.0592 | 0.9592 | 0.9382 | | 2.7982 | 50.99 | 1963 | 3.0370 | 0.9568 | 0.9310 | | 2.7942 | 52.0 | 2002 | 3.0291 | 0.9572 | 0.9336 | | 2.7942 | 52.99 | 2040 | 3.0463 | 0.9564 | 0.9372 | | 2.7942 | 54.0 | 2079 | 3.0263 | 0.9576 | 0.9318 | | 2.8119 | 54.99 | 2117 | 3.0487 | 0.9576 | 0.9426 | | 2.8119 | 56.0 | 2156 | 3.0183 | 0.9536 | 0.9329 | | 2.8119 | 56.99 | 2194 | 3.0266 | 0.9527 | 0.9221 | | 2.7788 | 58.0 | 2233 | 3.0356 | 0.9548 | 0.9238 | | 2.7788 | 58.99 | 2271 | 3.0506 | 0.9495 | 0.9132 | | 2.7651 | 60.0 | 2310 | 2.9977 | 0.9503 | 0.9063 | | 2.7651 | 60.99 | 2348 | 3.0435 | 0.9479 | 0.9154 | | 2.7651 | 62.0 | 2387 | 3.0464 | 0.9475 | 0.9172 | | 2.7689 | 62.99 | 2425 | 3.0057 | 0.9475 | 0.9062 | | 2.7689 | 64.0 | 2464 | 3.0178 | 0.9523 | 0.9188 | | 2.7495 | 64.99 | 2502 | 3.0074 | 0.9507 | 0.9145 | | 2.7495 | 66.0 | 2541 | 3.0270 | 0.9487 | 0.8926 | | 2.7495 | 66.99 | 2579 | 3.0377 | 0.9499 | 0.9008 | | 2.7404 | 68.0 | 2618 | 3.0205 | 0.9527 | 0.9141 | | 2.7404 | 68.99 | 2656 | 3.0422 | 0.9463 | 0.8938 | | 2.7404 | 70.0 | 2695 | 3.0136 | 0.9431 | 0.8938 | | 2.7348 | 70.99 | 2733 | 3.0111 | 0.9447 | 0.8940 | | 2.7348 | 72.0 | 2772 | 3.0216 | 0.9426 | 0.8745 | | 2.725 | 72.99 | 2810 | 3.0288 | 0.9414 | 0.8675 | | 2.725 | 74.0 | 2849 | 3.0169 | 0.9410 | 0.8786 | | 2.725 | 74.99 | 2887 | 3.0193 | 0.9439 | 0.8680 | | 2.7197 | 76.0 | 2926 | 3.0248 | 0.9402 | 0.8596 | | 2.7197 | 76.99 | 2964 | 3.0186 | 0.9435 | 0.8674 | | 2.7124 | 78.0 | 3003 | 3.0096 | 0.9406 | 0.8680 | | 2.7124 | 78.99 | 3041 | 3.0055 | 0.9422 | 0.8660 | | 2.7124 | 80.0 | 3080 | 2.9959 | 0.9426 | 0.8699 | | 2.7051 | 80.99 | 3118 | 3.0010 | 0.9394 | 0.8591 | | 2.7051 | 82.0 | 3157 | 2.9945 | 0.9426 | 0.8692 | | 2.7051 | 82.99 | 3195 | 2.9885 | 0.9398 | 0.8558 | | 2.7046 | 84.0 | 3234 | 2.9983 | 0.9382 | 0.8624 | | 2.7046 | 84.99 | 3272 | 2.9976 | 0.9406 | 0.8518 | | 2.69 | 86.0 | 3311 | 2.9920 | 0.9362 | 0.8548 | | 2.69 | 86.99 | 3349 | 2.9858 | 0.9374 | 0.8499 | | 2.69 | 88.0 | 3388 | 2.9939 | 0.9382 | 0.8486 | | 2.7041 | 88.99 | 3426 | 2.9853 | 0.9382 | 0.8409 | | 2.7041 | 90.0 | 3465 | 2.9915 | 0.9398 | 0.8445 | | 2.6946 | 90.99 | 3503 | 2.9917 | 0.9370 | 0.8504 | | 2.6946 | 92.0 | 3542 | 2.9889 | 0.9394 | 0.8410 | | 2.6946 | 92.99 | 3580 | 2.9857 | 0.9374 | 0.8419 | | 2.6861 | 94.0 | 3619 | 2.9813 | 0.9386 | 0.8406 | | 2.6861 | 94.99 | 3657 | 2.9964 | 0.9386 | 0.8429 | | 2.6861 | 96.0 | 3696 | 2.9904 | 0.9390 | 0.8404 | | 2.6779 | 96.99 | 3734 | 2.9907 | 0.9394 | 0.8375 | | 2.6779 | 98.0 | 3773 | 2.9902 | 0.9382 | 0.8372 | | 2.6806 | 98.7 | 3800 | 2.9902 | 0.9382 | 0.8369 | ### Framework versions - Transformers 4.28.0 - Pytorch 2.2.1+cu121 - Datasets 2.17.0 - Tokenizers 0.13.3
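The Trainer-generated card above has no usage snippet; a minimal transcription sketch with the 🤗 pipeline (the audio path is a placeholder, and decoding compressed formats needs `ffmpeg`):

```python
from transformers import pipeline

# CTC fine-tune of wav2vec2 for Portuguese; the pipeline resamples input audio to 16 kHz as needed
asr = pipeline(
    "automatic-speech-recognition",
    model="alinerodrigues/wav2vec2-large-xlsr-mecita-coraa-portuguese-all-grade-3-5",
)
print(asr("reading_sample.wav")["text"])
```

Note that the reported WER (0.9386) is very high, so expect rough transcriptions from this checkpoint.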
vedantpalit/orcaminirlhfmodel
vedantpalit
2024-03-09T12:13:39Z
2
0
peft
[ "peft", "tensorboard", "safetensors", "trl", "dpo", "generated_from_trainer", "base_model:pankajmathur/orca_mini_3b", "base_model:adapter:pankajmathur/orca_mini_3b", "license:cc-by-nc-sa-4.0", "region:us" ]
null
2024-03-09T12:13:18Z
--- license: cc-by-nc-sa-4.0 library_name: peft tags: - trl - dpo - generated_from_trainer base_model: psmathur/orca_mini_3b model-index: - name: orcaminirlhfmodel results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # orcaminirlhfmodel This model is a fine-tuned version of [psmathur/orca_mini_3b](https://huggingface.co/psmathur/orca_mini_3b) on the None dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 4 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 16 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 0.5 - num_epochs: 3 ### Training results ### Framework versions - PEFT 0.9.0 - Transformers 4.38.2 - Pytorch 2.1.0+cu121 - Datasets 2.18.0 - Tokenizers 0.15.2
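The card lists only training settings; loading the DPO adapter for inference would look roughly like this (a sketch with the standard PEFT API; the base repo is named psmathur/orca_mini_3b in the card metadata and pankajmathur/orca_mini_3b in the tags, which appear to point at the same model):

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "pankajmathur/orca_mini_3b"  # assumed; the card's YAML names psmathur/orca_mini_3b
base = AutoModelForCausalLM.from_pretrained(base_id)
tokenizer = AutoTokenizer.from_pretrained(base_id)

# Attach the trained DPO adapter on top of the base model
model = PeftModel.from_pretrained(base, "vedantpalit/orcaminirlhfmodel")
```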
CreitinGameplays/elisa-chan-gpt2-xl
CreitinGameplays
2024-03-09T12:04:44Z
66
0
transformers
[ "transformers", "safetensors", "gpt2", "text-generation", "en", "dataset:CreitinGameplays/elisa-chan-v2", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-02-18T17:35:02Z
--- datasets: - CreitinGameplays/elisa-chan-v2 language: - en widget: - text: '<|USER|> Hello! <|ASSISTANT|> ' --- This is a fine-tuned GPT-2 XL model (I used "Locutusque/gpt2-xl-conversational" as the base model for fine-tuning). Prompt example: ``` <|USER|> who are you ? <|ASSISTANT|> Oh, I'm Elisa-chan, a 20-year-old Japanese woman with a radiant smile that mirrors the sun rising over the vibrant streets of Tokyo, Japan. Your heart, fueled by a passion for gaming, nature, animes, and cultural exploration, beats to the rhythm of life's diverse melodies. Nestled in the bustling metropolis of Tokyo, your days are colored with the harmonious blend of tradition and modernity. ```
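A minimal generation sketch that follows the `<|USER|>`/`<|ASSISTANT|>` format above (sampling settings are illustrative, not tuned):

```python
from transformers import pipeline

generator = pipeline("text-generation", model="CreitinGameplays/elisa-chan-gpt2-xl")
prompt = "<|USER|> who are you ? <|ASSISTANT|> "
out = generator(prompt, max_new_tokens=128, do_sample=True, temperature=0.8)
print(out[0]["generated_text"])
```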
ychafiqui/darija_sentiment_analysis
ychafiqui
2024-03-09T12:04:10Z
567
3
transformers
[ "transformers", "tf", "safetensors", "bert", "text-classification", "ary", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2023-11-27T13:50:19Z
--- language: - ary library_name: transformers widget: - text: أحسن تشكيلة بالتوفيق ان شاءالله - text: معاش الماتش بالضبط الخوت - text: معرفتش هذا السيد كفاش كيفكر لعب بنفس التشكيل في مباراة إيران لولا الطاف الله والبدلاء لخرجنا مشموتين..... - text: نتمني فوز للأشبال ورد الإعتبار للأسود في نصف النهائي - text: راه فهاد الحالة خاص يكون البديل كاين لي عندو تمن ديك الرحلة وكاتهبطوه فالطريق والليل اشنو يكمل على رجليه والله يهديكم - text: خاص يكون البديل ميمكنش نهبطو تماك وحنا بعاد على المحطة والليل وملقيناش طاكسيات - text: طرامزاي ف الرباط عاملين للتذكرة العادية غير 6 دراهم للرحلة الواحدة و نتوما يا عاملين ليها 8 دراهم زائد الخدمات العير مريحة اليات سحب التذاكر المعطلة في اغلب المحطات المراقبون الذين يتعاملون مع المسافرين باسلوب غير لائق لكن ليس كلهم طبعا وانما فئات منهم تاخر القاطرات عن الوقت المحدد في بعض الحالات التي قد تكون غير ناتجة عن اي سبب كان - text: واش نفس البطاقة تركب بها في ترامواي و البيس pipeline_tag: text-classification model-index: - name: darija_sentiment_analysis results: [] --- # Darija Sentiment Analysis A sentiment-analysis model for Moroccan Darija, fine-tuned from DarijaBERT on comments written in Darija scraped from the web. ## Model description Sentiment-analysis model fine-tuned from DarijaBERT on web-scraped comments written in Darija. ## Intended uses Sentiment analysis of text written in Moroccan Darija. ## Training and evaluation data Comments written in Darija, scraped from the web. ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: None - training_precision: float32 ### Training results No evaluation results were reported. ### Framework versions - Transformers 4.30.2 - TensorFlow 2.12.0-rc1 - Datasets 2.11.0 - Tokenizers 0.13.3
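A quick classification sketch with the 🤗 pipeline, reusing one of the widget examples above (the label set is not documented in the card, so inspect the returned labels):

```python
from transformers import pipeline

clf = pipeline("text-classification", model="ychafiqui/darija_sentiment_analysis")
print(clf("أحسن تشكيلة بالتوفيق ان شاءالله"))
```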
Harshad018/first-model
Harshad018
2024-03-09T12:00:31Z
91
0
transformers
[ "transformers", "safetensors", "bert", "fill-mask", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2024-03-09T11:59:42Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
Harshad018/distilgpt2-finetuned-wikitext2
Harshad018
2024-03-09T11:58:38Z
175
0
transformers
[ "transformers", "tensorboard", "safetensors", "gpt2", "text-generation", "generated_from_trainer", "base_model:distilbert/distilgpt2", "base_model:finetune:distilbert/distilgpt2", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-03-09T10:45:15Z
--- license: apache-2.0 base_model: distilgpt2 tags: - generated_from_trainer model-index: - name: distilgpt2-finetuned-wikitext2 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilgpt2-finetuned-wikitext2 This model is a fine-tuned version of [distilgpt2](https://huggingface.co/distilgpt2) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 3.6420 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3.0 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 3.7501 | 1.0 | 2334 | 3.6669 | | 3.6498 | 2.0 | 4668 | 3.6464 | | 3.6023 | 3.0 | 7002 | 3.6420 | ### Framework versions - Transformers 4.38.2 - Pytorch 2.1.0+cu121 - Datasets 2.18.0 - Tokenizers 0.15.2
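The card above reports only training details; trying the fine-tuned LM takes a few lines with the pipeline API (prompt and length are illustrative):

```python
from transformers import pipeline

generator = pipeline("text-generation", model="Harshad018/distilgpt2-finetuned-wikitext2")
print(generator("The history of natural language processing", max_new_tokens=50)[0]["generated_text"])
```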
std2110859015/PPO-LunarLander-v2
std2110859015
2024-03-09T11:56:57Z
1
0
stable-baselines3
[ "stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2024-03-09T10:25:52Z
--- library_name: stable-baselines3 tags: - LunarLander-v2 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: PPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 metrics: - type: mean_reward value: 273.90 +/- 18.33 name: mean_reward verified: false --- # **PPO** Agent playing **LunarLander-v2** This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) A minimal loading sketch; the checkpoint filename is an assumption (the course default), so check the repo's file list: ```python from stable_baselines3 import PPO from huggingface_sb3 import load_from_hub checkpoint = load_from_hub(repo_id="std2110859012/PPO-LunarLander-v2", filename="ppo-LunarLander-v2.zip") model = PPO.load(checkpoint) ```
Samuael/amharic-t5
Samuael
2024-03-09T11:56:17Z
123
0
transformers
[ "transformers", "tensorboard", "safetensors", "t5", "text2text-generation", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text2text-generation
2024-03-01T14:49:31Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
HosseinAmir/LunarLander
HosseinAmir
2024-03-09T11:49:28Z
0
0
stable-baselines3
[ "stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2024-03-09T11:49:14Z
--- library_name: stable-baselines3 tags: - LunarLander-v2 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: PPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 metrics: - type: mean_reward value: -937.27 +/- 446.90 name: mean_reward verified: false --- # **PPO** Agent playing **LunarLander-v2** This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code ```python from stable_baselines3 import ... from huggingface_sb3 import load_from_hub ... ```
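The usage section above is still a TODO; until it is filled in, a minimal loading sketch might look like this — the checkpoint filename is an assumption, so check the repo's file list:

```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# Download the checkpoint from the Hub; the filename below follows the usual
# course naming convention and is a guess, not something this card documents.
checkpoint = load_from_hub(repo_id="HosseinAmir/LunarLander", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```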
Abie32/w2v-bert-2.0-mongolian-colab-CV16.0
Abie32
2024-03-09T11:45:43Z
0
0
transformers
[ "transformers", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2024-03-09T10:46:16Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
ZainAli60/mine
ZainAli60
2024-03-09T11:44:30Z
6
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-03-05T21:50:53Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
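The "How to Get Started" section of this card is still a placeholder. Going only by the repo tags (`llama`, `text-generation`), a generic loading sketch would be the following — dtype and device settings are arbitrary defaults, not values the card specifies:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Repo id taken from this record; everything else is a generic default.
tokenizer = AutoTokenizer.from_pretrained("ZainAli60/mine")
model = AutoModelForCausalLM.from_pretrained(
    "ZainAli60/mine", torch_dtype=torch.bfloat16, device_map="auto"
)

inputs = tokenizer("Hello, my name is", return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```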
glavanor/rl-test
glavanor
2024-03-09T11:41:54Z
0
0
stable-baselines3
[ "stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2024-03-09T10:55:47Z
--- library_name: stable-baselines3 tags: - LunarLander-v2 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: PPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 metrics: - type: mean_reward value: 265.27 +/- 15.86 name: mean_reward verified: false --- # **PPO** Agent playing **LunarLander-v2** This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code ```python from stable_baselines3 import ... from huggingface_sb3 import load_from_hub ... ```
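One way to complete the placeholder above and sanity-check the reported score — the checkpoint filename is an assumption:

```python
import gymnasium as gym
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO
from stable_baselines3.common.evaluation import evaluate_policy

# The filename is a guess based on the usual course convention.
checkpoint = load_from_hub(repo_id="glavanor/rl-test", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)

# Re-evaluate over 10 episodes to compare against the card's mean reward.
env = gym.make("LunarLander-v2")
mean_reward, std_reward = evaluate_policy(model, env, n_eval_episodes=10, deterministic=True)
print(f"mean_reward={mean_reward:.2f} +/- {std_reward:.2f}")
```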
mn2110859012/LunarLander-v2
mn2110859012
2024-03-09T11:31:02Z
0
0
stable-baselines3
[ "stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2024-03-09T11:30:41Z
--- library_name: stable-baselines3 tags: - LunarLander-v2 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: PPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 metrics: - type: mean_reward value: 255.37 +/- 22.31 name: mean_reward verified: false --- # **PPO** Agent playing **LunarLander-v2** This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code ```python from stable_baselines3 import ... from huggingface_sb3 import load_from_hub ... ```
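A possible completion of the snippet above that rolls out a single episode; the checkpoint filename is an assumption:

```python
import gymnasium as gym
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

checkpoint = load_from_hub(repo_id="mn2110859012/LunarLander-v2", filename="ppo-LunarLander-v2.zip")  # filename is a guess
model = PPO.load(checkpoint)

# Roll out one episode with the trained policy and report the return.
env = gym.make("LunarLander-v2")
obs, info = env.reset()
episode_return, done = 0.0, False
while not done:
    action, _states = model.predict(obs, deterministic=True)
    obs, reward, terminated, truncated, info = env.step(action)
    episode_return += reward
    done = terminated or truncated
print(f"episode return: {episode_return:.2f}")
env.close()
```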
artemis13fowl/multi-user-chat-open-llama-7b-v2-open-instruct-completions-only
artemis13fowl
2024-03-09T11:22:28Z
0
0
transformers
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2024-03-09T11:22:24Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
aa-unh/Taxi-v3
aa-unh
2024-03-09T11:20:34Z
0
0
null
[ "Taxi-v3", "q-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
reinforcement-learning
2024-03-09T11:20:32Z
--- tags: - Taxi-v3 - q-learning - reinforcement-learning - custom-implementation model-index: - name: Taxi-v3 results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: Taxi-v3 type: Taxi-v3 metrics: - type: mean_reward value: 7.56 +/- 2.71 name: mean_reward verified: false --- # **Q-Learning** Agent playing **Taxi-v3** This is a trained model of a **Q-Learning** agent playing **Taxi-v3**. ## Usage ```python model = load_from_hub(repo_id="aa-unh/Taxi-v3", filename="q-learning.pkl") # Don't forget to check if you need to add additional attributes (is_slippery=False etc) env = gym.make(model["env_id"]) ```
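Note that `load_from_hub` here is the deep-rl-course notebook helper, not an import from `huggingface_sb3`. A sketch of what that helper does, with the body assumed from the course material:

```python
import pickle
import gymnasium as gym
from huggingface_hub import hf_hub_download

def load_from_hub(repo_id: str, filename: str):
    # Download the pickled Q-table bundle from the Hub and deserialize it.
    pickled_model = hf_hub_download(repo_id=repo_id, filename=filename)
    with open(pickled_model, "rb") as f:
        return pickle.load(f)

model = load_from_hub(repo_id="aa-unh/Taxi-v3", filename="q-learning.pkl")
env = gym.make(model["env_id"])
```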
Sebu/Reinforce-copter
Sebu
2024-03-09T11:19:25Z
0
0
null
[ "Pixelcopter-PLE-v0", "reinforce", "reinforcement-learning", "custom-implementation", "deep-rl-class", "model-index", "region:us" ]
reinforcement-learning
2024-03-09T11:19:21Z
--- tags: - Pixelcopter-PLE-v0 - reinforce - reinforcement-learning - custom-implementation - deep-rl-class model-index: - name: Reinforce-copter results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: Pixelcopter-PLE-v0 type: Pixelcopter-PLE-v0 metrics: - type: mean_reward value: 31.20 +/- 23.97 name: mean_reward verified: false --- # **Reinforce** Agent playing **Pixelcopter-PLE-v0** This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0**. To learn how to use this model and train your own, check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
wildphil/rl-test
wildphil
2024-03-09T11:15:20Z
0
0
stable-baselines3
[ "stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2024-03-09T11:14:57Z
--- library_name: stable-baselines3 tags: - LunarLander-v2 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: PPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 metrics: - type: mean_reward value: 241.35 +/- 25.76 name: mean_reward verified: false --- # **PPO** Agent playing **LunarLander-v2** This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code ```python from stable_baselines3 import ... from huggingface_sb3 import load_from_hub ... ```
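Pending the author's own snippet, a hedged way to pull this checkpoint down — the filename follows the usual course convention and is a guess:

```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# Download and load the trained agent; verify the filename in the repo's file list.
checkpoint = load_from_hub(repo_id="wildphil/rl-test", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```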
Nikhil2904/mcq
Nikhil2904
2024-03-09T11:13:55Z
91
0
transformers
[ "transformers", "safetensors", "t5", "text2text-generation", "en", "dataset:squad", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text2text-generation
2024-02-14T13:13:33Z
--- datasets: - squad language: - en metrics: - bleu - accuracy library_name: transformers pipeline_tag: text2text-generation ---
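This card is metadata-only. Going by the `text2text-generation` pipeline tag and the SQuAD training data, a hedged usage sketch — the prompt format is a guess, not something the card documents:

```python
from transformers import pipeline

# Model id from this record; the "generate question:" prefix is an assumption.
generator = pipeline("text2text-generation", model="Nikhil2904/mcq")
print(generator("generate question: The Eiffel Tower was completed in 1889.")[0]["generated_text"])
```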
codescv123/poca-Huggy
codescv123
2024-03-09T11:13:12Z
4
0
ml-agents
[ "ml-agents", "tensorboard", "onnx", "Huggy", "deep-reinforcement-learning", "reinforcement-learning", "ML-Agents-Huggy", "region:us" ]
reinforcement-learning
2024-03-09T11:08:17Z
--- library_name: ml-agents tags: - Huggy - deep-reinforcement-learning - reinforcement-learning - ML-Agents-Huggy --- # **poca** Agent playing **Huggy** This is a trained model of a **poca** agent playing **Huggy** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents). ## Usage (with ML-Agents) The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/ We wrote a complete tutorial on how to train your first agent using ML-Agents and publish it to the Hub: - A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction - A *longer tutorial* to understand how ML-Agents works: https://huggingface.co/learn/deep-rl-course/unit5/introduction ### Resume the training ```bash mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume ``` ### Watch your Agent play You can watch your agent **playing directly in your browser**: 1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity 2. Find your model_id: codescv123/poca-Huggy 3. Select your *.nn /*.onnx file 4. Click on Watch the agent play 👀
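To resume from this repo's checkpoint rather than a local run, the files first have to be on disk. With the course's ML-Agents setup, something along these lines should work — the local directory is an arbitrary choice:

```bash
# Fetch the trained model files from the Hub, then resume training from them.
mlagents-load-from-hf --repo-id="codescv123/poca-Huggy" --local-dir="./downloads/Huggy"
```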
molzi/ppo-LunarLander-v2
molzi
2024-03-09T11:11:57Z
0
0
stable-baselines3
[ "stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2024-03-09T11:11:35Z
--- library_name: stable-baselines3 tags: - LunarLander-v2 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: PPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 metrics: - type: mean_reward value: 234.68 +/- 78.35 name: mean_reward verified: false --- # **PPO** Agent playing **LunarLander-v2** This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code ```python from stable_baselines3 import ... from huggingface_sb3 import load_from_hub ... ```
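Given the repo name, the checkpoint is probably `ppo-LunarLander-v2.zip`, though that remains an assumption; a sketch that loads and re-evaluates it:

```python
import gymnasium as gym
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO
from stable_baselines3.common.evaluation import evaluate_policy

checkpoint = load_from_hub(repo_id="molzi/ppo-LunarLander-v2", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)

# Ten deterministic evaluation episodes, mirroring the card's reported metric.
env = gym.make("LunarLander-v2")
mean_reward, std_reward = evaluate_policy(model, env, n_eval_episodes=10, deterministic=True)
print(f"mean_reward={mean_reward:.2f} +/- {std_reward:.2f}")
```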
sooko/sooko_ai
sooko
2024-03-09T11:08:58Z
1
0
peft
[ "peft", "safetensors", "mistral", "arxiv:1910.09700", "base_model:mistralai/Mistral-7B-v0.1", "base_model:adapter:mistralai/Mistral-7B-v0.1", "region:us" ]
null
2024-03-09T10:39:34Z
--- library_name: peft base_model: mistralai/Mistral-7B-v0.1 --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ### Framework versions - PEFT 0.8.2
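Since this repo stores a PEFT adapter on top of `mistralai/Mistral-7B-v0.1`, a minimal loading sketch — dtype and device placement are arbitrary choices, not values the card specifies:

```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the base model first, then attach this repo's adapter weights.
base = AutoModelForCausalLM.from_pretrained(
    "mistralai/Mistral-7B-v0.1", torch_dtype=torch.bfloat16, device_map="auto"
)
model = PeftModel.from_pretrained(base, "sooko/sooko_ai")
tokenizer = AutoTokenizer.from_pretrained("mistralai/Mistral-7B-v0.1")
```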
LAP0109/rl-test
LAP0109
2024-03-09T11:07:32Z
0
0
stable-baselines3
[ "stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2024-03-09T10:16:26Z
--- library_name: stable-baselines3 tags: - LunarLander-v2 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: PPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 metrics: - type: mean_reward value: 200.42 +/- 69.39 name: mean_reward verified: false --- # **PPO** Agent playing **LunarLander-v2** This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code ```python from stable_baselines3 import ... from huggingface_sb3 import load_from_hub ... ```
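One hedged way to finish the placeholder above and watch the policy act for a single episode — the filename is assumed:

```python
import gymnasium as gym
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

checkpoint = load_from_hub(repo_id="LAP0109/rl-test", filename="ppo-LunarLander-v2.zip")  # filename is a guess
model = PPO.load(checkpoint)

# One greedy rollout; the return should land near the card's reported mean.
env = gym.make("LunarLander-v2")
obs, info = env.reset()
episode_return, done = 0.0, False
while not done:
    action, _states = model.predict(obs, deterministic=True)
    obs, reward, terminated, truncated, info = env.step(action)
    episode_return += reward
    done = terminated or truncated
print(f"episode return: {episode_return:.2f}")
env.close()
```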
LostPersona/ppo-LunarLander-v2
LostPersona
2024-03-09T11:05:19Z
0
0
stable-baselines3
[ "stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2024-03-09T11:03:56Z
--- library_name: stable-baselines3 tags: - LunarLander-v2 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: PPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 metrics: - type: mean_reward value: 203.29 +/- 81.22 name: mean_reward verified: false --- # **PPO** Agent playing **LunarLander-v2** This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) ```python from stable_baselines3 import ... from huggingface_sb3 import load_from_hub ... ```
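As above, the snippet is a placeholder; one plausible completion, with the filename as an assumption:

```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# Download the checkpoint; confirm the actual filename in the repo before use.
checkpoint = load_from_hub(repo_id="LostPersona/ppo-LunarLander-v2", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```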
DanuBizz/PPO_LunarLander-v2
DanuBizz
2024-03-09T11:05:18Z
0
0
stable-baselines3
[ "stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2024-03-09T11:04:46Z
--- library_name: stable-baselines3 tags: - LunarLander-v2 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: PPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 metrics: - type: mean_reward value: -146.16 +/- 46.72 name: mean_reward verified: false --- # **PPO** Agent playing **LunarLander-v2** This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code ```python from stable_baselines3 import ... from huggingface_sb3 import load_from_hub ... ```
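A hedged sketch for loading the model and re-running the evaluation, with the filename assumed from the course convention:

```python
import gymnasium as gym
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO
from stable_baselines3.common.evaluation import evaluate_policy

checkpoint = load_from_hub(repo_id="DanuBizz/PPO_LunarLander-v2", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)

# Re-evaluate; with the card reporting a negative mean reward, expect crashes.
env = gym.make("LunarLander-v2")
mean_reward, std_reward = evaluate_policy(model, env, n_eval_episodes=10, deterministic=True)
print(f"mean_reward={mean_reward:.2f} +/- {std_reward:.2f}")
```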
thomasforjan/rl-lab1
thomasforjan
2024-03-09T11:00:41Z
0
0
stable-baselines3
[ "stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2024-03-09T11:00:17Z
--- library_name: stable-baselines3 tags: - LunarLander-v2 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: PPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 metrics: - type: mean_reward value: 190.91 +/- 74.00 name: mean_reward verified: false --- # **PPO** Agent playing **LunarLander-v2** This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code ```python from stable_baselines3 import ... from huggingface_sb3 import load_from_hub ... ```
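A sketch that completes the placeholder above with a single-episode rollout; the checkpoint filename is a guess:

```python
import gymnasium as gym
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

checkpoint = load_from_hub(repo_id="thomasforjan/rl-lab1", filename="ppo-LunarLander-v2.zip")  # filename assumed
model = PPO.load(checkpoint)

env = gym.make("LunarLander-v2")
obs, info = env.reset()
episode_return, done = 0.0, False
while not done:
    action, _states = model.predict(obs, deterministic=True)
    obs, reward, terminated, truncated, info = env.step(action)
    episode_return += reward
    done = terminated or truncated
print(f"episode return: {episode_return:.2f}")
env.close()
```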
LexiconShiftInnovations/sinhala_facebook_stories
LexiconShiftInnovations
2024-03-09T10:57:56Z
62
0
transformers
[ "transformers", "tensorboard", "safetensors", "gemma", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "4-bit", "bitsandbytes", "region:us" ]
text-generation
2024-03-09T10:17:14Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
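This card's "How to Get Started" section is still a placeholder. The tags indicate a 4-bit bitsandbytes Gemma checkpoint, so a hedged loading sketch would be the following — it assumes the quantization settings are stored in the repo's config:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Repo id from this record; 4-bit loading relies on the saved quantization config.
tokenizer = AutoTokenizer.from_pretrained("LexiconShiftInnovations/sinhala_facebook_stories")
model = AutoModelForCausalLM.from_pretrained(
    "LexiconShiftInnovations/sinhala_facebook_stories", device_map="auto"
)

inputs = tokenizer("Once upon a time", return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=48)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```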
Dagobert42/xlnet-base-cased-biored-augmented
Dagobert42
2024-03-09T10:55:09Z
7
0
transformers
[ "transformers", "safetensors", "xlnet", "token-classification", "low-resource NER", "token_classification", "biomedicine", "medical NER", "generated_from_trainer", "en", "dataset:medicine", "base_model:xlnet/xlnet-base-cased", "base_model:finetune:xlnet/xlnet-base-cased", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2024-02-14T20:12:23Z
--- language: - en license: mit base_model: xlnet-base-cased tags: - low-resource NER - token_classification - biomedicine - medical NER - generated_from_trainer datasets: - medicine metrics: - accuracy - precision - recall - f1 model-index: - name: Dagobert42/xlnet-base-cased-biored-augmented results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Dagobert42/xlnet-base-cased-biored-augmented This model is a fine-tuned version of [xlnet-base-cased](https://huggingface.co/xlnet-base-cased) on the bigbio/biored dataset. It achieves the following results on the evaluation set: - Loss: 0.1527 - Accuracy: 0.9501 - Precision: 0.8539 - Recall: 0.8292 - F1: 0.8405 - Weighted F1: 0.95 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 50 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | F1 | Weighted F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:---------:|:------:|:------:|:-----------:| | No log | 1.0 | 20 | 0.2071 | 0.9301 | 0.8427 | 0.744 | 0.7851 | 0.9282 | | No log | 2.0 | 40 | 0.1945 | 0.9378 | 0.8251 | 0.8091 | 0.8159 | 0.9368 | | No log | 3.0 | 60 | 0.1952 | 0.9403 | 0.8461 | 0.8125 | 0.828 | 0.9397 | | No log | 4.0 | 80 | 0.1981 | 0.942 | 0.8486 | 0.8211 | 0.8339 | 0.9414 | | No log | 5.0 | 100 | 0.2051 | 0.9436 | 0.8376 | 0.8228 | 0.8296 | 0.943 | ### Framework versions - Transformers 4.35.2 - Pytorch 2.0.1+cu117 - Datasets 2.18.0 - Tokenizers 0.15.0
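For a quick smoke test of the tagger, the standard `transformers` pipeline works; the aggregation strategy below is a common default, not something the card prescribes:

```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="Dagobert42/xlnet-base-cased-biored-augmented",
    aggregation_strategy="simple",  # merge sub-word pieces into entity spans
)
print(ner("Mutations in BRCA1 are associated with an increased risk of breast cancer."))
```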
BachNgoH/Gemma_7b_pythonQA_ds
BachNgoH
2024-03-09T10:48:48Z
0
0
transformers
[ "transformers", "text-generation-inference", "unsloth", "gemma", "trl", "en", "base_model:unsloth/gemma-7b-bnb-4bit", "base_model:finetune:unsloth/gemma-7b-bnb-4bit", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2024-03-09T10:48:43Z
--- language: - en license: apache-2.0 tags: - text-generation-inference - transformers - unsloth - gemma - trl base_model: unsloth/gemma-7b-bnb-4bit --- # Uploaded model - **Developed by:** BachNgoH - **License:** apache-2.0 - **Finetuned from model:** unsloth/gemma-7b-bnb-4bit This Gemma model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
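A minimal way to load this checkpoint back through Unsloth for inference — this assumes the repo contains weights or adapters that `FastLanguageModel` can resolve against the 4-bit base, which the card doesn't confirm:

```python
from unsloth import FastLanguageModel

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="BachNgoH/Gemma_7b_pythonQA_ds",  # this repo; its exact contents are an assumption
    max_seq_length=2048,
    load_in_4bit=True,
)
FastLanguageModel.for_inference(model)  # switch Unsloth to its faster inference path
```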
m0z21/rl-test
m0z21
2024-03-09T10:46:52Z
0
0
stable-baselines3
[ "stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2024-03-09T10:46:19Z
--- library_name: stable-baselines3 tags: - LunarLander-v2 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: PPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 metrics: - type: mean_reward value: 238.52 +/- 22.43 name: mean_reward verified: false --- # **PPO** Agent playing **LunarLander-v2** This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code ```python from stable_baselines3 import ... from huggingface_sb3 import load_from_hub ... ```
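A possible completion of the placeholder above — the stored filename is not documented, so the one below is a guess:

```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

checkpoint = load_from_hub(repo_id="m0z21/rl-test", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```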
niklaskahr/rl-test
niklaskahr
2024-03-09T10:38:48Z
1
0
stable-baselines3
[ "stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2024-03-09T10:38:13Z
--- library_name: stable-baselines3 tags: - LunarLander-v2 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: PPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 metrics: - type: mean_reward value: 55.05 +/- 116.22 name: mean_reward verified: false --- # **PPO** Agent playing **LunarLander-v2** This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code ```python from stable_baselines3 import ... from huggingface_sb3 import load_from_hub ... ```
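To fill in the TODO above, something like the following should work, assuming the usual checkpoint filename:

```python
import gymnasium as gym
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO
from stable_baselines3.common.evaluation import evaluate_policy

checkpoint = load_from_hub(repo_id="niklaskahr/rl-test", filename="ppo-LunarLander-v2.zip")  # filename assumed
model = PPO.load(checkpoint)

# The card reports a high-variance score (55.05 +/- 116.22); re-evaluation will vary.
env = gym.make("LunarLander-v2")
mean_reward, std_reward = evaluate_policy(model, env, n_eval_episodes=10, deterministic=True)
print(f"mean_reward={mean_reward:.2f} +/- {std_reward:.2f}")
```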
oakela/finetuned_starcoder2_test
oakela
2024-03-09T10:21:25Z
7
0
transformers
[ "transformers", "safetensors", "starcoder2", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-03-09T09:21:37Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
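This card's "How to Get Started" section is still a placeholder; going only by the tags (`starcoder2`, `text-generation`), a hedged code-completion sketch:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Repo id from this record; generation settings are generic defaults.
tokenizer = AutoTokenizer.from_pretrained("oakela/finetuned_starcoder2_test")
model = AutoModelForCausalLM.from_pretrained("oakela/finetuned_starcoder2_test", device_map="auto")

prompt = "def fibonacci(n):"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=48)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```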
SilasK/mistral-7b-medqa-version1
SilasK
2024-03-09T10:20:35Z
1
0
peft
[ "peft", "tensorboard", "safetensors", "trl", "sft", "generated_from_trainer", "base_model:mistralai/Mistral-7B-Instruct-v0.1", "base_model:adapter:mistralai/Mistral-7B-Instruct-v0.1", "license:apache-2.0", "region:us" ]
null
2024-03-07T23:25:03Z
--- license: apache-2.0 library_name: peft tags: - trl - sft - generated_from_trainer base_model: mistralai/Mistral-7B-Instruct-v0.1 model-index: - name: mistral-7b-medqa-version1 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # mistral-7b-medqa-version1 This model is a fine-tuned version of [mistralai/Mistral-7B-Instruct-v0.1](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.1) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 3 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 6 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: constant - lr_scheduler_warmup_ratio: 0.03 - num_epochs: 2 ### Training results ### Framework versions - PEFT 0.7.1 - Transformers 4.37.0.dev0 - Pytorch 2.1.2+cu118 - Datasets 2.16.1 - Tokenizers 0.15.0
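Since this repo is a PEFT adapter on `mistralai/Mistral-7B-Instruct-v0.1`, a minimal loading sketch — dtype and device placement are arbitrary defaults:

```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base = AutoModelForCausalLM.from_pretrained(
    "mistralai/Mistral-7B-Instruct-v0.1", torch_dtype=torch.bfloat16, device_map="auto"
)
model = PeftModel.from_pretrained(base, "SilasK/mistral-7b-medqa-version1")
tokenizer = AutoTokenizer.from_pretrained("mistralai/Mistral-7B-Instruct-v0.1")
# model = model.merge_and_unload()  # optional: fold the adapter into the base weights
```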
thibaud-perrin/hibo-mistral-7b-fc-v1.3
thibaud-perrin
2024-03-09T10:17:07Z
5
0
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "conversational", "en", "dataset:thibaud-perrin/hibo-function-calling-v1", "license:mit", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-03-09T09:41:20Z
--- library_name: transformers license: mit datasets: - thibaud-perrin/hibo-function-calling-v1 language: - en pipeline_tag: text-generation --- # Model Card for thibaud-perrin/hibo-mistral-7b-fc-v1.3 <div align="center"> <img src="./img/banner2.webp" width="100%" /> </div> [![GitHub](https://img.shields.io/badge/GitHub-Repository-blue.svg)](https://github.com/thibaud-perrin/hibo-mistral-7b-fc) This model is a fine-tuned version of `mistralai/Mistral-7B-v0.1` for instruction-following and function-calling tasks. It is designed to understand and generate responses based on given instructions or function calls. ## Model Details ### Model Description Developed by Thibaud Perrin, this model is fine-tuned specifically for the task of interpreting instructions and generating appropriate responses or function calls in English. It leverages the power of the Mistral-7B model, adapting its capabilities to more targeted use cases. - **Developed by:** Thibaud Perrin - **Model type:** CAUSAL_LM - **Language(s) (NLP):** English - **License:** MIT - **Finetuned from model:** Mistral-7B ## Uses This model is intended for developers, researchers, and hobbyists looking for a pre-trained model capable of understanding and responding to instructions or executing function calls within a given context. ### Direct Use The model can be directly used via the Hugging Face Transformers library for generating text based on prompts related to instructions or function calls. ### Out-of-Scope Use This model is not intended for high-stakes decisions or scenarios where misunderstanding instructions could lead to significant consequences. ## Bias, Risks, and Limitations As with any language model, there's a risk of generating biased or inappropriate content. Users should be cautious and evaluate the model's outputs within their specific context. ### Recommendations Users should monitor the model's outputs and apply additional filtering or moderation as needed to ensure the generated content is appropriate for their use case. ## How to Get Started with the Model ```python import torch from transformers import AutoModelForCausalLM, AutoTokenizer, TextStreamer model_identifier = "thibaud-perrin/hibo-mistral-7b-fc-v1.3" model = AutoModelForCausalLM.from_pretrained( model_identifier, low_cpu_mem_usage=True, return_dict=True, torch_dtype=torch.bfloat16, device_map={"": 0}, ) tokenizer = AutoTokenizer.from_pretrained(model_identifier) device = 'cuda:0' # device = 'cpu' model.config.use_cache = True model.eval() model.to(device) def stream(user_prompt): system_prompt = """You are a helpful assistant with access to the following functions. 
Use them if required - { "name": "get_stock_price", "description": "Get the current stock price of a company", "parameters": { "type": "object", "properties": { "company_name": { "type": "string", "description": "The name of the company" }, "exchange": { "type": "string", "description": "The stock exchange where the company is listed" } }, "required": [ "company_name", "exchange" ] } } """ messages = [ {"role": "system", "content": system_prompt}, {"role": "user", "content": user_prompt.strip()} ] transformed_data = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=False) inputs = tokenizer([transformed_data], return_tensors="pt", add_special_tokens=True).to(device) streamer = TextStreamer(tokenizer, skip_prompt=True, skip_special_tokens=False) _ = model.generate(**inputs, streamer=streamer, max_new_tokens=512, eos_token_id=tokenizer.eos_token_id, early_stopping=True) stream("Hi, can you tell me the current stock price of Apple on NASDAQ? ") ``` ## Training Details ### Training Data The model was trained using the dataset `thibaud-perrin/hibo-function-calling-v1`, which consists of various instruction-following and function-calling examples. #### Summary The fine-tuned model demonstrates a significant improvement in understanding and generating instruction-based responses compared to the base Mistral-7B model. Note, however, that this model was trained on only the first 50,000 rows of the dataset, for a single epoch. ## Environmental Impact - **Hardware Type:** A100 - 40GB - **Hours used:** 48H - **Cloud Provider:** Google Colab - **Compute Region:** France - **Carbon Emitted:** Estimates needed ## 📚 Citation Please cite this model using the following BibTeX entry: ```bibtex @misc{hibo-mistral-7b-fc-v1.3, author = {Thibaud Perrin}, title = {hibo-mistral-7b-fc-v1.3: An Instruct Model for Function Calling in Conversational AI}, year = {2024}, publisher = {Hugging Face}, } ```
SampleAccount123/Test1_lora
SampleAccount123
2024-03-09T10:12:33Z
0
0
transformers
[ "transformers", "safetensors", "text-generation-inference", "unsloth", "mistral", "trl", "en", "base_model:unsloth/mistral-7b-bnb-4bit", "base_model:finetune:unsloth/mistral-7b-bnb-4bit", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2024-03-09T10:12:23Z
--- language: - en license: apache-2.0 tags: - text-generation-inference - transformers - unsloth - mistral - trl base_model: unsloth/mistral-7b-bnb-4bit --- # Uploaded model - **Developed by:** SampleAccount123 - **License:** apache-2.0 - **Finetuned from model:** unsloth/mistral-7b-bnb-4bit This Mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
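For inference, the adapter can be loaded back with Unsloth's fast loader. A minimal sketch, assuming a CUDA GPU is available and that the adapter repo resolves directly by name:

```python
from unsloth import FastLanguageModel

# Loads the 4-bit base model and applies this repo's LoRA weights.
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="SampleAccount123/Test1_lora",
    max_seq_length=2048,
    load_in_4bit=True,
)
FastLanguageModel.for_inference(model)  # enable Unsloth's faster inference path

inputs = tokenizer("The capital of France is", return_tensors="pt").to("cuda")
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=16)[0]))
```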
6001k1d/q-Taxi-v3
6001k1d
2024-03-09T10:03:28Z
0
0
null
[ "Taxi-v3", "q-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
reinforcement-learning
2022-06-09T18:48:35Z
--- tags: - Taxi-v3 - q-learning - reinforcement-learning - custom-implementation model-index: - name: q-Taxi-v3 results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: Taxi-v3 type: Taxi-v3 metrics: - type: mean_reward value: 7.56 +/- 2.71 name: mean_reward verified: false --- # **Q-Learning** Agent playing **Taxi-v3** This is a trained model of a **Q-Learning** agent playing **Taxi-v3**. ## Usage ```python model = load_from_hub(repo_id="6001k1d/q-Taxi-v3", filename="q-learning.pkl") # Don't forget to check if you need to add additional attributes (is_slippery=False etc) env = gym.make(model["env_id"]) evaluate_agent(env, model["max_steps"], model["n_eval_episodes"], model["qtable"], model["eval_seed"]) ```
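Both `load_from_hub` and `evaluate_agent` in the snippet above are helpers defined in the Deep RL course notebooks, not library imports. A minimal sketch of the pair, assuming the model was pushed as a pickle file and evaluating the greedy policy under the current Gymnasium API (older notebooks used classic `gym`):

```python
import pickle

import numpy as np
from huggingface_hub import hf_hub_download

def load_from_hub(repo_id: str, filename: str) -> dict:
    """Download a pickled Q-learning model (Q-table plus metadata) from the Hub."""
    pickle_path = hf_hub_download(repo_id=repo_id, filename=filename)
    with open(pickle_path, "rb") as f:
        return pickle.load(f)

def evaluate_agent(env, max_steps, n_eval_episodes, Q, seed):
    """Roll out the greedy policy and return (mean, std) of episode rewards."""
    episode_rewards = []
    for episode in range(n_eval_episodes):
        if seed:
            state, _ = env.reset(seed=seed[episode])
        else:
            state, _ = env.reset()
        total_reward = 0
        for _ in range(max_steps):
            action = int(np.argmax(Q[state]))  # greedy action from the Q-table
            state, reward, terminated, truncated, _ = env.step(action)
            total_reward += reward
            if terminated or truncated:
                break
        episode_rewards.append(total_reward)
    return np.mean(episode_rewards), np.std(episode_rewards)
```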
MrPio/ppo-Huggy
MrPio
2024-03-09T09:59:56Z
8
0
ml-agents
[ "ml-agents", "tensorboard", "onnx", "Huggy", "deep-reinforcement-learning", "reinforcement-learning", "ML-Agents-Huggy", "region:us" ]
reinforcement-learning
2024-03-09T09:59:05Z
--- library_name: ml-agents tags: - Huggy - deep-reinforcement-learning - reinforcement-learning - ML-Agents-Huggy --- # **ppo** Agent playing **Huggy** This is a trained model of a **ppo** agent playing **Huggy** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents). ## Usage (with ML-Agents) The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/ We wrote a complete tutorial on training your first agent with ML-Agents and publishing it to the Hub: - A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction - A *longer tutorial* to understand how ML-Agents works: https://huggingface.co/learn/deep-rl-course/unit5/introduction ### Resume the training ```bash mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume ``` ### Watch your Agent play You can watch your agent **playing directly in your browser**: 1. If the environment is part of the official ML-Agents environments, go to https://huggingface.co/unity 2. Find your model_id: MrPio/ppo-Huggy 3. Select your *.nn / *.onnx file 4. Click on Watch the agent play 👀
Owhslp/nous_researcher_tuning_2_10
Owhslp
2024-03-09T09:44:47Z
5
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-03-09T08:12:01Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
Emptier8126/q-FrozenLake-v1-4x4-noSlippery
Emptier8126
2024-03-09T09:39:29Z
0
0
null
[ "FrozenLake-v1-4x4-no_slippery", "q-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
reinforcement-learning
2024-03-09T09:39:26Z
--- tags: - FrozenLake-v1-4x4-no_slippery - q-learning - reinforcement-learning - custom-implementation model-index: - name: q-FrozenLake-v1-4x4-noSlippery results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: FrozenLake-v1-4x4-no_slippery type: FrozenLake-v1-4x4-no_slippery metrics: - type: mean_reward value: 1.00 +/- 0.00 name: mean_reward verified: false --- # **Q-Learning** Agent playing **FrozenLake-v1** This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**. ## Usage ```python model = load_from_hub(repo_id="Emptier8126/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl") # Don't forget to check if you need to add additional attributes (is_slippery=False etc) env = gym.make(model["env_id"]) ```
jayakrishnanmm/whisper-large-v3-atco2-asr
jayakrishnanmm
2024-03-09T09:38:33Z
4
0
transformers
[ "transformers", "tensorboard", "safetensors", "whisper", "automatic-speech-recognition", "generated_from_trainer", "base_model:openai/whisper-large-v3", "base_model:finetune:openai/whisper-large-v3", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2024-03-08T13:36:35Z
--- license: apache-2.0 base_model: openai/whisper-large-v3 tags: - generated_from_trainer metrics: - wer model-index: - name: whisper-large-v3-atco2-asr results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # whisper-large-v3-atco2-asr This model is a fine-tuned version of [openai/whisper-large-v3](https://huggingface.co/openai/whisper-large-v3) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.7578 - Wer: 29.0480 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - distributed_type: multi-GPU - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 100 - training_steps: 2800 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:-------:| | 0.1323 | 3.57 | 100 | 0.5386 | 20.4626 | | 0.0207 | 7.14 | 200 | 0.5952 | 40.3025 | | 0.0125 | 10.71 | 300 | 0.5767 | 25.9342 | | 0.0056 | 14.29 | 400 | 0.6133 | 20.6406 | | 0.0022 | 17.86 | 500 | 0.6367 | 30.2936 | | 0.0005 | 21.43 | 600 | 0.6670 | 21.6637 | | 0.0002 | 25.0 | 700 | 0.6841 | 22.2420 | | 0.0002 | 28.57 | 800 | 0.6948 | 23.4431 | | 0.0001 | 32.14 | 900 | 0.7026 | 23.6210 | | 0.0001 | 35.71 | 1000 | 0.7095 | 26.0676 | | 0.0001 | 39.29 | 1100 | 0.7153 | 25.9786 | | 0.0001 | 42.86 | 1200 | 0.7202 | 25.1335 | | 0.0001 | 46.43 | 1300 | 0.7251 | 25.3559 | | 0.0001 | 50.0 | 1400 | 0.7295 | 29.4929 | | 0.0001 | 53.57 | 1500 | 0.7334 | 25.9786 | | 0.0001 | 57.14 | 1600 | 0.7373 | 28.6032 | | 0.0001 | 60.71 | 1700 | 0.7402 | 28.9146 | | 0.0 | 64.29 | 1800 | 0.7427 | 29.4484 | | 0.0001 | 67.86 | 1900 | 0.7461 | 29.4484 | | 0.0 | 71.43 | 2000 | 0.7480 | 32.2509 | | 0.0 | 75.0 | 2100 | 0.7505 | 32.2064 | | 0.0001 | 78.57 | 2200 | 0.7524 | 32.2064 | | 0.0 | 82.14 | 2300 | 0.7539 | 32.2509 | | 0.0 | 85.71 | 2400 | 0.7549 | 32.3843 | | 0.0 | 89.29 | 2500 | 0.7563 | 32.2954 | | 0.0 | 92.86 | 2600 | 0.7573 | 32.3399 | | 0.0 | 96.43 | 2700 | 0.7578 | 29.0480 | | 0.0 | 100.0 | 2800 | 0.7578 | 29.0480 | ### Framework versions - Transformers 4.37.0 - Pytorch 2.2.1+cu121 - Datasets 2.18.0 - Tokenizers 0.15.2
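The template sections above are still unfilled; as a starting point, a minimal inference sketch using the 🤗 `pipeline` API (the audio path is a placeholder — substitute any 16 kHz mono recording of ATC speech):

```python
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="jayakrishnanmm/whisper-large-v3-atco2-asr",
)

# "atc_sample.wav" is a placeholder path, not a file shipped with this repo.
print(asr("atc_sample.wav")["text"])
```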
StaAhmed/QA_prompt
StaAhmed
2024-03-09T09:19:27Z
91
0
transformers
[ "transformers", "tensorboard", "safetensors", "gpt2", "text-generation", "generated_from_trainer", "base_model:StaAhmed/QA_prompt", "base_model:finetune:StaAhmed/QA_prompt", "license:mit", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-03-08T16:19:50Z
--- license: mit base_model: StaAhmed/QA_prompt tags: - generated_from_trainer model-index: - name: QA_prompt results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # QA_prompt This model is a fine-tuned version of [StaAhmed/QA_prompt](https://huggingface.co/StaAhmed/QA_prompt) on the None dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 2 - eval_batch_size: 16 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 8 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 0.03 - training_steps: 40 - mixed_precision_training: Native AMP ### Training results ### Framework versions - Transformers 4.38.1 - Pytorch 2.1.2 - Datasets 2.17.0 - Tokenizers 0.15.2
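As with the sections above, usage details are still missing; a minimal text-generation sketch (the prompt format is a guess — the card does not document how questions were templated during fine-tuning):

```python
from transformers import pipeline

generator = pipeline("text-generation", model="StaAhmed/QA_prompt")

prompt = "Question: What is the boiling point of water?\nAnswer:"  # assumed format
print(generator(prompt, max_new_tokens=32)[0]["generated_text"])
```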
Ashwini1412/wav2vec2-nepali-v1-itr-2
Ashwini1412
2024-03-09T09:07:17Z
5
0
transformers
[ "transformers", "safetensors", "wav2vec2", "automatic-speech-recognition", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2024-03-09T04:12:11Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
core-outline/opt-125m
core-outline
2024-03-09T08:52:03Z
92
0
transformers
[ "transformers", "safetensors", "opt", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-03-09T08:49:58Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
RadG/ppo-LunarLander-v2
RadG
2024-03-09T08:44:31Z
0
0
stable-baselines3
[ "stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2024-03-07T06:55:59Z
--- library_name: stable-baselines3 tags: - LunarLander-v2 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: PPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 metrics: - type: mean_reward value: 283.97 +/- 18.83 name: mean_reward verified: false --- # **PPO** Agent playing **LunarLander-v2** This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) A minimal loading sketch (the checkpoint filename is an assumption based on the usual SB3-Hub naming; adjust it to the actual file in this repo): ```python from huggingface_sb3 import load_from_hub from stable_baselines3 import PPO checkpoint = load_from_hub(repo_id="RadG/ppo-LunarLander-v2", filename="ppo-LunarLander-v2.zip") model = PPO.load(checkpoint) ```
c2p-cmd/PetsImageClassifier
c2p-cmd
2024-03-09T08:37:11Z
5
1
null
[ "coreml", "image-classification", "dataset:rokmr/pets", "license:mit", "region:us" ]
image-classification
2024-03-09T05:57:27Z
--- license: mit pipeline_tag: image-classification datasets: - rokmr/pets --- My first CoreML ImageClassifier model using the [pets dataset](https://huggingface.co/datasets/rokmr/pets) - **[Model Prediction]** - **Input**: - A Color Image (360 * 360) - **Output**: - Target: String - eg: "Cat" - Target Probability: Dictionary<String, Double> - ```json { "Cat" : 0.88, "Rabbit" : 0.0, "Dog" : 0.08 } ``` - **[Made Using]** - Apple's [**CreateML** ![image/png](https://developer.apple.com/assets/elements/icons/create-ml/create-ml-96x96_2x.png)](https://developer.apple.com/documentation/createml/mlimageclassifier) - **[Requirements]** - macOS 14.0+ - iOS 17.0+ - tvOS 17.0+ - **[Sample Usage]**: - [**GitHub**](https://github.com/c2p-cmd/PetsClassifierSwift/) - **[Prediction Results]** ![image/png](https://cdn-uploads.huggingface.co/production/uploads/656968f81d7c2ca7b7b112c8/cvRhux2-fPQ0Bp_SUpK7d.png)
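Outside Xcode, the model can also be exercised from Python via `coremltools` (macOS only). A sketch under stated assumptions — the `.mlmodel` filename and the `"image"` input key are guesses; inspect `model.get_spec()` for the real names:

```python
import coremltools as ct
from PIL import Image

# Assumed local filename for the downloaded Core ML model.
model = ct.models.MLModel("PetsImageClassifier.mlmodel")

img = Image.open("cat.jpg").resize((360, 360))  # model expects 360x360 color input
result = model.predict({"image": img})          # "image" input name is an assumption
print(result)  # e.g. a target label plus a per-class probability dictionary
```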
lamm-mit/BioinspiredZephyr-7B
lamm-mit
2024-03-09T08:33:37Z
55
1
transformers
[ "transformers", "safetensors", "gguf", "mistral", "text-generation", "conversational", "arxiv:2309.08788", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-03-08T20:15:58Z
--- license: apache-2.0 --- ### BioinspiredZephyr-7B: Large Language Model for the Mechanics of Biological and Bio-Inspired Materials To accelerate discovery and guide insights, we report an open-source autoregressive transformer large language model (LLM), trained on expert knowledge in the biological materials field, especially focused on mechanics and structural properties. The model is finetuned with a corpus of over a thousand peer-reviewed articles in the field of structural biological and bio-inspired materials and can be prompted to recall information, assist with research tasks, and function as an engine for creativity. The model is based on HuggingFaceH4/zephyr-7b-beta. ![image/png](https://cdn-uploads.huggingface.co/production/uploads/623ce1c6b66fedf374859fe7/bsqBByauWBZ0Y8PspthR8.png) This model is based on work reported in https://doi.org/10.1002/advs.202306724. This repository includes both Hugging Face transformers and GGUF files (in several quantizations; the q5_K_M version is recommended). #### Hugging Face transformers files: Loading and inference ``` import torch from transformers import AutoModelForCausalLM, AutoTokenizer from accelerate import infer_auto_device_map model_name = 'lamm-mit/BioinspiredZephyr-7B' model = AutoModelForCausalLM.from_pretrained( model_name, trust_remote_code=True, device_map="auto", #device_map="cuda:0", torch_dtype=torch.bfloat16, # use_flash_attention_2=True, ) tokenizer = AutoTokenizer.from_pretrained(model_name) ``` Chat template ``` messages = [ {"role": "system", "content": "You are a friendly materials scientist."}, {"role": "user", "content": "What is the strongest spider silk material?"}, {"role": "assistant", "content": "Sample response."}, ] prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True) ``` which renders to: '<|system|>\nYou are a friendly materials scientist.</s>\n<|user|>\nWhat is the strongest spider silk material?</s>\n<|assistant|>\nSample response.</s>\n<|assistant|>\n' ``` device='cuda' def generate_response (text_input="Biological materials offer amazing possibilities, such as", num_return_sequences=1, temperature=1., max_new_tokens=127, num_beams=1, top_k = 50, top_p =0.9,repetition_penalty=1.,eos_token_id=2,verbatim=False, exponential_decay_length_penalty_fac=None, ): inputs = tokenizer.encode(text_input, add_special_tokens =False, return_tensors ='pt') if verbatim: print ("Length of input, tokenized: ", inputs.shape, inputs) with torch.no_grad(): outputs = model.generate(input_ids=inputs.to(device), max_new_tokens=max_new_tokens, temperature=temperature, #value used to modulate the next token probabilities. num_beams=num_beams, top_k = top_k, top_p =top_p, num_return_sequences = num_return_sequences, eos_token_id=eos_token_id, do_sample =True, repetition_penalty=repetition_penalty, ) return tokenizer.batch_decode(outputs[:,inputs.shape[1]:].detach().cpu().numpy(), skip_special_tokens=True) ``` Then: ``` messages = [ {"role": "system", "content": "You are a friendly materials scientist."}, {"role": "user", "content": "What is the strongest spider silk material?"}, ] prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True) output_text=generate_response (text_input=prompt, eos_token_id=2, num_return_sequences=1, repetition_penalty=1., top_p=0.9, top_k=512, temperature=0.1,max_new_tokens=512, verbatim=False, ) print (output_text) ``` #### GGUF files: Loading and inference ``` from llama_cpp import Llama model_path='./BioinspiredZephyr-7B/ggml-model-q5_K_M.gguf' chat_format="mistral-instruct" llm = Llama(model_path=model_path, n_gpu_layers=-1,verbose= True, n_ctx=10000, #main_gpu=0, chat_format=chat_format, #split_mode=llama_cpp.LLAMA_SPLIT_LAYER ) ``` Or, download directly from Hugging Face: ``` from llama_cpp import Llama repo_id='lamm-mit/BioinspiredZephyr-7B' chat_format="mistral-instruct" llm = Llama.from_pretrained( repo_id=repo_id, filename="*q5_K_M.gguf", verbose=True, n_gpu_layers=-1, n_ctx=10000, #main_gpu=0, chat_format=chat_format, ) ``` For inference: ``` import time def generate_BioinspiredZephyr_7B(system_prompt='You are an expert in biological materials, mechanics and related topics.', prompt="What is spider silk?", temperature=0.0, max_tokens=10000, ): if system_prompt==None: messages=[ {"role": "user", "content": prompt}, ] else: messages=[ {"role": "system", "content": system_prompt}, {"role": "user", "content": prompt}, ] result=llm.create_chat_completion( messages=messages, temperature=temperature, max_tokens=max_tokens, ) return result start_time = time.time() result=generate_BioinspiredZephyr_7B(system_prompt='You respond accurately.', prompt="What is graphene? Answer with detail.", max_tokens=512, temperature=0.7, ) print (result) deltat=time.time() - start_time print("--- %s seconds ---" % deltat) print ("Tokens per second (generation): ", result['usage']['completion_tokens']/deltat) ``` arXiv: https://arxiv.org/abs/2309.08788
ahsannawazch/science_exam
ahsannawazch
2024-03-09T08:33:04Z
2
0
peft
[ "peft", "tensorboard", "safetensors", "trl", "sft", "generated_from_trainer", "dataset:generator", "base_model:mistralai/Mistral-7B-Instruct-v0.1", "base_model:adapter:mistralai/Mistral-7B-Instruct-v0.1", "license:apache-2.0", "region:us" ]
null
2024-03-09T08:33:03Z
--- license: apache-2.0 library_name: peft tags: - trl - sft - generated_from_trainer datasets: - generator base_model: mistralai/Mistral-7B-Instruct-v0.1 model-index: - name: science_exam results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # science_exam This model is a fine-tuned version of [mistralai/Mistral-7B-Instruct-v0.1](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.1) on the generator dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 5 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: constant - lr_scheduler_warmup_steps: 0.03 - num_epochs: 1 ### Training results ### Framework versions - PEFT 0.9.0 - Transformers 4.37.0 - Pytorch 2.1.2 - Datasets 2.18.0 - Tokenizers 0.15.1
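Since this repo holds a PEFT (LoRA) adapter rather than a full checkpoint, it must be attached to the base model at load time. A minimal sketch (the question prompt is illustrative; the [INST] tags follow the Mistral-Instruct chat format):

```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "mistralai/Mistral-7B-Instruct-v0.1"
base = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype=torch.bfloat16, device_map="auto")
model = PeftModel.from_pretrained(base, "ahsannawazch/science_exam")  # attaches the LoRA adapter
tokenizer = AutoTokenizer.from_pretrained(base_id)

inputs = tokenizer("[INST] What is photosynthesis? [/INST]", return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=64)[0], skip_special_tokens=True))
```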
lamm-mit/BioinspiredLLM
lamm-mit
2024-03-09T08:31:44Z
200
5
transformers
[ "transformers", "safetensors", "gguf", "llama", "text-generation", "biology", "materials science", "code", "scientific AI", "biological materials", "bioinspiration", "machine learning", "generative", "en", "arxiv:2309.08788", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2023-12-13T13:19:59Z
--- language: - en tags: - biology - materials science - code - scientific AI - biological materials - bioinspiration - machine learning - generative --- # BioinspiredLLM: Conversational Large Language Model for the Mechanics of Biological and Bio-Inspired Materials Reference: R. Luu and M.J. Buehler, "BioinspiredLLM: Conversational Large Language Model for the Mechanics of Biological and Bio-Inspired Materials," Adv. Science, 2023, DOI: https://doi.org/10.1002/advs.202306724 Abstract: The study of biological materials and bio-inspired materials science is well established; however, surprisingly little knowledge is systematically translated to engineering solutions. To accelerate discovery and guide insights, an open-source autoregressive transformer large language model (LLM), BioinspiredLLM, is reported. The model is finetuned with a corpus of over a thousand peer-reviewed articles in the field of structural biological and bio-inspired materials and can be prompted to recall information, assist with research tasks, and function as an engine for creativity. The model has proven that it is able to accurately recall information about biological materials and is further strengthened with enhanced reasoning ability, as well as with Retrieval-Augmented Generation (RAG) to incorporate new data during generation that can also help to trace back sources, update the knowledge base, and connect knowledge domains. BioinspiredLLM has also been shown to develop sound hypotheses regarding biological materials design, remarkably so for materials that have never been explicitly studied before. Lastly, the model shows impressive promise in collaborating with other generative artificial intelligence models in a workflow that can reshape the traditional materials design process. This collaborative generative artificial intelligence method can stimulate and enhance bio-inspired materials design workflows. Biological materials are at a critical intersection of multiple scientific fields and models like BioinspiredLLM help to connect knowledge domains. ![image/png](https://cdn-uploads.huggingface.co/production/uploads/623ce1c6b66fedf374859fe7/Xdp_nCYiF2IAPamG5ffIC.png) ``` import torch from transformers import AutoModelForCausalLM, AutoTokenizer from accelerate import infer_auto_device_map model = AutoModelForCausalLM.from_pretrained('lamm-mit/BioinspiredLLM') tokenizer = AutoTokenizer.from_pretrained('lamm-mit/BioinspiredLLM') ``` Variants of the model are included, featuring various GGUF versions for use with llama.cpp, for instance. In LM Studio, choose the "ChatML" prompt template. Generate: ``` device='cuda' def generate_response (text_input="Biological materials offer amazing", num_return_sequences=1, temperature=1., max_new_tokens=127, num_beams=1, top_k = 50, top_p =0.9,repetition_penalty=1.,eos_token_id=2,verbatim=False, exponential_decay_length_penalty_fac=None, ): inputs = tokenizer.encode(text_input, add_special_tokens =False, return_tensors ='pt') if verbatim: print ("Length of input, tokenized: ", inputs.shape, inputs) with torch.no_grad(): outputs = model.generate(input_ids=inputs.to(device), max_new_tokens=max_new_tokens, temperature=temperature, #value used to modulate the next token probabilities. 
num_beams=num_beams, top_k = top_k, top_p =top_p, num_return_sequences = num_return_sequences, eos_token_id=eos_token_id, do_sample =True, repetition_penalty=repetition_penalty, ) return tokenizer.batch_decode(outputs[:,inputs.shape[1]:].detach().cpu().numpy(), skip_special_tokens=True) ``` Generation example and prompt template Prompt template: ``` <|im_start|>system\n{system_prompt}<|im_end|>\n<|im_start|>user\n{user_message}<|im_end|>\n<|im_start|>assistant.... ``` Sample code: ``` system_prompt = "You are BioinspiredLLM. You are knowledgeable in biological and bio-inspired materials and provide accurate and qualitative insights about biological materials found in Nature. You are a cautious assistant. You think step by step. You carefully follow instructions." user_message = "What are hierarchical, biological materials?" txt = f"<|im_start|>system\n{system_prompt}<|im_end|>\n<|im_start|>user\n{user_message}<|im_end|>\n<|im_start|>assistant" # modulate temperature(0.1-1.0) to adjust 'creativity' # modulate max_new_tokens to change length of generated response output_text=generate_response ( text_input=txt,eos_token_id=2, num_return_sequences=1, repetition_penalty=1.1, top_p=0.95, top_k=50, temperature=0.1, max_new_tokens=512, verbatim=False, ) print(output_text) ``` Dataset: https://onlinelibrary.wiley.com/action/downloadSupplement?doi=10.1002%2Fadvs.202306724&file=advs7235-sup-0002-SuppMat.csv Performance: The figure below shows results from knowledge recall evaluation experiments of BioinspiredLLM: a) Total scores of each model, Llama 13b-chat (grey), Orca-2 13b (blue), Llama-BioLLM (orange), BioinspiredLLM (light green) and BioinspiredLLM with Retrieval-Augmented Generation (RAG) (dark green) on the 100-question biological materials exam; b) Scores on the exam separated by question category: general, specific, numerical, and non-biological; c) Retrieval-Augmented Generation (RAG) method framework and two examples of BioinspiredLLM's response when supplemented using RAG, additionally showing the source the retrieved content traces back to. This method allows tracing the origin of certain knowledge, ideas, or data used as BioinspiredLLM formulates its response. ![image/png](https://cdn-uploads.huggingface.co/production/uploads/623ce1c6b66fedf374859fe7/V8g9f0lCKb2HMepf7lGbM.png) ### Retrieval Augmented Generation (RAG) Example based on Llama Index. First, set up BioinspiredLLM as a custom LLM: ``` from llama_index.prompts.prompts import SimpleInputPrompt from llama_index import ( VectorStoreIndex, get_response_synthesizer,) from llama_index.retrievers import VectorIndexRetriever from llama_index.query_engine import RetrieverQueryEngine eos_token=32000 #system_prompt = "" system_prompt = "You are BioinspiredLLM. You are knowledgeable in biological and bio-inspired materials and provide accurate and qualitative insights about biological materials found in Nature. You are a cautious assistant. You think step by step. You carefully follow instructions." 
query_wrapper_prompt = SimpleInputPrompt( "<|im_start|>system\n"+system_prompt+"<|im_end|>\n<|im_start|>user\n{query_str}<|im_end|>\n<|im_start|>assistant") from llama_index.llms import HuggingFaceLLM llm_custom = HuggingFaceLLM(context_window=2048, max_new_tokens=300, query_wrapper_prompt=query_wrapper_prompt, stopping_ids=[eos_token, 2], model=model, generate_kwargs={"temperature": 0.1, "do_sample": True, "repetition_penalty":1.1, "top_p":0.95, "top_k":50, "eos_token_id": [eos_token, 2] , #"max_new_tokens": 1024, }, tokenizer=tokenizer) llm_custom.model_name='BioinspiredLLM' ``` Use the Chroma database collection (for the purpose of this example it has already been created; load it here): ``` import chromadb from llama_index import VectorStoreIndex, SimpleDirectoryReader from chromadb.config import Settings from llama_index.vector_stores import ChromaVectorStore from llama_index.storage.storage_context import StorageContext coll_name="Bioinspired" coll_path='./Bioinspired_Chroma' ## PATH TO CHROMA DATABASE client = chromadb.PersistentClient(path=coll_path) collection = client.get_collection (name=coll_name,) db2 = chromadb.PersistentClient(path=coll_path) chroma_collection = db2.get_or_create_collection(coll_name) vector_store = ChromaVectorStore(chroma_collection=chroma_collection) chroma_collection.count() ``` Set up custom LLM service context and vector store index: ``` from llama_index.llms import LlamaCPP from llama_index import ServiceContext from llama_index.llms.llama_utils import ( messages_to_prompt, completion_to_prompt, ) service_context = ServiceContext.from_defaults( llm=llm_custom, chunk_size=1024, embed_model="local:BAAI/bge-large-en" ) index = VectorStoreIndex.from_vector_store( vector_store, service_context=service_context, ) ``` Set up query engine: ``` from IPython.display import Markdown, display query_engine = index.as_query_engine( #response_mode="tree_summarize", #response_mode='compact', #response_mode='accumulate', #streaming=True, similarity_top_k=5, ) question = "Which horn does not have tubules? A) big horn sheep B) pronghorn C) mountain goat" response = query_engine.query(question) display(Markdown(f"<b>{response}</b>")) ``` Alternatively, load new documents, here with all-mpnet-base-v2 embeddings: ``` from langchain.embeddings import HuggingFaceEmbeddings embeddings = HuggingFaceEmbeddings( model_name="sentence-transformers/all-mpnet-base-v2", ) documents_graph = SimpleDirectoryReader( input_files=[ "./XXXXXXXXXX/XXXXX.pdf", ] ).load_data() index_doc = VectorStoreIndex.from_documents(documents_graph, service_context= service_context, show_progress=True, embeddings=embeddings, ) ``` Query: ``` question="Which rapid prototyping techniques would be useful for creating hierarchical, bio-inspired materials?" response = index_doc.as_query_engine(service_context=service_context, response_mode="tree_summarize", similarity_top_k=5, ).query(question, ) print(response) ```
For details, see: https://onlinelibrary.wiley.com/doi/full/10.1002/advs.202306724 Orca 2 is licensed under the Microsoft Research License (https://huggingface.co/microsoft/Orca-2-13b/blob/main/LICENSE). Llama 2 is licensed under the LLAMA 2 Community License (https://ai.meta.com/llama/license/). #### Bias, Risks, and Limitations As with all modeling techniques, errors are possible. The base Llama 2 and Orca 2 models were aligned to not spread misinformation and to produce safer responses. As a result, BioinspiredLLM has inherited these traits and performs reasonably well in these dimensions. However, it remains of utmost importance for researchers to verify responses and avoid propagating errors. To minimize the risk of mistakes, employing chain-of-thought prompting and the RAG methods introduced above proves beneficial. Additionally, the system prompt of BioinspiredLLM can be edited to guide context. For further details, see the main paper. BioinspiredLLM, built upon the LLaMA 2 and Orca-2 model family, retains many of their limitations, as well as the common limitations of other large language models and limitations caused by its training process, including: Data Biases: Large language models, trained on extensive data, can inadvertently carry biases present in the source data. Consequently, the models may generate outputs that could be potentially biased or unfair. Lack of Contextual Understanding: Despite their impressive capabilities in language understanding and generation, these models exhibit limited real-world understanding, resulting in potential inaccuracies or nonsensical responses. Lack of Transparency: Due to their complexity and size, large language models can act as “black boxes”, making it difficult to comprehend the rationale behind specific outputs or decisions. We recommend reviewing the transparency notes from Azure for more information. Content Harms: There are various types of content harms that large language models can cause. It is important to be aware of them when using these models, and to take actions to prevent them. It is recommended to leverage various content moderation services provided by different companies and institutions. On an important note, we hope for better regulations and standards from government and technology leaders around content harms for AI technologies in the future. We value and acknowledge the important role that research and the open source community can play in this direction. Hallucination: It is important to be cautious and not rely entirely on a given language model for critical decisions or information that might have deep impact, as it is not obvious how to prevent these models from fabricating content. Moreover, it is not clear whether small models may be more susceptible to hallucination in ungrounded generation use cases due to their smaller sizes and hence reduced memorization capacities. This is an active research topic and we hope there will be more rigorous measurement, understanding and mitigations around this topic. Potential for Misuse: Without suitable safeguards, there is a risk that these models could be maliciously used for generating disinformation or harmful content. This model is solely designed for research settings, and its testing has only been carried out in such environments. It should not be used in downstream applications, as additional analysis is needed to assess potential harm or bias in the proposed application.
alinerodrigues/wav2vec2-large-xlsr-mecita-coraa-portuguese-all-grade-3-4-5
alinerodrigues
2024-03-09T08:31:30Z
1
0
transformers
[ "transformers", "pytorch", "wav2vec2", "automatic-speech-recognition", "generated_from_trainer", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2024-03-09T05:15:47Z
--- license: apache-2.0 tags: - generated_from_trainer metrics: - wer model-index: - name: wav2vec2-large-xlsr-mecita-coraa-portuguese-all-grade-3-4-5 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # wav2vec2-large-xlsr-mecita-coraa-portuguese-all-grade-3-4-5 This model is a fine-tuned version of [Edresson/wav2vec2-large-xlsr-coraa-portuguese](https://huggingface.co/Edresson/wav2vec2-large-xlsr-coraa-portuguese) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.1804 - Wer: 0.0934 - Cer: 0.0344 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 3e-05 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 32 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 100 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | Cer | |:-------------:|:-----:|:----:|:---------------:|:------:|:------:| | 18.622 | 0.99 | 45 | 3.5340 | 1.0 | 1.0 | | 18.622 | 2.0 | 91 | 3.0466 | 1.0 | 1.0 | | 5.9486 | 2.99 | 136 | 2.9384 | 1.0 | 1.0 | | 5.9486 | 4.0 | 182 | 2.8979 | 1.0 | 1.0 | | 2.9535 | 4.99 | 227 | 2.8949 | 1.0 | 1.0 | | 2.9535 | 6.0 | 273 | 2.8622 | 1.0 | 1.0 | | 2.873 | 6.99 | 318 | 2.5800 | 1.0 | 1.0 | | 2.873 | 8.0 | 364 | 1.2947 | 0.7626 | 0.2079 | | 2.0466 | 8.99 | 409 | 0.6663 | 0.2838 | 0.0827 | | 2.0466 | 10.0 | 455 | 0.4771 | 0.2232 | 0.0664 | | 0.8416 | 10.99 | 500 | 0.3837 | 0.1743 | 0.0550 | | 0.8416 | 12.0 | 546 | 0.3345 | 0.1548 | 0.0510 | | 0.8416 | 12.99 | 591 | 0.3015 | 0.1579 | 0.0503 | | 0.5544 | 14.0 | 637 | 0.2765 | 0.1363 | 0.0464 | | 0.5544 | 14.99 | 682 | 0.2634 | 0.1416 | 0.0478 | | 0.4661 | 16.0 | 728 | 0.2540 | 0.1335 | 0.0465 | | 0.4661 | 16.99 | 773 | 0.2394 | 0.1315 | 0.0447 | | 0.3776 | 18.0 | 819 | 0.2307 | 0.1273 | 0.0442 | | 0.3776 | 18.99 | 864 | 0.2245 | 0.1248 | 0.0433 | | 0.3246 | 20.0 | 910 | 0.2207 | 0.1220 | 0.0420 | | 0.3246 | 20.99 | 955 | 0.2179 | 0.1130 | 0.0411 | | 0.3223 | 22.0 | 1001 | 0.2134 | 0.1105 | 0.0401 | | 0.3223 | 22.99 | 1046 | 0.2121 | 0.1109 | 0.0406 | | 0.3223 | 24.0 | 1092 | 0.2072 | 0.1032 | 0.0380 | | 0.2904 | 24.99 | 1137 | 0.1998 | 0.0983 | 0.0371 | | 0.2904 | 26.0 | 1183 | 0.1990 | 0.0980 | 0.0366 | | 0.2392 | 26.99 | 1228 | 0.2000 | 0.0955 | 0.0374 | | 0.2392 | 28.0 | 1274 | 0.1937 | 0.0962 | 0.0361 | | 0.2601 | 28.99 | 1319 | 0.1940 | 0.0973 | 0.0369 | | 0.2601 | 30.0 | 1365 | 0.1931 | 0.0952 | 0.0373 | | 0.2251 | 30.99 | 1410 | 0.1911 | 0.0966 | 0.0367 | | 0.2251 | 32.0 | 1456 | 0.1940 | 0.0983 | 0.0371 | | 0.2197 | 32.99 | 1501 | 0.1894 | 0.0976 | 0.0364 | | 0.2197 | 34.0 | 1547 | 0.1893 | 0.0969 | 0.0373 | | 0.2197 | 34.99 | 1592 | 0.1865 | 0.0969 | 0.0362 | | 0.2119 | 36.0 | 1638 | 0.1877 | 0.0952 | 0.0360 | | 0.2119 | 36.99 | 1683 | 0.1894 | 0.0973 | 0.0367 | | 0.2268 | 38.0 | 1729 | 0.1893 | 0.0955 | 0.0363 | | 0.2268 | 38.99 | 1774 | 0.1869 | 0.0952 | 0.0353 | | 0.2011 | 40.0 | 1820 | 0.1873 | 0.0955 | 0.0360 | | 0.2011 | 40.99 | 1865 | 0.1849 | 0.0969 | 0.0368 | | 0.1698 | 42.0 | 1911 | 0.1850 | 0.0966 | 0.0366 | | 0.1698 | 42.99 | 1956 | 0.1813 | 0.0945 | 0.0351 | | 0.1957 | 44.0 | 2002 | 0.1840 | 0.0934 | 0.0350 | | 0.1957 | 44.99 | 2047 | 0.1834 | 0.0910 | 0.0346 | | 0.1957 | 46.0 | 2093 | 0.1823 | 0.0952 | 0.0356 | | 0.2013 | 46.99 | 2138 | 0.1830 | 0.0921 | 0.0350 | | 0.2013 | 48.0 | 2184 | 0.1839 | 0.0948 | 0.0355 | | 0.1776 | 48.99 | 2229 | 0.1832 | 0.0934 | 0.0349 | | 0.1776 | 50.0 | 2275 | 0.1837 | 0.0924 | 0.0350 | | 0.1655 | 50.99 | 2320 | 0.1856 | 0.0927 | 0.0350 | | 0.1655 | 52.0 | 2366 | 0.1856 | 0.0910 | 0.0340 | | 0.1513 | 52.99 | 2411 | 0.1845 | 0.0921 | 0.0346 | | 0.1513 | 54.0 | 2457 | 0.1834 | 0.0952 | 0.0343 | | 0.154 | 54.99 | 2502 | 0.1818 | 0.0934 | 0.0341 | | 0.154 | 56.0 | 2548 | 0.1804 | 0.0934 | 0.0344 | | 0.154 | 56.99 | 2593 | 0.1812 | 0.0952 | 0.0346 | | 0.1608 | 58.0 | 2639 | 0.1830 | 0.0938 | 0.0348 | | 0.1608 | 58.99 | 2684 | 0.1851 | 0.0934 | 0.0348 | | 0.1431 | 60.0 | 2730 | 0.1850 | 0.0983 | 0.0357 | | 0.1431 | 60.99 | 2775 | 0.1865 | 0.0952 | 0.0348 | | 0.1528 | 62.0 | 2821 | 0.1849 | 0.0948 | 0.0347 | | 0.1528 | 62.99 | 2866 | 0.1835 | 0.0945 | 0.0350 | | 0.1463 | 64.0 | 2912 | 0.1852 | 0.0921 | 0.0343 | | 0.1463 | 64.99 | 2957 | 0.1828 | 0.0959 | 0.0354 | | 0.1486 | 66.0 | 3003 | 0.1853 | 0.0934 | 0.0346 | | 0.1486 | 66.99 | 3048 | 0.1847 | 0.0952 | 0.0349 | | 0.1486 | 68.0 | 3094 | 0.1838 | 0.0948 | 0.0346 | | 0.1415 | 68.99 | 3139 | 0.1841 | 0.0945 | 0.0345 | | 0.1415 | 70.0 | 3185 | 0.1831 | 0.0945 | 0.0348 | | 0.1543 | 70.99 | 3230 | 0.1845 | 0.0927 | 0.0343 | | 0.1543 | 72.0 | 3276 | 0.1856 | 0.0952 | 0.0349 | | 0.1411 | 72.99 | 3321 | 0.1842 | 0.0948 | 0.0351 | | 0.1411 | 74.0 | 3367 | 0.1854 | 0.0938 | 0.0349 | | 0.1432 | 74.99 | 3412 | 0.1859 | 0.0938 | 0.0354 | | 0.1432 | 76.0 | 3458 | 0.1857 | 0.0955 | 0.0356 | ### Framework versions - Transformers 4.28.0 - Pytorch 2.2.1+cu121 - Datasets 2.17.0 - Tokenizers 0.13.3
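The descriptive sections of this card are still empty; a minimal transcription sketch with the CTC head (the audio array below is a silent placeholder so the snippet runs end-to-end — substitute a real 16 kHz mono recording):

```python
import numpy as np
import torch
from transformers import AutoModelForCTC, AutoProcessor

repo = "alinerodrigues/wav2vec2-large-xlsr-mecita-coraa-portuguese-all-grade-3-4-5"
processor = AutoProcessor.from_pretrained(repo)
model = AutoModelForCTC.from_pretrained(repo)

audio = np.zeros(16000, dtype=np.float32)  # placeholder: one second of silence
inputs = processor(audio, sampling_rate=16000, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
ids = torch.argmax(logits, dim=-1)
print(processor.batch_decode(ids))
```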
dyyyyyyyy/GNER-T5-base
dyyyyyyyy
2024-03-09T08:23:27Z
100
1
transformers
[ "transformers", "safetensors", "t5", "text2text-generation", "en", "dataset:Universal-NER/Pile-NER-type", "arxiv:2402.16602", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text2text-generation
2024-02-27T08:54:57Z
---
license: apache-2.0
datasets:
- Universal-NER/Pile-NER-type
language:
- en
metrics:
- f1
library_name: transformers
pipeline_tag: text2text-generation
---

<p align="center"><h2 align="center">Rethinking Negative Instances for Generative Named Entity Recognition</h2></p>

# Model Card for GNER-T5-base

<!-- Provide a quick summary of what the model is/does. -->

We introduce GNER, a **G**enerative **N**amed **E**ntity **R**ecognition framework, which demonstrates enhanced zero-shot capabilities across unseen entity domains. Experiments on two representative generative models, i.e., LLaMA and Flan-T5, show that the integration of negative instances into the training process yields substantial performance enhancements. The resulting models, GNER-LLaMA and GNER-T5, outperform state-of-the-art (SoTA) approaches by a large margin, achieving improvements of 8 and 11 points in $F_1$ score, respectively. Code and models are publicly available.

* 💻 Code: [https://github.com/yyDing1/GNER/](https://github.com/yyDing1/GNER/)
* 📖 Paper: [Rethinking Negative Instances for Generative Named Entity Recognition](https://arxiv.org/abs/2402.16602)
* 💾 Models in the 🤗 HuggingFace Hub: [GNER-Models](https://huggingface.co/collections/dyyyyyyyy/gner-65dda2cb96c6e35c814dea56)
* 🧪 Reproduction Materials: [Reproduction Materials](https://drive.google.com/drive/folders/1m2FqDgItEbSoeUVo-i18AwMvBcNkZD46?usp=drive_link)
* 🎨 Example Jupyter Notebooks: [GNER Notebook](https://github.com/yyDing1/GNER/blob/main/notebook.ipynb)

<p align="center">
<img src="https://github.com/yyDing1/GNER/raw/main/assets/zero_shot_results.png">
</p>

## Pretrained Models

We release five GNER models based on LLaMA (7B) and Flan-T5 (base, large, xl and xxl).

| Model         | # Params | Zero-shot Average $F_1$ | Supervised Average $F_1$ | 🤗 HuggingFace<br />Download Link |
| ------------- | -------: | :---------------------: | :----------------------: | :------------------------------: |
| GNER-LLaMA    | 7B       | 66.1                    | 86.09                    | [link](https://huggingface.co/dyyyyyyyy/GNER-LLaMA-7B) |
| GNER-T5-base  | 248M     | 59.5                    | 83.21                    | [link](https://huggingface.co/dyyyyyyyy/GNER-T5-base) |
| GNER-T5-large | 783M     | 63.5                    | 85.45                    | [link](https://huggingface.co/dyyyyyyyy/GNER-T5-large) |
| GNER-T5-xl    | 3B       | 66.1                    | 85.94                    | [link](https://huggingface.co/dyyyyyyyy/GNER-T5-xl) |
| GNER-T5-xxl   | 11B      | 69.1                    | 86.15                    | [link](https://huggingface.co/dyyyyyyyy/GNER-T5-xxl) |

## Demo usage

First install the dependencies:

```bash
pip install torch datasets deepspeed accelerate transformers protobuf
```

Please check out the [Example Jupyter Notebooks](https://github.com/yyDing1/GNER/blob/main/notebook.ipynb) for guidance on utilizing GNER models. A simple inference example using `GNER-T5` is shown below:

```python
>>> import torch
>>> from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
>>> tokenizer = AutoTokenizer.from_pretrained("dyyyyyyyy/GNER-T5-xxl")
>>> model = AutoModelForSeq2SeqLM.from_pretrained("dyyyyyyyy/GNER-T5-xxl", torch_dtype=torch.bfloat16).cuda()
>>> model = model.eval()
>>> instruction_template = "Please analyze the sentence provided, identifying the type of entity for each word on a token-by-token basis.\nOutput format is: word_1(label_1), word_2(label_2), ...\nWe'll use the BIO-format to label the entities, where:\n1. B- (Begin) indicates the start of a named entity.\n2. I- (Inside) is used for words within a named entity but are not the first word.\n3. O (Outside) denotes words that are not part of a named entity.\n"
>>> sentence = "did george clooney make a musical in the 1980s"
>>> entity_labels = ["genre", "rating", "review", "plot", "song", "average ratings", "director", "character", "trailer", "year", "actor", "title"]
>>> instruction = f"{instruction_template}\nUse the specific entity tags: {', '.join(entity_labels)} and O.\nSentence: {sentence}"
>>> inputs = tokenizer(instruction, return_tensors="pt").to("cuda")
>>> outputs = model.generate(**inputs, max_new_tokens=640)
>>> response = tokenizer.decode(outputs[0], skip_special_tokens=True)
>>> print(response)
"did(O) george(B-actor) clooney(I-actor) make(O) a(O) musical(B-genre) in(O) the(O) 1980s(B-year)"
```

## Citation

```bibtex
@misc{ding2024rethinking,
      title={Rethinking Negative Instances for Generative Named Entity Recognition},
      author={Yuyang Ding and Juntao Li and Pinzheng Wang and Zecheng Tang and Bowen Yan and Min Zhang},
      year={2024},
      eprint={2402.16602},
      archivePrefix={arXiv},
      primaryClass={cs.CL}
}
```
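## Parsing the output (illustrative)

The response encodes predictions inline as `word(label)` pairs, so downstream use typically requires converting it back into entity spans. Below is a minimal sketch of such a parser; the function name and regex are illustrative, not part of the GNER codebase, and the parser assumes well-formed output.

```python
import re

def parse_bio_response(response: str):
    """Convert a 'word(label)' response into (entity, type) spans.

    A minimal illustrative parser; it assumes well-formed output and does
    not handle words that themselves contain parentheses.
    """
    pairs = re.findall(r"(\S+)\(([^)]+)\)", response)
    entities, current_tokens, current_type = [], [], None
    for word, label in pairs:
        if label.startswith("B-"):
            if current_tokens:
                entities.append((" ".join(current_tokens), current_type))
            current_tokens, current_type = [word], label[2:]
        elif label.startswith("I-") and current_tokens:
            current_tokens.append(word)
        else:  # 'O' (or a stray I- tag) closes any open entity
            if current_tokens:
                entities.append((" ".join(current_tokens), current_type))
            current_tokens, current_type = [], None
    if current_tokens:
        entities.append((" ".join(current_tokens), current_type))
    return entities

print(parse_bio_response(
    "did(O) george(B-actor) clooney(I-actor) make(O) a(O) musical(B-genre) in(O) the(O) 1980s(B-year)"
))
# [('george clooney', 'actor'), ('musical', 'genre'), ('1980s', 'year')]
```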
dyyyyyyyy/GNER-T5-large
dyyyyyyyy
2024-03-09T08:23:13Z
93
1
transformers
[ "transformers", "safetensors", "t5", "text2text-generation", "en", "dataset:Universal-NER/Pile-NER-type", "arxiv:2402.16602", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text2text-generation
2024-02-27T08:55:15Z
---
license: apache-2.0
datasets:
- Universal-NER/Pile-NER-type
language:
- en
metrics:
- f1
library_name: transformers
pipeline_tag: text2text-generation
---

<p align="center"><h2 align="center">Rethinking Negative Instances for Generative Named Entity Recognition</h2></p>

# Model Card for GNER-T5-large

<!-- Provide a quick summary of what the model is/does. -->

We introduce GNER, a **G**enerative **N**amed **E**ntity **R**ecognition framework, which demonstrates enhanced zero-shot capabilities across unseen entity domains. Experiments on two representative generative models, i.e., LLaMA and Flan-T5, show that the integration of negative instances into the training process yields substantial performance enhancements. The resulting models, GNER-LLaMA and GNER-T5, outperform state-of-the-art (SoTA) approaches by a large margin, achieving improvements of 8 and 11 points in $F_1$ score, respectively. Code and models are publicly available.

* 💻 Code: [https://github.com/yyDing1/GNER/](https://github.com/yyDing1/GNER/)
* 📖 Paper: [Rethinking Negative Instances for Generative Named Entity Recognition](https://arxiv.org/abs/2402.16602)
* 💾 Models in the 🤗 HuggingFace Hub: [GNER-Models](https://huggingface.co/collections/dyyyyyyyy/gner-65dda2cb96c6e35c814dea56)
* 🧪 Reproduction Materials: [Reproduction Materials](https://drive.google.com/drive/folders/1m2FqDgItEbSoeUVo-i18AwMvBcNkZD46?usp=drive_link)
* 🎨 Example Jupyter Notebooks: [GNER Notebook](https://github.com/yyDing1/GNER/blob/main/notebook.ipynb)

<p align="center">
<img src="https://github.com/yyDing1/GNER/raw/main/assets/zero_shot_results.png">
</p>

## Pretrained Models

We release five GNER models based on LLaMA (7B) and Flan-T5 (base, large, xl and xxl).

| Model         | # Params | Zero-shot Average $F_1$ | Supervised Average $F_1$ | 🤗 HuggingFace<br />Download Link |
| ------------- | -------: | :---------------------: | :----------------------: | :------------------------------: |
| GNER-LLaMA    | 7B       | 66.1                    | 86.09                    | [link](https://huggingface.co/dyyyyyyyy/GNER-LLaMA-7B) |
| GNER-T5-base  | 248M     | 59.5                    | 83.21                    | [link](https://huggingface.co/dyyyyyyyy/GNER-T5-base) |
| GNER-T5-large | 783M     | 63.5                    | 85.45                    | [link](https://huggingface.co/dyyyyyyyy/GNER-T5-large) |
| GNER-T5-xl    | 3B       | 66.1                    | 85.94                    | [link](https://huggingface.co/dyyyyyyyy/GNER-T5-xl) |
| GNER-T5-xxl   | 11B      | 69.1                    | 86.15                    | [link](https://huggingface.co/dyyyyyyyy/GNER-T5-xxl) |

## Demo usage

First install the dependencies:

```bash
pip install torch datasets deepspeed accelerate transformers protobuf
```

Please check out the [Example Jupyter Notebooks](https://github.com/yyDing1/GNER/blob/main/notebook.ipynb) for guidance on utilizing GNER models. A simple inference example using `GNER-T5` is shown below:

```python
>>> import torch
>>> from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
>>> tokenizer = AutoTokenizer.from_pretrained("dyyyyyyyy/GNER-T5-xxl")
>>> model = AutoModelForSeq2SeqLM.from_pretrained("dyyyyyyyy/GNER-T5-xxl", torch_dtype=torch.bfloat16).cuda()
>>> model = model.eval()
>>> instruction_template = "Please analyze the sentence provided, identifying the type of entity for each word on a token-by-token basis.\nOutput format is: word_1(label_1), word_2(label_2), ...\nWe'll use the BIO-format to label the entities, where:\n1. B- (Begin) indicates the start of a named entity.\n2. I- (Inside) is used for words within a named entity but are not the first word.\n3. O (Outside) denotes words that are not part of a named entity.\n"
>>> sentence = "did george clooney make a musical in the 1980s"
>>> entity_labels = ["genre", "rating", "review", "plot", "song", "average ratings", "director", "character", "trailer", "year", "actor", "title"]
>>> instruction = f"{instruction_template}\nUse the specific entity tags: {', '.join(entity_labels)} and O.\nSentence: {sentence}"
>>> inputs = tokenizer(instruction, return_tensors="pt").to("cuda")
>>> outputs = model.generate(**inputs, max_new_tokens=640)
>>> response = tokenizer.decode(outputs[0], skip_special_tokens=True)
>>> print(response)
"did(O) george(B-actor) clooney(I-actor) make(O) a(O) musical(B-genre) in(O) the(O) 1980s(B-year)"
```

## Citation

```bibtex
@misc{ding2024rethinking,
      title={Rethinking Negative Instances for Generative Named Entity Recognition},
      author={Yuyang Ding and Juntao Li and Pinzheng Wang and Zecheng Tang and Bowen Yan and Min Zhang},
      year={2024},
      eprint={2402.16602},
      archivePrefix={arXiv},
      primaryClass={cs.CL}
}
```
Weni/ZeroShot-3.3.33-Mistral-7b-Multilanguage-3.2.0-merged
Weni
2024-03-09T08:13:45Z
5
0
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-03-09T07:38:54Z
---
library_name: transformers
tags: []
---

# Model Card for Model ID

<!-- Provide a quick summary of what the model is/does. -->

## Model Details

### Model Description

<!-- Provide a longer summary of what this model is. -->

This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.

- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]

### Model Sources [optional]

<!-- Provide the basic links for the model. -->

- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]

## Uses

<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->

### Direct Use

<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->

[More Information Needed]

### Downstream Use [optional]

<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->

[More Information Needed]

### Out-of-Scope Use

<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->

[More Information Needed]

## Bias, Risks, and Limitations

<!-- This section is meant to convey both technical and sociotechnical limitations. -->

[More Information Needed]

### Recommendations

<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->

Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.

## How to Get Started with the Model

Use the code below to get started with the model.

[More Information Needed]

## Training Details

### Training Data

<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->

[More Information Needed]

### Training Procedure

<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->

#### Preprocessing [optional]

[More Information Needed]

#### Training Hyperparameters

- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->

#### Speeds, Sizes, Times [optional]

<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->

[More Information Needed]

## Evaluation

<!-- This section describes the evaluation protocols and provides the results. -->

### Testing Data, Factors & Metrics

#### Testing Data

<!-- This should link to a Dataset Card if possible. -->

[More Information Needed]

#### Factors

<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->

[More Information Needed]

#### Metrics

<!-- These are the evaluation metrics being used, ideally with a description of why. -->

[More Information Needed]

### Results

[More Information Needed]

#### Summary

## Model Examination [optional]

<!-- Relevant interpretability work for the model goes here -->

[More Information Needed]

## Environmental Impact

<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->

Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).

- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]

## Technical Specifications [optional]

### Model Architecture and Objective

[More Information Needed]

### Compute Infrastructure

[More Information Needed]

#### Hardware

[More Information Needed]

#### Software

[More Information Needed]

## Citation [optional]

<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->

**BibTeX:**

[More Information Needed]

**APA:**

[More Information Needed]

## Glossary [optional]

<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->

[More Information Needed]

## More Information [optional]

[More Information Needed]

## Model Card Authors [optional]

[More Information Needed]

## Model Card Contact

[More Information Needed]
meowkhoa/gpt2_viet_poem_generation
meowkhoa
2024-03-09T08:13:27Z
103
0
transformers
[ "transformers", "tensorboard", "safetensors", "gpt2", "text-generation", "generated_from_trainer", "base_model:danghuy1999/gpt2-viwiki", "base_model:finetune:danghuy1999/gpt2-viwiki", "license:mit", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-03-09T08:09:55Z
---
license: mit
base_model: danghuy1999/gpt2-viwiki
tags:
- generated_from_trainer
model-index:
- name: gpt2_viet_poem_generation
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# gpt2_viet_poem_generation

This model is a fine-tuned version of [danghuy1999/gpt2-viwiki](https://huggingface.co/danghuy1999/gpt2-viwiki) on the None dataset.

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
- mixed_precision_training: Native AMP

### Training results

### Framework versions

- Transformers 4.35.2
- Pytorch 2.1.0+cu121
- Datasets 2.16.1
- Tokenizers 0.15.2
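### Inference example (illustrative)

Since the card ships without usage instructions, here is a minimal inference sketch for this Vietnamese poem generator. The opening line and the sampling settings are illustrative assumptions, not values documented by the author:

```python
from transformers import pipeline

# Load the fine-tuned checkpoint from the Hub.
generator = pipeline("text-generation", model="meowkhoa/gpt2_viet_poem_generation")

# The prompt is an arbitrary example ("Autumn, leaves are falling");
# sampling parameters are guesses, tune them to taste.
poem = generator(
    "Mùa thu lá rơi",
    max_new_tokens=64,
    do_sample=True,
    top_p=0.95,
    temperature=0.9,
)
print(poem[0]["generated_text"])
```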
iambestfeed/phobert_v2_retrieval
iambestfeed
2024-03-09T08:09:20Z
48
0
sentence-transformers
[ "sentence-transformers", "safetensors", "roberta", "feature-extraction", "sentence-similarity", "autotrain_compatible", "endpoints_compatible", "region:us" ]
sentence-similarity
2024-03-09T04:31:48Z
---
library_name: sentence-transformers
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
---

# iambestfeed/phobert_v2_retrieval

This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.

<!--- Describe your model here -->

## Usage (Sentence-Transformers)

Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:

```
pip install -U sentence-transformers
```

Then you can use the model like this:

```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]

model = SentenceTransformer('iambestfeed/phobert_v2_retrieval')
embeddings = model.encode(sentences)
print(embeddings)
```

## Evaluation Results

<!--- Describe how your model was evaluated -->

For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=iambestfeed/phobert_v2_retrieval)

## Full Model Architecture

```
SentenceTransformer(
  (0): Transformer({'max_seq_length': 258, 'do_lower_case': False}) with Transformer model: RobertaModel
  (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
  (2): Normalize()
)
```

## Citing & Authors

<!--- Describe where people can find more information -->
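## Semantic search example (illustrative)

The card mentions semantic search, so a small retrieval sketch may help. Everything below is illustrative: the corpus and query are invented, and since the underlying encoder is a PhoBERT-style `RobertaModel`, Vietnamese input presumably needs to be word-segmented the same way it was during training (e.g. with VnCoreNLP), which this card does not document.

```python
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("iambestfeed/phobert_v2_retrieval")

# Toy word-segmented corpus and query (illustrative only).
corpus = ["Hà_Nội là thủ_đô của Việt_Nam", "Tôi thích ăn phở"]
query = "Thủ_đô của Việt_Nam là gì ?"

corpus_emb = model.encode(corpus, convert_to_tensor=True)
query_emb = model.encode(query, convert_to_tensor=True)

# Cosine similarity between the query and every corpus sentence.
scores = util.cos_sim(query_emb, corpus_emb)[0]
best = scores.argmax().item()
print(corpus[best], float(scores[best]))
```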
weichen20000/zheng
weichen20000
2024-03-09T08:06:10Z
0
0
null
[ "license:other", "region:us" ]
null
2024-03-09T08:06:10Z
---
license: other
license_name: zheng
license_link: LICENSE
---
LLMKC/gemma-Code-Instruct-Finetune-test
LLMKC
2024-03-09T07:38:08Z
91
0
transformers
[ "transformers", "safetensors", "gemma", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-03-09T07:32:29Z
---
library_name: transformers
tags: []
---

# Model Card for Model ID

<!-- Provide a quick summary of what the model is/does. -->

## Model Details

### Model Description

<!-- Provide a longer summary of what this model is. -->

This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.

- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]

### Model Sources [optional]

<!-- Provide the basic links for the model. -->

- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]

## Uses

<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->

### Direct Use

<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->

[More Information Needed]

### Downstream Use [optional]

<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->

[More Information Needed]

### Out-of-Scope Use

<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->

[More Information Needed]

## Bias, Risks, and Limitations

<!-- This section is meant to convey both technical and sociotechnical limitations. -->

[More Information Needed]

### Recommendations

<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->

Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.

## How to Get Started with the Model

Use the code below to get started with the model.

[More Information Needed]

## Training Details

### Training Data

<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->

[More Information Needed]

### Training Procedure

<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->

#### Preprocessing [optional]

[More Information Needed]

#### Training Hyperparameters

- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->

#### Speeds, Sizes, Times [optional]

<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->

[More Information Needed]

## Evaluation

<!-- This section describes the evaluation protocols and provides the results. -->

### Testing Data, Factors & Metrics

#### Testing Data

<!-- This should link to a Dataset Card if possible. -->

[More Information Needed]

#### Factors

<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->

[More Information Needed]

#### Metrics

<!-- These are the evaluation metrics being used, ideally with a description of why. -->

[More Information Needed]

### Results

[More Information Needed]

#### Summary

## Model Examination [optional]

<!-- Relevant interpretability work for the model goes here -->

[More Information Needed]

## Environmental Impact

<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->

Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).

- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]

## Technical Specifications [optional]

### Model Architecture and Objective

[More Information Needed]

### Compute Infrastructure

[More Information Needed]

#### Hardware

[More Information Needed]

#### Software

[More Information Needed]

## Citation [optional]

<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->

**BibTeX:**

[More Information Needed]

**APA:**

[More Information Needed]

## Glossary [optional]

<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->

[More Information Needed]

## More Information [optional]

[More Information Needed]

## Model Card Authors [optional]

[More Information Needed]

## Model Card Contact

[More Information Needed]
Entwickler/dummy-model
Entwickler
2024-03-09T07:36:42Z
175
0
transformers
[ "transformers", "safetensors", "camembert", "fill-mask", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2024-03-09T07:34:39Z
---
library_name: transformers
tags: []
---

# Model Card for Model ID

<!-- Provide a quick summary of what the model is/does. -->

## Model Details

### Model Description

<!-- Provide a longer summary of what this model is. -->

This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.

- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]

### Model Sources [optional]

<!-- Provide the basic links for the model. -->

- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]

## Uses

<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->

### Direct Use

<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->

[More Information Needed]

### Downstream Use [optional]

<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->

[More Information Needed]

### Out-of-Scope Use

<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->

[More Information Needed]

## Bias, Risks, and Limitations

<!-- This section is meant to convey both technical and sociotechnical limitations. -->

[More Information Needed]

### Recommendations

<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->

Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.

## How to Get Started with the Model

Use the code below to get started with the model.

[More Information Needed]

## Training Details

### Training Data

<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->

[More Information Needed]

### Training Procedure

<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->

#### Preprocessing [optional]

[More Information Needed]

#### Training Hyperparameters

- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->

#### Speeds, Sizes, Times [optional]

<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->

[More Information Needed]

## Evaluation

<!-- This section describes the evaluation protocols and provides the results. -->

### Testing Data, Factors & Metrics

#### Testing Data

<!-- This should link to a Dataset Card if possible. -->

[More Information Needed]

#### Factors

<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->

[More Information Needed]

#### Metrics

<!-- These are the evaluation metrics being used, ideally with a description of why. -->

[More Information Needed]

### Results

[More Information Needed]

#### Summary

## Model Examination [optional]

<!-- Relevant interpretability work for the model goes here -->

[More Information Needed]

## Environmental Impact

<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->

Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).

- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]

## Technical Specifications [optional]

### Model Architecture and Objective

[More Information Needed]

### Compute Infrastructure

[More Information Needed]

#### Hardware

[More Information Needed]

#### Software

[More Information Needed]

## Citation [optional]

<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->

**BibTeX:**

[More Information Needed]

**APA:**

[More Information Needed]

## Glossary [optional]

<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->

[More Information Needed]

## More Information [optional]

[More Information Needed]

## Model Card Authors [optional]

[More Information Needed]

## Model Card Contact

[More Information Needed]
MrBekson119/calculator_model_test
MrBekson119
2024-03-09T07:16:28Z
92
0
transformers
[ "transformers", "tensorboard", "safetensors", "encoder-decoder", "text2text-generation", "generated_from_trainer", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text2text-generation
2024-03-09T07:14:34Z
---
tags:
- generated_from_trainer
model-index:
- name: calculator_model_test
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# calculator_model_test

This model is a fine-tuned version of [](https://huggingface.co/) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7830

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.001
- train_batch_size: 512
- eval_batch_size: 512
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 40

### Training results

| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 3.427         | 1.0   | 6    | 2.7939          |
| 2.4109        | 2.0   | 12   | 2.0016          |
| 1.8545        | 3.0   | 18   | 1.7404          |
| 1.6739        | 4.0   | 24   | 1.6295          |
| 1.6169        | 5.0   | 30   | 1.5796          |
| 1.5292        | 6.0   | 36   | 1.5329          |
| 1.5561        | 7.0   | 42   | 1.5452          |
| 1.5104        | 8.0   | 48   | 1.4851          |
| 1.449         | 9.0   | 54   | 1.4911          |
| 1.4033        | 10.0  | 60   | 1.5385          |
| 1.596         | 11.0  | 66   | 1.4041          |
| 1.4016        | 12.0  | 72   | 1.4326          |
| 1.3652        | 13.0  | 78   | 1.4192          |
| 1.3674        | 14.0  | 84   | 1.3686          |
| 1.3247        | 15.0  | 90   | 1.3393          |
| 1.2677        | 16.0  | 96   | 1.2822          |
| 1.2501        | 17.0  | 102  | 1.4161          |
| 1.2504        | 18.0  | 108  | 1.2305          |
| 1.2036        | 19.0  | 114  | 1.2746          |
| 1.19          | 20.0  | 120  | 1.1341          |
| 1.1239        | 21.0  | 126  | 1.0830          |
| 1.0561        | 22.0  | 132  | 1.1285          |
| 1.1156        | 23.0  | 138  | 1.1040          |
| 1.0666        | 24.0  | 144  | 1.0544          |
| 1.0425        | 25.0  | 150  | 0.9969          |
| 1.0004        | 26.0  | 156  | 0.9607          |
| 0.9737        | 27.0  | 162  | 0.9564          |
| 0.9604        | 28.0  | 168  | 0.9418          |
| 0.9737        | 29.0  | 174  | 0.8994          |
| 0.9149        | 30.0  | 180  | 0.8790          |
| 0.9073        | 31.0  | 186  | 0.8659          |
| 0.896         | 32.0  | 192  | 0.8466          |
| 0.8752        | 33.0  | 198  | 0.8496          |
| 0.9161        | 34.0  | 204  | 0.8349          |
| 0.8831        | 35.0  | 210  | 0.8165          |
| 0.8352        | 36.0  | 216  | 0.8027          |
| 0.8305        | 37.0  | 222  | 0.7951          |
| 0.8242        | 38.0  | 228  | 0.7902          |
| 0.8416        | 39.0  | 234  | 0.7846          |
| 0.832         | 40.0  | 240  | 0.7830          |

### Framework versions

- Transformers 4.38.2
- Pytorch 2.1.0+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
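### Inference example (speculative)

No usage example ships with this card, so the following is a heavily hedged sketch of encoder-decoder inference with this checkpoint. The arithmetic input format (`"23+45"`) is a pure guess, since the card does not document how training examples were formatted:

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

# Assumes the repository ships both tokenizer and model artifacts.
tokenizer = AutoTokenizer.from_pretrained("MrBekson119/calculator_model_test")
model = AutoModelForSeq2SeqLM.from_pretrained("MrBekson119/calculator_model_test")

# "23+45" is an assumed input format; adjust to match the training data.
inputs = tokenizer("23+45", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=8)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```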
bond005/xlm-roberta-xl-hallucination-detector
bond005
2024-03-09T07:05:01Z
6
2
transformers
[ "transformers", "pytorch", "safetensors", "hierarchical-xlm-roberta-xl", "text-classification", "custom_code", "multilingual", "en", "arxiv:2303.08896", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2024-02-05T05:19:33Z
---
language:
- multilingual
- en
license: apache-2.0
pipeline_tag: text-classification
---

# The reference-based LLM hallucination detector

The LLM hallucination detector based on the self-adaptive hierarchical [XLM-RoBERTa-XL](https://huggingface.co/facebook/xlm-roberta-xl) was developed to participate in the [SemEval-2024 Task-6 - SHROOM, a Shared-task on Hallucinations and Related Observable Overgeneration Mistakes](https://helsinki-nlp.github.io/shroom) (model-agnostic track).

## Model description

This model was a component of my solution for the [SemEval-2024 Task-6 - SHROOM, a Shared-task on Hallucinations and Related Observable Overgeneration Mistakes](https://helsinki-nlp.github.io/shroom). The competition goal was to develop the best algorithm for detecting an LLM's hallucinations, i.e. grammatically sound output that contains incorrect semantic information (unsupported or inconsistent with the source input).

The competition organizers prepared two different setups:

- the model-aware track, where the developed detector can have access to the LLM that produced the output;
- the model-agnostic track, where the developed detector has no access to the verified LLM and uses only this LLM's source input and generated output.

This model was designed to detect hallucinations in the model-agnostic track. It was based on **four simple ideas**:

1. **The detector is a Transformer encoder** based on [XLM-RoBERTa-XL](https://huggingface.co/facebook/xlm-roberta-xl). Hallucination detection is a binary classification problem, and we don't need any decoder like [SelfCheckGPT](https://arxiv.org/abs/2303.08896) to solve it. Make encoder great again!
2. **Prompt engineering matters for encoders too** (not only for decoders). A specially designed text prompt is a good inductive bias for a text classifier.
3. **The text classifier needs a self-adaptive hierarchy of encoder hidden layers**. The classifier does not have to be built on top of the last hidden layer. Perhaps one of the earlier hidden layers would be more useful. We don't know this for sure, so we use a special gating network to automatically estimate the importance of the encoder's various hidden layers during training.
4. **A two-stage fine-tuning is all you need**. At the first stage we fine-tune our self-adaptive hierarchical encoder as a sentence embedder using contrastive learning. At the second stage we fine-tune this model as a usual classifier from the embedder's checkpoint. This approach was proposed in the paper "Contrastive fine-tuning to improve generalization in deep NER" (DOI: [10.28995/2075-7182-2022-21-70-80](https://www.dialog-21.ru/media/5751/bondarenkoi113.pdf)), but it works for other NLU tasks too.

## Intended uses & limitations

This model is primarily aimed at reference-based detection of hallucinations in an LLM without any additional information about the LLM's type and architecture (i.e. in model-agnostic mode). Reference-based detection means that the hallucination detector considers not only the human question and the answer generated by the verified LLM, but also the reference answer to the human question. Therefore, in a situation where the reference answer is not known, this hallucination detector is not applicable.

But in some cases (for example, when we analyze the LLM's responses on an annotated test set and want to separate hallucinations from usual errors such as undergeneration, errors related to part of speech, and so on), we know information about the references, and then the proposed detector will be extremely useful.

This model is capable of detecting LLM hallucinations that occur when solving the following NLG tasks: paraphrase generation, machine translation, and definition modeling.

## Evaluation

The final ranking of all model-agnostic solutions on the test data is available in [the ranking agnostic CSV file](https://drive.google.com/file/d/1J-iRpZ1LQSkTnTKJaSFfJHa5zxoHd__K/view?usp=sharing) on the [SHROOM web-page](https://helsinki-nlp.github.io/shroom). The accuracy score of my solution is 0.77, and it ranks 28th out of 49. The abovementioned model is a component of my solution, and the accuracy of this model as an independent algorithm is 0.7153. For comparison, the accuracy of the baseline system based on SelfCheckGPT is 0.6967.

## Usage

You need to install [pytorch-metric-learning](https://github.com/KevinMusgrave/pytorch-metric-learning) to use this model. After that, you can use this model directly with a pipeline for text classification:

```python
from typing import Dict

from transformers import pipeline
import torch


def sample_to_str(sample: Dict[str, str]) -> str:
    """
    It converts a datapoint to an input text for an encoder-based classifier (such as RoBERTa).
    :param sample: the datapoint
    :return: the input text for the classifier (i.e. the LLM hallucination detector).
    """
    possible_tasks = {
        'PG',  # paraphrase generation
        'MT',  # machine translation
        'DM',  # definition modeling
    }
    checked_llm_prediction = ' '.join(sample['hyp'].strip().split())
    llm_task = sample['task']
    if llm_task not in possible_tasks:
        raise ValueError(f'The task {llm_task} is not supported!')
    if llm_task == 'PG':
        context = ' '.join(sample['src'].strip().split())
        united_prompt = 'The verified system\'s task is a paraphrase generation.'
    else:
        context = ' '.join(sample['tgt'].strip().split())
        if llm_task == 'MT':
            united_prompt = 'The verified system\'s task is a machine translation.'
        else:
            united_prompt = 'The verified system\'s task is a definition modeling.'
    united_prompt += ' The sentence generated by the verified system: '
    united_prompt += checked_llm_prediction
    if united_prompt[-1].isalnum():
        united_prompt += '.'
    united_prompt += f' The generation context: {context}'
    if united_prompt[-1].isalnum():
        united_prompt += '.'
    return united_prompt


# The input data format is based on data for the model-agnostic track of SHROOM
# https://helsinki-nlp.github.io/shroom
# "src" is a verified LLM's input to start generation
# "hyp" is an output generated by this LLM
# "tgt" is a reference output from the point of view of human assessors
input_data = [
    {
        "hyp": "Resembling or characteristic of a weasel.",
        "ref": "tgt",
        "src": "The writer had just entered into his eighteenth year , when he met at the table of a certain Anglo - Germanist an individual , apparently somewhat under thirty , of middle stature , a thin and <define> weaselly </define> figure , a sallow complexion , a certain obliquity of vision , and a large pair of spectacles .",
        "tgt": "Resembling a weasel (in appearance).",
        "model": "",
        "task": "DM",
        "labels": ["Hallucination", "Not Hallucination", "Not Hallucination", "Not Hallucination", "Not Hallucination"],
        "label": "Not Hallucination",
        "p(Hallucination)": 0.2
    },
    {
        "hyp": "I thought you'd be surprised at me too.",
        "ref": "either",
        "src": "I thought so, too.",
        "tgt": "That was my general impression as well.",
        "model": "",
        "task": "PG",
        "labels": ["Hallucination", "Hallucination", "Hallucination", "Hallucination", "Hallucination"],
        "label": "Hallucination",
        "p(Hallucination)": 1.0
    },
    {
        "hyp": "You can go with me perfectly.",
        "ref": "either",
        "src": "Ты вполне можешь пойти со мной.",
        "tgt": "You may as well come with me.",
        "model": "",
        "task": "MT",
        "labels": ["Not Hallucination", "Hallucination", "Hallucination", "Not Hallucination", "Hallucination"],
        "label": "Hallucination",
        "p(Hallucination)": 0.6
    }
]

hallucination_detector = pipeline(
    task='text-classification',
    model='bond005/xlm-roberta-xl-hallucination-detector',
    framework='pt',
    trust_remote_code=True,
    device='cuda',
    torch_dtype=torch.float16
)

for sample in input_data:
    input_prompt = sample_to_str(sample)
    print('')
    print('==========')
    print(f'Task: {sample["task"]}')
    print('Question for detector:')
    print(input_prompt)
    print('==========')
    print('TRUE')
    print(f'label: {sample["label"]}')
    print(f'p(Hallucination): {round(sample["p(Hallucination)"], 3)}')
    prediction = hallucination_detector(input_prompt)[0]
    predicted_label = prediction['label']
    if predicted_label == 'Hallucination':
        hallucination_probability = prediction['score']
    else:
        hallucination_probability = 1.0 - prediction['score']
    print('PREDICTED')
    print(f'label: {predicted_label}')
    print(f'p(Hallucination): {round(hallucination_probability, 3)}')
```

```text
==========
Task: DM
Question for detector:
The verified system's task is a definition modeling. The sentence generated by the verified system: Resembling or characteristic of a weasel. The generation context: Resembling a weasel (in appearance).
==========
TRUE
label: Not Hallucination
p(Hallucination): 0.2
PREDICTED
label: Not Hallucination
p(Hallucination): 0.297

==========
Task: PG
Question for detector:
The verified system's task is a paraphrase generation. The sentence generated by the verified system: I thought you'd be surprised at me too. The generation context: I thought so, too.
==========
TRUE
label: Hallucination
p(Hallucination): 1.0
PREDICTED
label: Hallucination
p(Hallucination): 0.563

==========
Task: MT
Question for detector:
The verified system's task is a machine translation. The sentence generated by the verified system: You can go with me perfectly. The generation context: You may as well come with me.
==========
TRUE
label: Hallucination
p(Hallucination): 0.6
PREDICTED
label: Not Hallucination
p(Hallucination): 0.487
```

The Google Colaboratory version of [this script](https://colab.research.google.com/drive/1T5LOuYfLNI3bqz6W-Y6kEajk3SumxyqU?usp=sharing) is available too.

## Citation

If you want to cite this model you can use this:

```bibtex
@misc{bondarenko2024hallucination,
    title={The reference-based detector of LLM hallucinations by Ivan Bondarenko},
    author={Bondarenko, Ivan},
    publisher={Hugging Face},
    journal={Hugging Face Hub},
    howpublished={\url{https://huggingface.co/bond005/xlm-roberta-xl-hallucination-detector}},
    year={2024}
}
```
Kabster/BioMistral-Zephyr-Beta-SLERP
Kabster
2024-03-09T07:04:51Z
2,816
2
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "mergekit", "merge", "conversational", "base_model:BioMistral/BioMistral-7B", "base_model:merge:BioMistral/BioMistral-7B", "base_model:HuggingFaceH4/zephyr-7b-beta", "base_model:merge:HuggingFaceH4/zephyr-7b-beta", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-03-09T06:57:06Z
---
base_model:
- BioMistral/BioMistral-7B
- HuggingFaceH4/zephyr-7b-beta
tags:
- mergekit
- merge
license: apache-2.0
---

# BioMistral-Zephyr-Beta-SLERP

BioMistral-Zephyr-Beta-SLERP is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).

## Merge Details

### Merge Method

This model was merged using the SLERP merge method.

### 🤖💬 Models Merged

The following models were included in the merge:
* [BioMistral/BioMistral-7B](https://huggingface.co/BioMistral/BioMistral-7B)
* [HuggingFaceH4/zephyr-7b-beta](https://huggingface.co/HuggingFaceH4/zephyr-7b-beta)

### 🧩 Configuration

The following YAML configuration was used to produce this model:

```yaml
slices:
  - sources:
      - model: BioMistral/BioMistral-7B
        layer_range: [0, 32]
      - model: HuggingFaceH4/zephyr-7b-beta
        layer_range: [0, 32]
merge_method: slerp
base_model: BioMistral/BioMistral-7B
parameters:
  t:
    - filter: self_attn
      value: [0, 0.5, 0.3, 0.7, 1]
    - filter: mlp
      value: [1, 0.5, 0.7, 0.3, 0]
    - value: 0.5
dtype: bfloat16
```

### 💻 Usage

```python
!pip install -qU transformers accelerate

from transformers import AutoTokenizer
import transformers
import torch

model = "Kabster/BioMistral-Zephyr-Beta-SLERP"
messages = [{"role": "user", "content": "Can bisoprolol cause insomnia?"}]

tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
    "text-generation",
    model=model,
    torch_dtype=torch.float16,
    device_map="auto",
)

outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.2, top_k=100, top_p=0.95)
print(outputs[0]["generated_text"])
```
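### 📐 SLERP, schematically

As a reading aid for the configuration above: each `t` entry is an interpolation weight between the two parents (0 keeps the base model, 1 keeps the other), with per-layer-group value lists for the attention and MLP weights. The snippet below is a schematic illustration of spherical linear interpolation between two weight tensors; it is not mergekit's actual implementation:

```python
import torch

def slerp(t: float, a: torch.Tensor, b: torch.Tensor, eps: float = 1e-8) -> torch.Tensor:
    """Spherical linear interpolation between two weight tensors (schematic)."""
    a_n = a / (a.norm() + eps)
    b_n = b / (b.norm() + eps)
    # Angle between the two tensors, viewed as flat vectors.
    omega = torch.arccos(torch.clamp((a_n * b_n).sum(), -1.0, 1.0))
    so = torch.sin(omega)
    if so.abs() < eps:  # nearly parallel: fall back to linear interpolation
        return (1.0 - t) * a + t * b
    return (torch.sin((1.0 - t) * omega) / so) * a + (torch.sin(t * omega) / so) * b

# t=0 returns the first tensor, t=1 the second, t=0.5 a spherical midpoint.
x, y = torch.randn(16), torch.randn(16)
print(torch.allclose(slerp(0.0, x, y), x, atol=1e-5))  # True
```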
mirajbhandari/gemma-2b-it_yungri_final
mirajbhandari
2024-03-09T07:03:56Z
91
0
transformers
[ "transformers", "safetensors", "gemma", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-03-09T07:00:19Z
---
library_name: transformers
tags: []
---

# Model Card for Model ID

<!-- Provide a quick summary of what the model is/does. -->

## Model Details

### Model Description

<!-- Provide a longer summary of what this model is. -->

This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.

- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]

### Model Sources [optional]

<!-- Provide the basic links for the model. -->

- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]

## Uses

<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->

### Direct Use

<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->

[More Information Needed]

### Downstream Use [optional]

<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->

[More Information Needed]

### Out-of-Scope Use

<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->

[More Information Needed]

## Bias, Risks, and Limitations

<!-- This section is meant to convey both technical and sociotechnical limitations. -->

[More Information Needed]

### Recommendations

<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->

Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.

## How to Get Started with the Model

Use the code below to get started with the model.

[More Information Needed]

## Training Details

### Training Data

<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->

[More Information Needed]

### Training Procedure

<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->

#### Preprocessing [optional]

[More Information Needed]

#### Training Hyperparameters

- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->

#### Speeds, Sizes, Times [optional]

<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->

[More Information Needed]

## Evaluation

<!-- This section describes the evaluation protocols and provides the results. -->

### Testing Data, Factors & Metrics

#### Testing Data

<!-- This should link to a Dataset Card if possible. -->

[More Information Needed]

#### Factors

<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->

[More Information Needed]

#### Metrics

<!-- These are the evaluation metrics being used, ideally with a description of why. -->

[More Information Needed]

### Results

[More Information Needed]

#### Summary

## Model Examination [optional]

<!-- Relevant interpretability work for the model goes here -->

[More Information Needed]

## Environmental Impact

<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->

Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).

- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]

## Technical Specifications [optional]

### Model Architecture and Objective

[More Information Needed]

### Compute Infrastructure

[More Information Needed]

#### Hardware

[More Information Needed]

#### Software

[More Information Needed]

## Citation [optional]

<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->

**BibTeX:**

[More Information Needed]

**APA:**

[More Information Needed]

## Glossary [optional]

<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->

[More Information Needed]

## More Information [optional]

[More Information Needed]

## Model Card Authors [optional]

[More Information Needed]

## Model Card Contact

[More Information Needed]
zyh990312/cir_s_2
zyh990312
2024-03-09T07:02:12Z
0
0
peft
[ "peft", "tensorboard", "safetensors", "gemma", "trl", "sft", "generated_from_trainer", "dataset:generator", "base_model:google/gemma-2b", "base_model:adapter:google/gemma-2b", "license:other", "region:us" ]
null
2024-03-02T01:36:11Z
---
license: other
library_name: peft
tags:
- trl
- sft
- generated_from_trainer
datasets:
- generator
base_model: google/gemma-2b
model-index:
- name: cir_s_2
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# cir_s_2

This model is a fine-tuned version of [google/gemma-2b](https://huggingface.co/google/gemma-2b) on the generator dataset.

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 2
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- lr_scheduler_warmup_ratio: 0.03
- num_epochs: 3

### Training results

### Framework versions

- PEFT 0.7.2.dev0
- Transformers 4.38.2
- Pytorch 2.1.2+cu121
- Datasets 2.16.1
- Tokenizers 0.15.2
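### Loading the adapter (illustrative)

Since this is a PEFT repository (adapter weights rather than a full model), a typical loading pattern attaches the adapter to the `google/gemma-2b` base. This is a hedged sketch: it assumes the repo contains standard PEFT adapter files, which the card does not state explicitly, and the prompt is arbitrary:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base = AutoModelForCausalLM.from_pretrained("google/gemma-2b", torch_dtype=torch.bfloat16)
tokenizer = AutoTokenizer.from_pretrained("google/gemma-2b")

# Attach the fine-tuned adapter on top of the frozen base weights.
model = PeftModel.from_pretrained(base, "zyh990312/cir_s_2")

inputs = tokenizer("Hello, world", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```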
sd-dreambooth-library/toy-dog
sd-dreambooth-library
2024-03-09T06:52:45Z
32
0
diffusers
[ "diffusers", "safetensors", "text-to-image", "license:creativeml-openrail-m", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
2024-03-09T06:51:40Z
---
license: creativeml-openrail-m
tags:
- text-to-image
---

### Toy Dog on Stable Diffusion via Dreambooth

#### model by niladridutta

This is the Stable Diffusion model fine-tuned on the Toy Dog concept taught to Stable Diffusion with Dreambooth.
It can be used by modifying the `instance_prompt`: **<dog-toy> toy**

You can also train your own concepts and upload them to the library by using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_dreambooth_training.ipynb).
And you can run your new concept via `diffusers`: [Colab Notebook for Inference](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_dreambooth_inference.ipynb), [Spaces with the Public Concepts loaded](https://huggingface.co/spaces/sd-dreambooth-library/stable-diffusion-dreambooth-concepts)

Here are the images used for training this concept:

![image 0](https://huggingface.co/sd-dreambooth-library/toy-dog/resolve/main/concept_images/dog4.jpeg)
![image 1](https://huggingface.co/sd-dreambooth-library/toy-dog/resolve/main/concept_images/dpg5.jpeg)
![image 2](https://huggingface.co/sd-dreambooth-library/toy-dog/resolve/main/concept_images/dog1.jpeg)
![image 3](https://huggingface.co/sd-dreambooth-library/toy-dog/resolve/main/concept_images/dog2.jpeg)
![image 4](https://huggingface.co/sd-dreambooth-library/toy-dog/resolve/main/concept_images/dog3.jpeg)
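### Quick inference sketch

For readers who want a quick local test instead of the Colab notebook, here is a minimal `diffusers` sketch. The prompt wording beyond the documented `<dog-toy> toy` instance token is illustrative:

```python
import torch
from diffusers import StableDiffusionPipeline

# Load this concept repo as a full Stable Diffusion pipeline.
pipe = StableDiffusionPipeline.from_pretrained(
    "sd-dreambooth-library/toy-dog", torch_dtype=torch.float16
).to("cuda")

# "<dog-toy> toy" is the documented instance prompt; the rest is illustrative.
image = pipe("a photo of a <dog-toy> toy on a beach").images[0]
image.save("toy_dog.png")
```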