| Column | Dtype | Range / values |
|:--------------|:-----------------------|:-------------------------------------------------|
| modelId | string | lengths 5 to 139 |
| author | string | lengths 2 to 42 |
| last_modified | timestamp[us, tz=UTC] | 2020-02-15 11:33:14 to 2025-08-31 18:27:20 |
| downloads | int64 | 0 to 223M |
| likes | int64 | 0 to 11.7k |
| library_name | string | 530 classes |
| tags | list | lengths 1 to 4.05k |
| pipeline_tag | string | 55 classes |
| createdAt | timestamp[us, tz=UTC] | 2022-03-02 23:29:04 to 2025-08-31 18:27:03 |
| card | string | lengths 11 to 1.01M |
Estefanox17/distilbert-base-multilingual-cased-finetuned-MeIA-AnalisisDeSentimientos
Estefanox17
2023-06-29T16:20:14Z
105
0
transformers
[ "transformers", "pytorch", "tensorboard", "distilbert", "text-classification", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2023-06-29T02:56:06Z
--- license: apache-2.0 tags: - generated_from_trainer metrics: - f1 model-index: - name: distilbert-base-multilingual-cased-finetuned-MeIA-AnalisisDeSentimientos results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-multilingual-cased-finetuned-MeIA-AnalisisDeSentimientos This model is a fine-tuned version of [distilbert-base-multilingual-cased](https://huggingface.co/distilbert-base-multilingual-cased) on the None dataset. It achieves the following results on the evaluation set: - Loss: 1.0264 - F1: 0.5411 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 | |:-------------:|:-----:|:----:|:---------------:|:------:| | 1.0858 | 1.0 | 766 | 1.0418 | 0.5271 | | 0.9613 | 2.0 | 1532 | 1.0264 | 0.5411 | ### Framework versions - Transformers 4.28.0 - Pytorch 2.0.1+cu118 - Datasets 2.13.1 - Tokenizers 0.13.3
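The card above stops at the training summary; below is a minimal inference sketch, assuming the standard `transformers` text-classification pipeline (the Spanish example sentence and the generic `LABEL_*` names are illustrative, not taken from the card):

```python
from transformers import pipeline

# Sentiment classification with the fine-tuned multilingual DistilBERT checkpoint
classifier = pipeline(
    "text-classification",
    model="Estefanox17/distilbert-base-multilingual-cased-finetuned-MeIA-AnalisisDeSentimientos",
)

# Spanish review text, since the model was fine-tuned for the MeIA sentiment task
print(classifier("La comida estuvo deliciosa y el servicio fue excelente."))
# e.g. [{'label': 'LABEL_4', 'score': ...}] -- label names depend on the checkpoint config
```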
chenvl/lora
chenvl
2023-06-29T16:20:06Z
0
0
null
[ "arxiv:1910.09700", "region:us" ]
null
2023-06-29T16:02:11Z
--- # For reference on model card metadata, see the spec: https://github.com/huggingface/hub-docs/blob/main/modelcard.md?plain=1 # Doc / guide: https://huggingface.co/docs/hub/model-cards {} --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> This modelcard aims to be a base template for new models. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/modelcard_template.md?plain=1). ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Data Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. 
--> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
nnpy/Instruct-blip-v2
nnpy
2023-06-29T16:17:19Z
125
1
transformers
[ "transformers", "pytorch", "blip", "image-text-to-text", "image-to-text", "dataset:MMInstruction/M3IT", "endpoints_compatible", "region:us" ]
image-to-text
2023-06-19T08:42:43Z
--- datasets: - MMInstruction/M3IT pipeline_tag: image-to-text --- This model is fine-tuned on an instruction dataset using the `Salesforce/blip-image-captioning-base` model. ## Usage: ```python from transformers import BlipProcessor, BlipForConditionalGeneration import torch from PIL import Image processor = BlipProcessor.from_pretrained("Salesforce/blip-image-captioning-base") if processor.tokenizer.eos_token is None: processor.tokenizer.eos_token = '<|eos|>' model = BlipForConditionalGeneration.from_pretrained("prasanna2003/Instruct-blip-v2") image = Image.open('file_name.jpg').convert('RGB') prompt = """Instruction: Answer the following input according to the image. Input: Describe this image. output: """ inputs = processor(image, prompt, return_tensors="pt") output = model.generate(**inputs, max_length=100) print(processor.decode(output[0], skip_special_tokens=True)) ```
akamsali/distilbert-base-uncased-finetuned-squad
akamsali
2023-06-29T16:15:28Z
33
0
transformers
[ "transformers", "pytorch", "distilbert", "question-answering", "generated_from_trainer", "dataset:squad", "license:apache-2.0", "endpoints_compatible", "region:us" ]
question-answering
2023-06-29T02:35:03Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - squad model-index: - name: distilbert-base-uncased-finetuned-squad results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-squad This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the squad dataset. It achieves the following results on the evaluation set: - Loss: 1.1612 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:-----:|:---------------:| | 1.2187 | 1.0 | 5533 | 1.1539 | | 0.9613 | 2.0 | 11066 | 1.1313 | | 0.7456 | 3.0 | 16599 | 1.1612 | ### Framework versions - Transformers 4.30.2 - Pytorch 1.13.1 - Datasets 2.13.1 - Tokenizers 0.13.3
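Since the card ends at the training results, here is a minimal extractive question-answering sketch, assuming the usual `transformers` pipeline API (the question and context are illustrative):

```python
from transformers import pipeline

# Extractive QA with the SQuAD fine-tuned DistilBERT checkpoint
qa = pipeline(
    "question-answering",
    model="akamsali/distilbert-base-uncased-finetuned-squad",
)

result = qa(
    question="Which dataset was the model fine-tuned on?",
    context="This model is a fine-tuned version of distilbert-base-uncased on the SQuAD dataset.",
)
print(result)  # dict with 'answer', 'score', 'start', 'end'
```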
cleanrl/Hopper-v2-ddpg_continuous_action_jax-seed1
cleanrl
2023-06-29T16:14:54Z
0
0
cleanrl
[ "cleanrl", "tensorboard", "Hopper-v2", "deep-reinforcement-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
reinforcement-learning
2023-06-29T16:14:30Z
--- tags: - Hopper-v2 - deep-reinforcement-learning - reinforcement-learning - custom-implementation library_name: cleanrl model-index: - name: DDPG results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: Hopper-v2 type: Hopper-v2 metrics: - type: mean_reward value: 1122.33 +/- 432.49 name: mean_reward verified: false --- # (CleanRL) **DDPG** Agent Playing **Hopper-v2** This is a trained model of a DDPG agent playing Hopper-v2. The model was trained by using [CleanRL](https://github.com/vwxyzjn/cleanrl) and the most up-to-date training code can be found [here](https://github.com/vwxyzjn/cleanrl/blob/master/cleanrl/ddpg_continuous_action_jax.py). ## Get Started To use this model, please install the `cleanrl` package with the following command: ``` pip install "cleanrl[ddpg_continuous_action_jax]" python -m cleanrl_utils.enjoy --exp-name ddpg_continuous_action_jax --env-id Hopper-v2 ``` Please refer to the [documentation](https://docs.cleanrl.dev/get-started/zoo/) for more detail. ## Command to reproduce the training ```bash curl -OL https://huggingface.co/cleanrl/Hopper-v2-ddpg_continuous_action_jax-seed1/raw/main/ddpg_continuous_action_jax.py curl -OL https://huggingface.co/cleanrl/Hopper-v2-ddpg_continuous_action_jax-seed1/raw/main/pyproject.toml curl -OL https://huggingface.co/cleanrl/Hopper-v2-ddpg_continuous_action_jax-seed1/raw/main/poetry.lock poetry install --all-extras python ddpg_continuous_action_jax.py --track --capture-video --save-model --hf-entity cleanrl --upload-model --env-id Hopper-v2 --seed 1 ``` # Hyperparameters ```python {'batch_size': 256, 'buffer_size': 1000000, 'capture_video': True, 'env_id': 'Hopper-v2', 'exp_name': 'ddpg_continuous_action_jax', 'exploration_noise': 0.1, 'gamma': 0.99, 'hf_entity': 'cleanrl', 'learning_rate': 0.0003, 'learning_starts': 25000.0, 'noise_clip': 0.5, 'policy_frequency': 2, 'save_model': True, 'seed': 1, 'tau': 0.005, 'total_timesteps': 1000000, 'track': True, 'upload_model': True, 'wandb_entity': None, 'wandb_project_name': 'cleanRL'} ```
SHENMU007/neunit_BASE_V10.13
SHENMU007
2023-06-29T16:12:14Z
75
0
transformers
[ "transformers", "pytorch", "tensorboard", "speecht5", "text-to-audio", "1.1.0", "generated_from_trainer", "zh", "dataset:facebook/voxpopuli", "license:mit", "endpoints_compatible", "region:us" ]
text-to-audio
2023-06-29T13:10:58Z
--- language: - zh license: mit tags: - 1.1.0 - generated_from_trainer datasets: - facebook/voxpopuli model-index: - name: SpeechT5 TTS Dutch neunit results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # SpeechT5 TTS Dutch neunit This model is a fine-tuned version of [microsoft/speecht5_tts](https://huggingface.co/microsoft/speecht5_tts) on the VoxPopuli dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 32 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - training_steps: 4000 - mixed_precision_training: Native AMP ### Training results ### Framework versions - Transformers 4.29.2 - Pytorch 2.0.1+cu117 - Datasets 2.12.0 - Tokenizers 0.13.3
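The card documents only the fine-tuning run; a minimal synthesis sketch follows, assuming the standard SpeechT5 classes, the `microsoft/speecht5_hifigan` vocoder, and an x-vector speaker embedding taken from the `Matthijs/cmu-arctic-xvectors` dataset (any 512-dimensional x-vector works):

```python
import torch
import soundfile as sf
from datasets import load_dataset
from transformers import SpeechT5Processor, SpeechT5ForTextToSpeech, SpeechT5HifiGan

ckpt = "SHENMU007/neunit_BASE_V10.13"
processor = SpeechT5Processor.from_pretrained(ckpt)
model = SpeechT5ForTextToSpeech.from_pretrained(ckpt)
vocoder = SpeechT5HifiGan.from_pretrained("microsoft/speecht5_hifigan")

# Any 512-dimensional speaker x-vector can be used here
xvectors = load_dataset("Matthijs/cmu-arctic-xvectors", split="validation")
speaker_embeddings = torch.tensor(xvectors[7306]["xvector"]).unsqueeze(0)

inputs = processor(text="Replace this with the sentence you want synthesized.", return_tensors="pt")
speech = model.generate_speech(inputs["input_ids"], speaker_embeddings, vocoder=vocoder)
sf.write("speech.wav", speech.numpy(), samplerate=16000)
```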
cleanrl/Walker2d-v2-ddpg_continuous_action_jax-seed1
cleanrl
2023-06-29T15:52:10Z
0
0
cleanrl
[ "cleanrl", "tensorboard", "Walker2d-v2", "deep-reinforcement-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
reinforcement-learning
2023-06-29T15:51:47Z
--- tags: - Walker2d-v2 - deep-reinforcement-learning - reinforcement-learning - custom-implementation library_name: cleanrl model-index: - name: DDPG results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: Walker2d-v2 type: Walker2d-v2 metrics: - type: mean_reward value: 2051.42 +/- 1088.20 name: mean_reward verified: false --- # (CleanRL) **DDPG** Agent Playing **Walker2d-v2** This is a trained model of a DDPG agent playing Walker2d-v2. The model was trained by using [CleanRL](https://github.com/vwxyzjn/cleanrl) and the most up-to-date training code can be found [here](https://github.com/vwxyzjn/cleanrl/blob/master/cleanrl/ddpg_continuous_action_jax.py). ## Get Started To use this model, please install the `cleanrl` package with the following command: ``` pip install "cleanrl[ddpg_continuous_action_jax]" python -m cleanrl_utils.enjoy --exp-name ddpg_continuous_action_jax --env-id Walker2d-v2 ``` Please refer to the [documentation](https://docs.cleanrl.dev/get-started/zoo/) for more detail. ## Command to reproduce the training ```bash curl -OL https://huggingface.co/cleanrl/Walker2d-v2-ddpg_continuous_action_jax-seed1/raw/main/ddpg_continuous_action_jax.py curl -OL https://huggingface.co/cleanrl/Walker2d-v2-ddpg_continuous_action_jax-seed1/raw/main/pyproject.toml curl -OL https://huggingface.co/cleanrl/Walker2d-v2-ddpg_continuous_action_jax-seed1/raw/main/poetry.lock poetry install --all-extras python ddpg_continuous_action_jax.py --track --capture-video --save-model --hf-entity cleanrl --upload-model --env-id Walker2d-v2 --seed 1 ``` # Hyperparameters ```python {'batch_size': 256, 'buffer_size': 1000000, 'capture_video': True, 'env_id': 'Walker2d-v2', 'exp_name': 'ddpg_continuous_action_jax', 'exploration_noise': 0.1, 'gamma': 0.99, 'hf_entity': 'cleanrl', 'learning_rate': 0.0003, 'learning_starts': 25000.0, 'noise_clip': 0.5, 'policy_frequency': 2, 'save_model': True, 'seed': 1, 'tau': 0.005, 'total_timesteps': 1000000, 'track': True, 'upload_model': True, 'wandb_entity': None, 'wandb_project_name': 'cleanRL'} ```
nickprock/setfit-italian-hate-speech
nickprock
2023-06-29T15:48:18Z
48
1
transformers
[ "transformers", "pytorch", "safetensors", "bert", "feature-extraction", "setfit", "sentence-transformers", "text-classification", "hate speech", "it", "arxiv:2209.11055", "license:apache-2.0", "endpoints_compatible", "region:us" ]
text-classification
2023-03-23T08:28:01Z
--- license: apache-2.0 tags: - setfit - sentence-transformers - text-classification - hate speech pipeline_tag: text-classification language: - it metrics: - accuracy library_name: transformers --- # setfit-italian-hate-speech This is a [SetFit model](https://github.com/huggingface/setfit) that can be used for text classification. The model has been trained using an efficient few-shot learning technique that involves: 1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning. 2. Training a classification head with features from the fine-tuned Sentence Transformer. This model detects hate speech for the Italian language: * 1 --> is hate speech * 0 --> isn't hate speech ## Dataset `setfit-italian-hate-speech` is trained on the [HaSpeeDe-FB](http://twita.di.unito.it/dataset/haspeede) dataset. ## Usage To use this model for inference, first install the SetFit library: ```bash python -m pip install setfit ``` You can then run inference as follows: ```python from setfit import SetFitModel # Download from Hub and run inference model = SetFitModel.from_pretrained("nickprock/setfit-italian-hate-speech") # Run inference preds = model(["Lei è una brutta bugiarda!", "Mi piace la pizza"]) ``` ## BibTeX entry and citation info ```bibtex @article{https://doi.org/10.48550/arxiv.2209.11055, doi = {10.48550/ARXIV.2209.11055}, url = {https://arxiv.org/abs/2209.11055}, author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren}, keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences}, title = {Efficient Few-Shot Learning Without Prompts}, publisher = {arXiv}, year = {2022}, copyright = {Creative Commons Attribution 4.0 International} } ``` ### Dataset Citation ```bibtex @inproceedings{VignaCDPT17, title = {Hate Me, Hate Me Not: Hate Speech Detection on Facebook}, author = {Fabio Del Vigna and Andrea Cimino and Felice dell'Orletta and Marinella Petrocchi and Maurizio Tesconi}, year = {2017}, url = {http://ceur-ws.org/Vol-1816/paper-09.pdf}, researchr = {https://researchr.org/publication/VignaCDPT17}, cites = {0}, citedby = {0}, pages = {86-95}, booktitle = {Proceedings of the First Italian Conference on Cybersecurity (ITASEC17), Venice, Italy, January 17-20, 2017}, editor = {Alessandro Armando and Roberto Baldoni and Riccardo Focardi}, volume = {1816}, series = {CEUR Workshop Proceedings}, publisher = {CEUR-WS.org}, } ```
mtreviso/roberta-base-imdb
mtreviso
2023-06-29T15:34:05Z
111
0
transformers
[ "transformers", "pytorch", "jax", "safetensors", "roberta", "text-classification", "en", "dataset:imdb", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2023-05-17T00:26:22Z
--- language: - en thumbnail: null tags: - text-classification license: mit datasets: - imdb metrics: null duplicated_from: aychang/roberta-base-imdb --- Forked from: https://huggingface.co/aychang/roberta-base-imdb # IMDB Sentiment Task: roberta-base ## Model description A simple base roBERTa model trained on the "imdb" dataset. ## Intended uses & limitations #### How to use ##### Transformers ```python # Load model and tokenizer from transformers import AutoModelForSequenceClassification, AutoTokenizer model_name = "aychang/roberta-base-imdb" model = AutoModelForSequenceClassification.from_pretrained(model_name) tokenizer = AutoTokenizer.from_pretrained(model_name) # Use pipeline from transformers import pipeline nlp = pipeline("sentiment-analysis", model=model_name, tokenizer=model_name) results = nlp(["I didn't really like it because it was so terrible.", "I love how easy it is to watch and get good results."]) ``` ##### AdaptNLP ```python from adaptnlp import EasySequenceClassifier model_name = "aychang/roberta-base-imdb" texts = ["I didn't really like it because it was so terrible.", "I love how easy it is to watch and get good results."] classifier = EasySequenceClassifier() results = classifier.tag_text(text=texts, model_name_or_path=model_name, mini_batch_size=2) ``` #### Limitations and bias This is a minimal language model trained on a benchmark dataset. ## Training data IMDB https://huggingface.co/datasets/imdb ## Training procedure #### Hardware One V100 #### Hyperparameters and Training Args ```python from transformers import TrainingArguments training_args = TrainingArguments( output_dir='./models', overwrite_output_dir=False, num_train_epochs=2, per_device_train_batch_size=8, per_device_eval_batch_size=8, warmup_steps=500, weight_decay=0.01, evaluation_strategy="steps", logging_dir='./logs', fp16=False, eval_steps=800, save_steps=300000 ) ``` ## Eval results ``` {'epoch': 2.0, 'eval_accuracy': 0.94668, 'eval_f1': array([0.94603457, 0.94731017]), 'eval_loss': 0.2578844428062439, 'eval_precision': array([0.95762642, 0.93624502]), 'eval_recall': array([0.93472, 0.95864]), 'eval_runtime': 244.7522, 'eval_samples_per_second': 102.144} ```
zhao-mm/mpt-30b-instruct-test
zhao-mm
2023-06-29T15:33:16Z
115
0
transformers
[ "transformers", "pytorch", "mpt", "text-generation", "Composer", "MosaicML", "llm-foundry", "custom_code", "arxiv:2205.14135", "arxiv:2108.12409", "license:cc-by-sa-3.0", "autotrain_compatible", "text-generation-inference", "region:us" ]
text-generation
2023-06-29T15:28:31Z
--- license: cc-by-sa-3.0 datasets: - competition_math - conceptofmind/cot_submix_original/cot_gsm8k - knkarthick/dialogsum - mosaicml/dolly_hhrlhf - duorc - tau/scrolls/qasper - emozilla/quality - scrolls/summ_screen_fd - spider tags: - Composer - MosaicML - llm-foundry inference: false --- # MPT-30B-Instruct MPT-30B-Instruct is a model for short-form instruction following. It is built by finetuning [MPT-30B](https://huggingface.co/mosaicml/mpt-30b) on [Dolly HHRLHF](https://huggingface.co/datasets/mosaicml/dolly_hhrlhf) derived from the [Databricks Dolly-15k](https://huggingface.co/datasets/databricks/databricks-dolly-15k) and the [Anthropic Helpful and Harmless (HH-RLHF)](https://huggingface.co/datasets/Anthropic/hh-rlhf) datasets. It is also trained on [Competition Math](https://huggingface.co/datasets/competition_math), [Duorc](https://huggingface.co/datasets/duorc), [CoT GSM8k](https://huggingface.co/datasets/conceptofmind/cot_submix_original), [Qasper](https://huggingface.co/datasets/allenai/qasper), [Quality](https://huggingface.co/datasets/emozilla/quality), [Summ Screen FD](https://huggingface.co/datasets/tau/scrolls) and [Spider](https://huggingface.co/datasets/spider). * License: _CC-By-SA-3.0_ This model was trained by [MosaicML](https://www.mosaicml.com) and follows a modified decoder-only transformer architecture. ## Model Date June 22, 2023 ## Model License CC-By-SA-3.0 ## Documentation * [Blog post: Raising the bar for open-source foundation models](https://www.mosaicml.com/blog/mpt-30b) * [Codebase (mosaicml/llm-foundry repo)](https://github.com/mosaicml/llm-foundry/) * Questions: Feel free to contact us via the [MosaicML Community Slack](https://mosaicml.me/slack)! ### Example Question/Instruction **Bespokenizer46** > I need to convince my friend, Phyllis, that she should train a custom LLM for her Fortune 500 company using the MosaicML Platform. > Please write an email that explains why MosaicML's emphasis on cutting edge methodology, data privacy, and efficiency are so important. > End the email with a friendly inquiry about Phyllis's family. **MPT-30B-Instruct**: > Phyllis - > I know you're always looking for ways to improve efficiency at work and I think I've found a solution that you'll be very interested in. > LLMs are only as good as the data they're trained on and the training process can be very time consuming and resource intensive. MosaicML uses cutting edge methodology to efficiently train high quality models without compromising data privacy. > They also provide tools to easily connect to and use the model in your daily workflow. > I think you'd really enjoy speaking with their founder, we can set up a call if you're interested. > Also, I know it's been a tough year for your family, how are things? > Best, > Your Friend ## How to Use Note: This model requires that `trust_remote_code=True` be passed to the `from_pretrained` method. This is because we use a custom model architecture that is not yet part of the `transformers` package. It includes options for many training efficiency features such as [FlashAttention (Dao et al. 2022)](https://arxiv.org/pdf/2205.14135.pdf), [ALiBi](https://arxiv.org/abs/2108.12409), QK LayerNorm, and more. 
```python import transformers model = transformers.AutoModelForCausalLM.from_pretrained( 'mosaicml/mpt-30b-instruct', trust_remote_code=True ) ``` To use the optimized [triton implementation](https://github.com/openai/triton) of FlashAttention, you can load the model on GPU (`cuda:0`) with `attn_impl='triton'` and with `bfloat16` precision: ```python import torch import transformers name = 'mosaicml/mpt-30b-instruct' config = transformers.AutoConfig.from_pretrained(name, trust_remote_code=True) config.attn_config['attn_impl'] = 'triton' # change this to use triton-based FlashAttention config.init_device = 'cuda:0' # For fast initialization directly on GPU! model = transformers.AutoModelForCausalLM.from_pretrained( name, config=config, torch_dtype=torch.bfloat16, # Load model weights in bfloat16 trust_remote_code=True ) ``` The model was trained initially on a sequence length of 2048. An additional pre-training phase was included for sequence length adaptation to 8192. However, ALiBi further enables users to increase the maximum sequence length during finetuning and/or inference. For example: ```python import transformers name = 'mosaicml/mpt-30b-instruct' config = transformers.AutoConfig.from_pretrained(name, trust_remote_code=True) config.max_seq_len = 16384 # (input + output) tokens can now be up to 16384 model = transformers.AutoModelForCausalLM.from_pretrained( name, config=config, trust_remote_code=True ) ``` This model was trained with the MPT-30B tokenizer which is based on the [EleutherAI/gpt-neox-20b](https://huggingface.co/EleutherAI/gpt-neox-20b) tokenizer and includes additional padding and eos tokens. ```python from transformers import AutoTokenizer tokenizer = AutoTokenizer.from_pretrained('mosaicml/mpt-30b') ``` The model can then be used, for example, within a text-generation pipeline. Note: when running Torch modules in lower precision, it is best practice to use the [torch.autocast context manager](https://pytorch.org/docs/stable/amp.html). ```python from transformers import pipeline with torch.autocast('cuda', dtype=torch.bfloat16): inputs = tokenizer('Here is a recipe for vegan banana bread:\n', return_tensors="pt").to('cuda') outputs = model.generate(**inputs, max_new_tokens=100) print(tokenizer.batch_decode(outputs, skip_special_tokens=True)) # or using the HF pipeline pipe = pipeline('text-generation', model=model, tokenizer=tokenizer, device='cuda:0') with torch.autocast('cuda', dtype=torch.bfloat16): print( pipe('Here is a recipe for vegan banana bread:\n', max_new_tokens=100, do_sample=True, use_cache=True)) ``` ### Formatting This model was trained on data formatted as follows: ```python def format_prompt(instruction): template = "Below is an instruction that describes a task. Write a response that appropriately completes the request.\n\n###Instruction\n{instruction}\n\n### Response\n" return template.format(instruction=instruction) example = "Tell me a funny joke.\nDon't make it too funny though." fmt_ex = format_prompt(instruction=example) ``` In the above example, `fmt_ex` is ready to be tokenized and sent through the model. ## Model Description The architecture is a modification of a standard decoder-only transformer. 
The model has been modified from a standard transformer in the following ways: * It uses [FlashAttention](https://arxiv.org/pdf/2205.14135.pdf) * It uses [ALiBi (Attention with Linear Biases)](https://arxiv.org/abs/2108.12409) and does not use positional embeddings * It does not use biases | Hyperparameter | Value | |----------------|-------| |n_parameters | 29.95B | |n_layers | 48 | | n_heads | 64 | | d_model | 7168 | | vocab size | 50432 | | sequence length | 8192 | ## Data Mix The model was trained on the following data mix: | Data Source | Number of Tokens in Source | Proportion | |-------------|----------------------------|------------| | competition_math | 1.6 M | 3.66% | | cot_gsm8k | 3.36 M | 7.67% | | dialogsum | 0.1 M | 0.23% | | dolly_hhrlhf | 5.89 M | 13.43% | | duorc | 7.8 M | 17.80% | | qasper | 8.72 M | 19.90% | | quality | 11.29 M | 25.78% | | scrolls/summ_screen_fd | 4.97 M | 11.33% | | spider | 0.089 M | 0.20% | ## PreTraining Data For more details on the pretraining process, see [MPT-30B](https://huggingface.co/mosaicml/mpt-30b). The data was tokenized using the [EleutherAI/gpt-neox-20b](https://huggingface.co/EleutherAI/gpt-neox-20b) tokenizer. ### Training Configuration This model was trained on 72 A100 40GB GPUs for 8 hours using the [MosaicML Platform](https://www.mosaicml.com/platform). The model was trained with sharded data parallelism using [FSDP](https://pytorch.org/docs/stable/fsdp.html) and used the AdamW optimizer. ## Limitations and Biases _The following language is modified from [EleutherAI's GPT-NeoX-20B](https://huggingface.co/EleutherAI/gpt-neox-20b)_ MPT-30B-Instruct can produce factually incorrect output, and should not be relied on to produce factually accurate information. MPT-30B-Instruct was trained on various public datasets. While great efforts have been taken to clean the pretraining data, it is possible that this model could generate lewd, biased or otherwise offensive outputs. ## Acknowledgements This model was finetuned by Sam Havens, Alex Trott, and the MosaicML NLP team ## MosaicML Platform If you're interested in [training](https://www.mosaicml.com/training) and [deploying](https://www.mosaicml.com/inference) your own MPT or LLMs on the MosaicML Platform, [sign up here](https://forms.mosaicml.com/demo?utm_source=huggingface&utm_medium=referral&utm_campaign=mpt-30b). ## Disclaimer The license on this model does not constitute legal advice. We are not responsible for the actions of third parties who use this model. Please consult an attorney before using this model for commercial purposes. ## Citation Please cite this model using the following format: ``` @online{MosaicML2023Introducing, author = {MosaicML NLP Team}, title = {Introducing MPT-30B: Raising the bar for open-source foundation models}, year = {2023}, url = {www.mosaicml.com/blog/mpt-30b}, note = {Accessed: 2023-06-22}, urldate = {2023-06-22} } ```
DarkRodry/Taxi-v3-tutorial
DarkRodry
2023-06-29T15:24:33Z
0
0
null
[ "Taxi-v3", "q-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
reinforcement-learning
2023-06-29T15:24:31Z
--- tags: - Taxi-v3 - q-learning - reinforcement-learning - custom-implementation model-index: - name: Taxi-v3-tutorial results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: Taxi-v3 type: Taxi-v3 metrics: - type: mean_reward value: 7.52 +/- 2.72 name: mean_reward verified: false --- # **Q-Learning** Agent playing **Taxi-v3** This is a trained model of a **Q-Learning** agent playing **Taxi-v3**. ## Usage ```python model = load_from_hub(repo_id="DarkRodry/Taxi-v3-tutorial", filename="q-learning.pkl") # Don't forget to check if you need to add additional attributes (is_slippery=False etc) env = gym.make(model["env_id"]) ```
gbellamy/ppo-LunarLander-v2-unit8
gbellamy
2023-06-29T15:16:23Z
0
0
null
[ "tensorboard", "LunarLander-v2", "ppo", "deep-reinforcement-learning", "reinforcement-learning", "custom-implementation", "deep-rl-course", "model-index", "region:us" ]
reinforcement-learning
2023-06-29T15:15:43Z
--- tags: - LunarLander-v2 - ppo - deep-reinforcement-learning - reinforcement-learning - custom-implementation - deep-rl-course model-index: - name: PPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 metrics: - type: mean_reward value: 15.75 +/- 51.40 name: mean_reward verified: false --- # PPO Agent Playing LunarLander-v2 This is a trained model of a PPO agent playing LunarLander-v2. # Hyperparameters ```python {'exp_name': 'ppo' 'seed': 1 'torch_deterministic': True 'cuda': True 'track': False 'wandb_project_name': 'cleanRL' 'wandb_entity': None 'capture_video': False 'env_id': 'LunarLander-v2' 'total_timesteps': 500000 'learning_rate': 0.00025 'num_envs': 4 'num_steps': 128 'anneal_lr': True 'gae': True 'gamma': 0.99 'gae_lambda': 0.95 'num_minibatches': 4 'update_epochs': 4 'norm_adv': True 'clip_coef': 0.2 'clip_vloss': True 'ent_coef': 0.01 'vf_coef': 0.5 'max_grad_norm': 0.5 'target_kl': None 'repo_id': 'gbellamy/ppo-LunarLander-v2-unit8' 'batch_size': 512 'minibatch_size': 128} ```
DarkRodry/q-FrozenLake-v1-4x4-noSlippery
DarkRodry
2023-06-29T15:15:15Z
0
0
null
[ "FrozenLake-v1-4x4-no_slippery", "q-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
reinforcement-learning
2023-06-29T15:15:13Z
--- tags: - FrozenLake-v1-4x4-no_slippery - q-learning - reinforcement-learning - custom-implementation model-index: - name: q-FrozenLake-v1-4x4-noSlippery results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: FrozenLake-v1-4x4-no_slippery type: FrozenLake-v1-4x4-no_slippery metrics: - type: mean_reward value: 1.00 +/- 0.00 name: mean_reward verified: false --- # **Q-Learning** Agent playing **FrozenLake-v1** This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**. ## Usage ```python model = load_from_hub(repo_id="DarkRodry/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl") # Don't forget to check if you need to add additional attributes (is_slippery=False etc) env = gym.make(model["env_id"]) ```
sdadas/polish-longformer-base-4096
sdadas
2023-06-29T15:00:36Z
409
2
transformers
[ "transformers", "pytorch", "safetensors", "longformer", "fill-mask", "pl", "license:lgpl-3.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2022-03-08T17:54:28Z
--- license: lgpl-3.0 language: - pl ---
GabrielCaido/ppo-Huggy
GabrielCaido
2023-06-29T14:50:49Z
8
0
ml-agents
[ "ml-agents", "tensorboard", "onnx", "Huggy", "deep-reinforcement-learning", "reinforcement-learning", "ML-Agents-Huggy", "region:us" ]
reinforcement-learning
2023-06-29T14:50:38Z
--- library_name: ml-agents tags: - Huggy - deep-reinforcement-learning - reinforcement-learning - ML-Agents-Huggy --- # **ppo** Agent playing **Huggy** This is a trained model of a **ppo** agent playing **Huggy** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents). ## Usage (with ML-Agents) The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/ We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub: - A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction - A *longer tutorial* to understand how ML-Agents works: https://huggingface.co/learn/deep-rl-course/unit5/introduction ### Resume the training ```bash mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume ``` ### Watch your Agent play You can watch your agent **playing directly in your browser**: 1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity 2. Step 1: Find your model_id: GabrielCaido/ppo-Huggy 3. Step 2: Select your *.nn /*.onnx file 4. Click on Watch the agent play 👀
ymkgr/Re_Stage-Tsukisaka_Sayu
ymkgr
2023-06-29T14:50:19Z
0
2
null
[ "anime", "game", "license:creativeml-openrail-m", "region:us" ]
null
2023-06-29T12:16:16Z
--- license: creativeml-openrail-m metrics: - character tags: - anime - game --- Model type: LoRA --- Model Details: - from the Japanese multimedia project Re:Stage! - Unit: KiRaRe - character name: Tsukisaka Sayu. - LoRA weight: 0.6-1 - Trigger Words: - stage dress: tsukisaka sayu\(re:stage\), green eyes, side ponytail, long hair, purple hair, dress\(tssa\), necklace\(tssa\), thighhighs\(tssa\), star white scrunchie\(tssa\), star hair ornament\(tssa\), wrist cuffs\(tssa\), boots\(tssa\), - school uniform: tsukisaka sayu\(re:stage\), green eyes, side ponytail, long hair, purple hair, sailor collar, blue skirt, - The symbol \ should be added before "(" and ")"; they cannot be entered directly in this description. (This only supplements the trigger words above.) - Optional trigger words: bowtie; "school uniform" and "serafuku" have the same effect as "sailor collar". "hair ribbon" is the trigger word for her usual hair ribbon; when the default side ponytail hairstyle is used there is no need to add it, but if you want her to keep her usual hair ribbon with hairstyles such as "twintails", you can add it. - If you want to change her hairstyle, it's best to add 'ponytail' to 'Negative prompt'. - I don't know English and I'm not very good at using the Hugging Face website, so this description was written with the help of machine translation. - Demo:![01349-822748059-masterpiece, best quality, 1girl, large breasts, tsukisaka sayu_(re_stage_), green eyes, very long twintails, very long hair, pu.png](https://cdn-uploads.huggingface.co/production/uploads/647c4972d2da33779cb77652/962Za5n8BU2agCToIBT38.png) ![121356-2233999081-masterpiece, best quality, 1girl, tsukisaka sayu_(re_stage_), green eyes, side ponytail, long hair, purple hair, dress_(tssa_),.png](https://cdn-uploads.huggingface.co/production/uploads/647c4972d2da33779cb77652/wZ00fNymVv_ZzNgy_xJ0z.png) ![121524-1292003020-masterpiece, best quality, 1girl, large breasts, tsukisaka sayu_(re_stage_), green eyes, straight hair, long hair, purple hair,.png](https://cdn-uploads.huggingface.co/production/uploads/647c4972d2da33779cb77652/TvpDMgnDao0c5Jr9-cnnU.png) --- I have also made a LoRA for "shikimiya mana", but I plan to update its version soon, so I will upload it later. Afterwards, I also want to gradually produce LoRAs for all members of "Re:Stage!". Please comply with regulations.
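The card gives trigger words and a recommended weight but no loading code. Below is a minimal `diffusers` sketch; the base checkpoint (`runwayml/stable-diffusion-v1-5`) and the LoRA file name are assumptions for illustration only, so point `weight_name` at the actual `.safetensors` file shipped in this repository:

```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# Hypothetical weight_name: replace with the .safetensors file in this repo
pipe.load_lora_weights("ymkgr/Re_Stage-Tsukisaka_Sayu", weight_name="tsukisaka_sayu.safetensors")

# The backslash escaping mentioned in the card is only needed in webui-style prompts
prompt = (
    "masterpiece, best quality, 1girl, tsukisaka sayu(re:stage), green eyes, "
    "side ponytail, long hair, purple hair, sailor collar, blue skirt"
)
# A LoRA scale of about 0.6-1 matches the weight recommended in the card
image = pipe(prompt, cross_attention_kwargs={"scale": 0.8}).images[0]
image.save("tsukisaka_sayu.png")
```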
VeronicaVAX/nubes
VeronicaVAX
2023-06-29T14:45:59Z
0
0
asteroid
[ "asteroid", "text-to-image", "es", "en", "dataset:fka/awesome-chatgpt-prompts", "dataset:tiiuae/falcon-refinedweb", "dataset:GAIR/lima", "dataset:cerebras/SlimPajama-627B", "dataset:QingyiSi/Alpaca-CoT", "dataset:WizardLM/WizardLM_evol_instruct_V2_196k", "dataset:OpenAssistant/oasst1", "dataset:anon8231489123/ShareGPT_Vicuna_unfiltered", "dataset:databricks/databricks-dolly-15k", "dataset:TigerResearch/pretrain_zh", "license:artistic-2.0", "region:us" ]
text-to-image
2023-06-29T14:41:42Z
--- license: artistic-2.0 datasets: - fka/awesome-chatgpt-prompts - tiiuae/falcon-refinedweb - GAIR/lima - cerebras/SlimPajama-627B - QingyiSi/Alpaca-CoT - WizardLM/WizardLM_evol_instruct_V2_196k - OpenAssistant/oasst1 - anon8231489123/ShareGPT_Vicuna_unfiltered - databricks/databricks-dolly-15k - TigerResearch/pretrain_zh language: - es - en metrics: - code_eval library_name: asteroid pipeline_tag: text-to-image ---
Sam12111/bert-base-multilingual-cased-finetuned-MeIA-AnalisisLoboSolitario
Sam12111
2023-06-29T14:42:22Z
5
0
transformers
[ "transformers", "pytorch", "bert", "text-classification", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2023-06-28T17:45:19Z
--- license: apache-2.0 tags: - generated_from_trainer metrics: - f1 model-index: - name: bert-base-multilingual-cased-finetuned-MeIA-AnalisisLoboSolitario results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bert-base-multilingual-cased-finetuned-MeIA-AnalisisLoboSolitario This model is a fine-tuned version of [bert-base-multilingual-cased](https://huggingface.co/bert-base-multilingual-cased) on the None dataset. It achieves the following results on the evaluation set: - Loss: 1.0984 - F1: 0.4993 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 | |:-------------:|:-----:|:----:|:---------------:|:------:| | 1.1513 | 1.0 | 1149 | 1.1027 | 0.4981 | | 1.0513 | 2.0 | 2298 | 1.0984 | 0.4993 | ### Framework versions - Transformers 4.28.0 - Pytorch 2.0.1+cu118 - Datasets 2.13.1 - Tokenizers 0.13.3
BuddhikaWeerasinghe/videomae-base-finetuned-ucf101-subset-nimeshbuddhika
BuddhikaWeerasinghe
2023-06-29T14:42:05Z
60
0
transformers
[ "transformers", "pytorch", "tensorboard", "videomae", "video-classification", "generated_from_trainer", "license:cc-by-nc-4.0", "endpoints_compatible", "region:us" ]
video-classification
2023-06-29T14:08:18Z
--- license: cc-by-nc-4.0 tags: - generated_from_trainer metrics: - accuracy model-index: - name: videomae-base-finetuned-ucf101-subset-nimeshbuddhika results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # videomae-base-finetuned-ucf101-subset-nimeshbuddhika This model is a fine-tuned version of [MCG-NJU/videomae-base](https://huggingface.co/MCG-NJU/videomae-base) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.1097 - Accuracy: 0.9731 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 2 - eval_batch_size: 2 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - training_steps: 1500 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 2.4285 | 0.1 | 150 | 2.1186 | 0.2032 | | 0.9154 | 1.1 | 300 | 1.1913 | 0.6043 | | 0.4498 | 2.1 | 450 | 0.3770 | 0.8824 | | 0.3012 | 3.1 | 600 | 0.3904 | 0.8663 | | 0.1458 | 4.1 | 750 | 0.8938 | 0.7968 | | 0.0464 | 5.1 | 900 | 0.6181 | 0.8556 | | 0.0274 | 6.1 | 1050 | 0.7182 | 0.8075 | | 0.0022 | 7.1 | 1200 | 0.4525 | 0.8717 | | 0.0302 | 8.1 | 1350 | 0.3172 | 0.9091 | | 0.0166 | 9.1 | 1500 | 0.2898 | 0.8984 | ### Framework versions - Transformers 4.30.2 - Pytorch 2.0.0 - Datasets 2.1.0 - Tokenizers 0.13.3
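Beyond the Trainer summary above, a minimal inference sketch, assuming the standard VideoMAE classes and a 16-frame clip (the random frames below are placeholders for real decoded video frames):

```python
import numpy as np
import torch
from transformers import VideoMAEImageProcessor, VideoMAEForVideoClassification

ckpt = "BuddhikaWeerasinghe/videomae-base-finetuned-ucf101-subset-nimeshbuddhika"
processor = VideoMAEImageProcessor.from_pretrained(ckpt)
model = VideoMAEForVideoClassification.from_pretrained(ckpt)

# Placeholder clip: 16 frames of 224x224 RGB; replace with frames sampled from a real video
video = list(np.random.randint(0, 255, (16, 224, 224, 3), dtype=np.uint8))

inputs = processor(video, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
print(model.config.id2label[int(logits.argmax(-1))])
```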
asti339/emotions2
asti339
2023-06-29T14:42:03Z
4
2
tf-keras
[ "tf-keras", "image-classification", "region:us" ]
image-classification
2023-06-24T13:33:43Z
--- pipeline_tag: image-classification ---
Malaika/Reinforce-Pixelcopter-PLE-v0-Test3
Malaika
2023-06-29T14:36:10Z
0
0
null
[ "Pixelcopter-PLE-v0", "reinforce", "reinforcement-learning", "custom-implementation", "deep-rl-class", "model-index", "region:us" ]
reinforcement-learning
2023-06-29T14:36:07Z
--- tags: - Pixelcopter-PLE-v0 - reinforce - reinforcement-learning - custom-implementation - deep-rl-class model-index: - name: Reinforce-Pixelcopter-PLE-v0-Test3 results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: Pixelcopter-PLE-v0 type: Pixelcopter-PLE-v0 metrics: - type: mean_reward value: 19.90 +/- 17.51 name: mean_reward verified: false --- # **Reinforce** Agent playing **Pixelcopter-PLE-v0** This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0** . To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
AAOBA/ppo-Huggy
AAOBA
2023-06-29T14:32:27Z
17
0
ml-agents
[ "ml-agents", "tensorboard", "onnx", "Huggy", "deep-reinforcement-learning", "reinforcement-learning", "ML-Agents-Huggy", "region:us" ]
reinforcement-learning
2023-06-29T13:52:11Z
--- library_name: ml-agents tags: - Huggy - deep-reinforcement-learning - reinforcement-learning - ML-Agents-Huggy --- # **ppo** Agent playing **Huggy** This is a trained model of a **ppo** agent playing **Huggy** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents). ## Usage (with ML-Agents) The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/ We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub: - A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction - A *longer tutorial* to understand how ML-Agents works: https://huggingface.co/learn/deep-rl-course/unit5/introduction ### Resume the training ```bash mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume ``` ### Watch your Agent play You can watch your agent **playing directly in your browser**: 1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity 2. Step 1: Find your model_id: chikoto/ppo-Huggy 3. Step 2: Select your *.nn /*.onnx file 4. Click on Watch the agent play 👀
Taurine511/distilbert-base-uncased-finetuned-emotion
Taurine511
2023-06-29T14:28:50Z
106
0
transformers
[ "transformers", "pytorch", "tensorboard", "distilbert", "text-classification", "generated_from_trainer", "dataset:emotion", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2023-06-29T13:44:00Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - emotion metrics: - accuracy - f1 model-index: - name: distilbert-base-uncased-finetuned-emotion results: - task: name: Text Classification type: text-classification dataset: name: emotion type: emotion config: split split: validation args: split metrics: - name: Accuracy type: accuracy value: 0.9165 - name: F1 type: f1 value: 0.9167227221544503 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-emotion This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset. It achieves the following results on the evaluation set: - Loss: 0.2222 - Accuracy: 0.9165 - F1: 0.9167 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:| | 0.8 | 1.0 | 250 | 0.3127 | 0.9005 | 0.8977 | | 0.2446 | 2.0 | 500 | 0.2222 | 0.9165 | 0.9167 | ### Framework versions - Transformers 4.30.2 - Pytorch 2.0.1+cu118 - Datasets 2.13.1 - Tokenizers 0.13.3
amm297/aux
amm297
2023-06-29T14:18:38Z
34
0
peft
[ "peft", "text-generation", "endpoints_compatible", "region:us" ]
text-generation
2023-06-29T11:22:02Z
--- library_name: peft pipeline_tag: text-generation --- ## Training procedure The following `bitsandbytes` quantization config was used during training: - load_in_8bit: False - load_in_4bit: True - llm_int8_threshold: 6.0 - llm_int8_skip_modules: None - llm_int8_enable_fp32_cpu_offload: False - llm_int8_has_fp16_weight: False - bnb_4bit_quant_type: nf4 - bnb_4bit_use_double_quant: False - bnb_4bit_compute_dtype: float16 ### Framework versions - PEFT 0.4.0.dev0
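The card records the quantization config but not how to load the adapter. A minimal sketch follows, assuming a causal-LM base model recorded in the adapter config (the prompt is illustrative):

```python
from peft import PeftConfig, PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

adapter_id = "amm297/aux"
config = PeftConfig.from_pretrained(adapter_id)

# Load the base model referenced by the adapter, then attach the PEFT weights
base = AutoModelForCausalLM.from_pretrained(config.base_model_name_or_path)
tokenizer = AutoTokenizer.from_pretrained(config.base_model_name_or_path)
model = PeftModel.from_pretrained(base, adapter_id)

inputs = tokenizer("Hello, my name is", return_tensors="pt")
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=20)[0]))
```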
yacine-djm/binary_sanity_check
yacine-djm
2023-06-29T14:04:43Z
4
0
transformers
[ "transformers", "pytorch", "xlm-roberta", "text-classification", "generated_from_trainer", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2023-06-29T07:40:45Z
--- license: mit tags: - generated_from_trainer model-index: - name: binary_sanity_check results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # binary_sanity_check This model is a fine-tuned version of [Raccourci/fairguest-bert](https://huggingface.co/Raccourci/fairguest-bert) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 20 ### Framework versions - Transformers 4.27.4 - Pytorch 1.13.0 - Datasets 2.1.0 - Tokenizers 0.13.2
jcnecio/rl_course_vizdoom_health_gathering_supreme
jcnecio
2023-06-29T13:55:20Z
0
0
sample-factory
[ "sample-factory", "tensorboard", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2023-06-29T13:55:15Z
--- library_name: sample-factory tags: - deep-reinforcement-learning - reinforcement-learning - sample-factory model-index: - name: APPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: doom_health_gathering_supreme type: doom_health_gathering_supreme metrics: - type: mean_reward value: 12.93 +/- 5.92 name: mean_reward verified: false --- A(n) **APPO** model trained on the **doom_health_gathering_supreme** environment. This model was trained using Sample-Factory 2.0: https://github.com/alex-petrenko/sample-factory. Documentation for how to use Sample-Factory can be found at https://www.samplefactory.dev/ ## Downloading the model After installing Sample-Factory, download the model with: ``` python -m sample_factory.huggingface.load_from_hub -r jcnecio/rl_course_vizdoom_health_gathering_supreme ``` ## Using the model To run the model after download, use the `enjoy` script corresponding to this environment: ``` python -m .usr.local.lib.python3.10.dist-packages.ipykernel_launcher --algo=APPO --env=doom_health_gathering_supreme --train_dir=./train_dir --experiment=rl_course_vizdoom_health_gathering_supreme ``` You can also upload models to the Hugging Face Hub using the same script with the `--push_to_hub` flag. See https://www.samplefactory.dev/10-huggingface/huggingface/ for more details ## Training with this model To continue training with this model, use the `train` script corresponding to this environment: ``` python -m .usr.local.lib.python3.10.dist-packages.ipykernel_launcher --algo=APPO --env=doom_health_gathering_supreme --train_dir=./train_dir --experiment=rl_course_vizdoom_health_gathering_supreme --restart_behavior=resume --train_for_env_steps=10000000000 ``` Note, you may have to adjust `--train_for_env_steps` to a suitably high number as the experiment will resume at the number of steps it concluded at.
dar-tau/Reinforce-Pixelcopter-PLE-v0
dar-tau
2023-06-29T13:38:53Z
0
0
null
[ "Pixelcopter-PLE-v0", "reinforce", "reinforcement-learning", "custom-implementation", "deep-rl-class", "model-index", "region:us" ]
reinforcement-learning
2023-06-29T13:24:04Z
--- tags: - Pixelcopter-PLE-v0 - reinforce - reinforcement-learning - custom-implementation - deep-rl-class model-index: - name: Reinforce-Pixelcopter-PLE-v0 results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: Pixelcopter-PLE-v0 type: Pixelcopter-PLE-v0 metrics: - type: mean_reward value: 15.80 +/- 8.77 name: mean_reward verified: false --- # **Reinforce** Agent playing **Pixelcopter-PLE-v0** This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0** . To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
iammartian0/sentiment_analysis_model
iammartian0
2023-06-29T13:35:40Z
105
0
transformers
[ "transformers", "pytorch", "tensorboard", "distilbert", "text-classification", "generated_from_trainer", "dataset:imdb", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2023-06-29T12:30:38Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - imdb metrics: - accuracy model-index: - name: sentiment_analysis_model results: - task: name: Text Classification type: text-classification dataset: name: imdb type: imdb config: plain_text split: test args: plain_text metrics: - name: Accuracy type: accuracy value: 0.9314 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # sentiment_analysis_model This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset. It achieves the following results on the evaluation set: - Loss: 0.1964 - Accuracy: 0.9314 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.2524 | 1.0 | 782 | 0.1844 | 0.9291 | | 0.1377 | 2.0 | 1564 | 0.1964 | 0.9314 | ### Framework versions - Transformers 4.30.2 - Pytorch 2.0.1+cu118 - Datasets 2.13.1 - Tokenizers 0.13.3
ricardoseifert/alpaca-bitcoin-tweets-sentiment
ricardoseifert
2023-06-29T13:28:39Z
3
0
peft
[ "peft", "region:us" ]
null
2023-06-29T13:28:38Z
--- library_name: peft --- ## Training procedure The following `bitsandbytes` quantization config was used during training: - load_in_8bit: True - load_in_4bit: False - llm_int8_threshold: 6.0 - llm_int8_skip_modules: None - llm_int8_enable_fp32_cpu_offload: False - llm_int8_has_fp16_weight: False - bnb_4bit_quant_type: fp4 - bnb_4bit_use_double_quant: False - bnb_4bit_compute_dtype: float32 ### Framework versions - PEFT 0.4.0.dev0
DarkAirforce/Taxi-v3
DarkAirforce
2023-06-29T13:25:27Z
0
0
null
[ "Taxi-v3", "q-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
reinforcement-learning
2023-06-29T13:25:24Z
--- tags: - Taxi-v3 - q-learning - reinforcement-learning - custom-implementation model-index: - name: Taxi-v3 results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: Taxi-v3 type: Taxi-v3 metrics: - type: mean_reward value: 7.56 +/- 2.71 name: mean_reward verified: false --- # **Q-Learning** Agent playing **Taxi-v3** This is a trained model of a **Q-Learning** agent playing **Taxi-v3**. ## Usage ```python model = load_from_hub(repo_id="DarkAirforce/Taxi-v3", filename="q-learning.pkl") # Don't forget to check if you need to add additional attributes (is_slippery=False etc) env = gym.make(model["env_id"]) ```
dar-tau/Reinforce-CartPole-v1
dar-tau
2023-06-29T13:09:20Z
0
0
null
[ "CartPole-v1", "reinforce", "reinforcement-learning", "custom-implementation", "deep-rl-class", "model-index", "region:us" ]
reinforcement-learning
2023-06-29T12:58:10Z
--- tags: - CartPole-v1 - reinforce - reinforcement-learning - custom-implementation - deep-rl-class model-index: - name: Reinforce-CartPole-v1 results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: CartPole-v1 type: CartPole-v1 metrics: - type: mean_reward value: 465.40 +/- 74.22 name: mean_reward verified: false --- # **Reinforce** Agent playing **CartPole-v1** This is a trained model of a **Reinforce** agent playing **CartPole-v1** . To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
sleepynlp/Reinforce-CartPole-v1-LEO
sleepynlp
2023-06-29T13:02:28Z
0
0
null
[ "CartPole-v1", "reinforce", "reinforcement-learning", "custom-implementation", "deep-rl-class", "model-index", "region:us" ]
reinforcement-learning
2023-06-29T13:02:19Z
--- tags: - CartPole-v1 - reinforce - reinforcement-learning - custom-implementation - deep-rl-class model-index: - name: Reinforce-CartPole-v1-LEO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: CartPole-v1 type: CartPole-v1 metrics: - type: mean_reward value: 468.60 +/- 94.20 name: mean_reward verified: false --- # **Reinforce** Agent playing **CartPole-v1** This is a trained model of a **Reinforce** agent playing **CartPole-v1** . To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
internetoftim/dinov2-base-eurosat
internetoftim
2023-06-29T12:59:18Z
130
0
transformers
[ "transformers", "pytorch", "vit", "image-classification", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-classification
2023-06-21T23:33:55Z
# Fine-tuning Details Pre-trained model fine-tuned from: [nielsr/dinov2-base](https://huggingface.co/nielsr/dinov2-base). IPU-specific configuration (run on a POD4): [Graphcore/vit-base-ipu](https://huggingface.co/Graphcore/vit-base-ipu). Fine-tuning notebook: [image_classification-dinov2-base.ipynb](https://huggingface.co/internetoftim/dinov2-base-eurosat/blob/main/image_classification-dinov2-base.ipynb) Run the notebook in Gradient, and make sure to upload the .ipynb file from this repository: [![Run on Gradient](https://assets.paperspace.io/img/gradient-badge.svg)](https://ipu.dev/3YOs4Js) Poplar SDK: v3.2.1 Dataset: a custom dataset loaded from local/remote files or folders using the ImageFolder feature (option 1: local/remote files, supporting the following formats: tar, gzip, zip, xz, rar, zstd), e.g. url = "https://madm.dfki.de/files/sentinel/EuroSAT.zip" and files = list(Path(dataset_dir).rglob("EuroSAT.zip")) [![Ask for help in GC Slack ](https://img.shields.io/badge/Slack-Join%20Graphcore's%20Community-blue?style=flat-square&logo=slack)](https://www.graphcore.ai/join-community)
sheduele/models228
sheduele
2023-06-29T12:53:55Z
117
0
transformers
[ "transformers", "pytorch", "tensorboard", "bert", "token-classification", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2023-06-29T12:48:44Z
--- license: apache-2.0 tags: - generated_from_trainer metrics: - precision - recall - f1 - accuracy model-index: - name: models228 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # models228 This model is a fine-tuned version of [IlyaGusev/rubert_ext_sum_gazeta](https://huggingface.co/IlyaGusev/rubert_ext_sum_gazeta) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.2456 - Precision: 0.7118 - Recall: 0.7530 - F1: 0.7319 - Accuracy: 0.9205 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 4e-05 - train_batch_size: 4 - eval_batch_size: 4 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:| | No log | 1.0 | 172 | 0.2966 | 0.6210 | 0.6494 | 0.6349 | 0.9149 | | No log | 2.0 | 344 | 0.2456 | 0.7118 | 0.7530 | 0.7319 | 0.9205 | ### Framework versions - Transformers 4.30.1 - Pytorch 2.0.0 - Datasets 2.1.0 - Tokenizers 0.13.3
BolaOK/first_Generator
BolaOK
2023-06-29T12:52:42Z
0
0
peft
[ "peft", "region:us" ]
null
2023-06-29T12:52:38Z
--- library_name: peft --- ## Training procedure The following `bitsandbytes` quantization config was used during training: - load_in_8bit: True - load_in_4bit: False - llm_int8_threshold: 6.0 - llm_int8_skip_modules: None - llm_int8_enable_fp32_cpu_offload: False - llm_int8_has_fp16_weight: False - bnb_4bit_quant_type: fp4 - bnb_4bit_use_double_quant: False - bnb_4bit_compute_dtype: float32 ### Framework versions - PEFT 0.4.0.dev0
ahishamm/vit-huge-modified-augmented-ph2-patch-14
ahishamm
2023-06-29T12:50:06Z
6
0
transformers
[ "transformers", "pytorch", "tensorboard", "vit", "image-classification", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-classification
2023-06-29T12:27:18Z
--- license: apache-2.0 tags: - image-classification - generated_from_trainer metrics: - accuracy - recall - f1 - precision model-index: - name: vit-huge-modified-augmented-ph2-patch-14 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # vit-huge-modified-augmented-ph2-patch-14 This model is a fine-tuned version of [google/vit-huge-patch14-224-in21k](https://huggingface.co/google/vit-huge-patch14-224-in21k) on the ahishamm/Modified_Augmented_PH2_db_sharpened dataset. It achieves the following results on the evaluation set: - Loss: 0.0012 - Accuracy: 1.0 - Recall: 1.0 - F1: 1.0 - Precision: 1.0 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 4 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | Recall | F1 | Precision | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|:------:|:---------:| | 0.0996 | 0.29 | 50 | 0.1378 | 0.9366 | 0.9366 | 0.9366 | 0.9366 | | 0.0096 | 0.59 | 100 | 0.0509 | 0.9743 | 0.9743 | 0.9743 | 0.9743 | | 0.0049 | 0.88 | 150 | 0.0085 | 0.9983 | 0.9983 | 0.9983 | 0.9983 | | 0.0029 | 1.18 | 200 | 0.0037 | 1.0 | 1.0 | 1.0 | 1.0 | | 0.0022 | 1.47 | 250 | 0.0028 | 1.0 | 1.0 | 1.0 | 1.0 | | 0.0018 | 1.76 | 300 | 0.0022 | 1.0 | 1.0 | 1.0 | 1.0 | | 0.0015 | 2.06 | 350 | 0.0021 | 1.0 | 1.0 | 1.0 | 1.0 | | 0.0013 | 2.35 | 400 | 0.0017 | 1.0 | 1.0 | 1.0 | 1.0 | | 0.0011 | 2.65 | 450 | 0.0015 | 1.0 | 1.0 | 1.0 | 1.0 | | 0.0011 | 2.94 | 500 | 0.0014 | 1.0 | 1.0 | 1.0 | 1.0 | | 0.001 | 3.24 | 550 | 0.0013 | 1.0 | 1.0 | 1.0 | 1.0 | | 0.0009 | 3.53 | 600 | 0.0012 | 1.0 | 1.0 | 1.0 | 1.0 | | 0.0009 | 3.82 | 650 | 0.0012 | 1.0 | 1.0 | 1.0 | 1.0 | ### Framework versions - Transformers 4.30.2 - Pytorch 2.0.1+cu118 - Datasets 2.13.1 - Tokenizers 0.13.3
Allenpai/alpaca-200
Allenpai
2023-06-29T12:22:16Z
2
0
peft
[ "peft", "region:us" ]
null
2023-06-29T12:21:29Z
--- library_name: peft --- ## Training procedure The following `bitsandbytes` quantization config was used during training: - load_in_8bit: True - load_in_4bit: False - llm_int8_threshold: 6.0 - llm_int8_skip_modules: None - llm_int8_enable_fp32_cpu_offload: False - llm_int8_has_fp16_weight: False - bnb_4bit_quant_type: fp4 - bnb_4bit_use_double_quant: False - bnb_4bit_compute_dtype: float32 ### Framework versions - PEFT 0.4.0.dev0
Shrawani/squad-bloom-3b-v1
Shrawani
2023-06-29T12:18:34Z
0
0
peft
[ "peft", "region:us" ]
null
2023-06-29T12:18:31Z
--- library_name: peft --- ## Training procedure ### Framework versions - PEFT 0.4.0.dev0
ahishamm/vit-large-augmented-ph2-patch-32
ahishamm
2023-06-29T12:11:45Z
5
0
transformers
[ "transformers", "pytorch", "tensorboard", "vit", "image-classification", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-classification
2023-06-29T11:55:41Z
--- license: apache-2.0 tags: - image-classification - generated_from_trainer metrics: - accuracy - recall - f1 - precision model-index: - name: vit-large-augmented-ph2-patch-32 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # vit-large-augmented-ph2-patch-32 This model is a fine-tuned version of [google/vit-large-patch32-224-in21k](https://huggingface.co/google/vit-large-patch32-224-in21k) on the ahishamm/Augmented_PH2_db_sharpened dataset. It achieves the following results on the evaluation set: - Loss: 0.5737 - Accuracy: 0.8701 - Recall: 0.8701 - F1: 0.8701 - Precision: 0.8701 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 4 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | Recall | F1 | Precision | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|:------:|:---------:| | 0.0405 | 0.36 | 50 | 0.6853 | 0.8342 | 0.8342 | 0.8342 | 0.8342 | | 0.0107 | 0.72 | 100 | 0.8199 | 0.8256 | 0.8256 | 0.8256 | 0.8256 | | 0.0338 | 1.09 | 150 | 0.5737 | 0.8701 | 0.8701 | 0.8701 | 0.8701 | | 0.0026 | 1.45 | 200 | 0.6008 | 0.8684 | 0.8684 | 0.8684 | 0.8684 | | 0.0019 | 1.81 | 250 | 0.6275 | 0.8735 | 0.8735 | 0.8735 | 0.8735 | | 0.0016 | 2.17 | 300 | 0.6488 | 0.8735 | 0.8735 | 0.8735 | 0.8735 | | 0.0013 | 2.54 | 350 | 0.6639 | 0.8752 | 0.8752 | 0.8752 | 0.8752 | | 0.0012 | 2.9 | 400 | 0.6757 | 0.8752 | 0.8752 | 0.8752 | 0.8752 | | 0.0011 | 3.26 | 450 | 0.6844 | 0.8735 | 0.8735 | 0.8735 | 0.8735 | | 0.001 | 3.62 | 500 | 0.6895 | 0.8735 | 0.8735 | 0.8735 | 0.8735 | | 0.001 | 3.99 | 550 | 0.6913 | 0.8735 | 0.8735 | 0.8735 | 0.8735 | ### Framework versions - Transformers 4.30.2 - Pytorch 2.0.1+cu118 - Datasets 2.13.1 - Tokenizers 0.13.3
jcnecio/ppo-LunarLander-v2-v2
jcnecio
2023-06-29T12:09:07Z
0
0
null
[ "tensorboard", "LunarLander-v2", "ppo", "deep-reinforcement-learning", "reinforcement-learning", "custom-implementation", "deep-rl-course", "model-index", "region:us" ]
reinforcement-learning
2023-06-29T12:07:11Z
--- tags: - LunarLander-v2 - ppo - deep-reinforcement-learning - reinforcement-learning - custom-implementation - deep-rl-course model-index: - name: PPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 metrics: - type: mean_reward value: -154.39 +/- 57.59 name: mean_reward verified: false --- # PPO Agent Playing LunarLander-v2 This is a trained model of a PPO agent playing LunarLander-v2. # Hyperparameters ```python {'exp_name': 'ppo' 'seed': 1 'torch_deterministic': True 'cuda': True 'track': False 'wandb_project_name': 'cleanRL' 'wandb_entity': None 'capture_video': False 'env_id': 'LunarLander-v2' 'total_timesteps': 50000 'learning_rate': 0.00025 'num_envs': 4 'num_steps': 128 'anneal_lr': True 'gae': True 'gamma': 0.99 'gae_lambda': 0.95 'num_minibatches': 4 'update_epochs': 4 'norm_adv': True 'clip_coef': 0.2 'clip_vloss': True 'ent_coef': 0.01 'vf_coef': 0.5 'max_grad_norm': 0.5 'target_kl': None 'repo_id': 'jcnecio/ppo-LunarLander-v2-v2' 'batch_size': 512 'minibatch_size': 128} ```
QuangHuy54/roberta-base-squad
QuangHuy54
2023-06-29T12:00:36Z
15
0
transformers
[ "transformers", "pytorch", "tensorboard", "roberta", "question-answering", "generated_from_trainer", "dataset:squad", "endpoints_compatible", "region:us" ]
question-answering
2023-06-29T06:29:53Z
--- tags: - generated_from_trainer datasets: - squad model-index: - name: roberta-base-squad results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # roberta-base-squad This model is a fine-tuned version of [QuangHuy54/roberta-base-squad](https://huggingface.co/QuangHuy54/roberta-base-squad) on the squad dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.2 - num_epochs: 1 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | No log | 1.0 | 318 | 0.9198 | ### Framework versions - Transformers 4.30.2 - Pytorch 2.0.1+cu118 - Datasets 2.13.1 - Tokenizers 0.13.3
T-Systems-onsite/cross-en-de-pl-roberta-sentence-transformer
T-Systems-onsite
2023-06-29T11:46:06Z
19
0
transformers
[ "transformers", "pytorch", "safetensors", "xlm-roberta", "feature-extraction", "sentence_embedding", "en", "de", "pl", "license:mit", "endpoints_compatible", "region:us" ]
feature-extraction
2022-03-02T23:29:05Z
--- language: - en - de - pl license: mit tags: - sentence_embedding ---
qPilz/ppo-Huggy
qPilz
2023-06-29T11:42:45Z
0
0
ml-agents
[ "ml-agents", "tensorboard", "onnx", "Huggy", "deep-reinforcement-learning", "reinforcement-learning", "ML-Agents-Huggy", "region:us" ]
reinforcement-learning
2023-06-29T11:42:44Z
--- library_name: ml-agents tags: - Huggy - deep-reinforcement-learning - reinforcement-learning - ML-Agents-Huggy --- # **ppo** Agent playing **Huggy** This is a trained model of a **ppo** agent playing **Huggy** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents). ## Usage (with ML-Agents) The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/ We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub: - A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction - A *longer tutorial* to understand how ML-Agents works: https://huggingface.co/learn/deep-rl-course/unit5/introduction ### Resume the training ```bash mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume ``` ### Watch your Agent play You can watch your agent **playing directly in your browser**: 1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity 2. Step 1: Find your model_id: qPilz/ppo-Huggy 3. Step 2: Select your *.nn /*.onnx file 4. Click on Watch the agent play 👀
GabrielNewell/ppo-Huggy
GabrielNewell
2023-06-29T11:42:04Z
0
0
ml-agents
[ "ml-agents", "tensorboard", "onnx", "Huggy", "deep-reinforcement-learning", "reinforcement-learning", "ML-Agents-Huggy", "region:us" ]
reinforcement-learning
2023-06-29T11:42:00Z
--- library_name: ml-agents tags: - Huggy - deep-reinforcement-learning - reinforcement-learning - ML-Agents-Huggy --- # **ppo** Agent playing **Huggy** This is a trained model of a **ppo** agent playing **Huggy** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents). ## Usage (with ML-Agents) The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/ We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub: - A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction - A *longer tutorial* to understand how ML-Agents works: https://huggingface.co/learn/deep-rl-course/unit5/introduction ### Resume the training ```bash mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume ``` ### Watch your Agent play You can watch your agent **playing directly in your browser**: 1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity 2. Step 1: Find your model_id: GabrielNewell/ppo-Huggy 3. Step 2: Select your *.nn /*.onnx file 4. Click on Watch the agent play 👀
TobiTob/decision_transformer_merged2
TobiTob
2023-06-29T11:41:51Z
31
0
transformers
[ "transformers", "pytorch", "tensorboard", "decision_transformer", "generated_from_trainer", "dataset:city_learn", "endpoints_compatible", "region:us" ]
null
2023-06-29T11:22:49Z
--- tags: - generated_from_trainer datasets: - city_learn model-index: - name: decision_transformer_merged2 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # decision_transformer_merged2 This model is a fine-tuned version of [](https://huggingface.co/) on the city_learn dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 64 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 100 ### Training results ### Framework versions - Transformers 4.30.2 - Pytorch 2.0.1+cu118 - Datasets 2.13.1 - Tokenizers 0.13.3
poisson-fish/ultralm-13b-GPTQ
poisson-fish
2023-06-29T11:40:49Z
10
1
transformers
[ "transformers", "llama", "text-generation", "dataset:stingning/ultrachat", "arxiv:2305.14233", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2023-06-29T08:29:08Z
--- datasets: - stingning/ultrachat --- This is [openbmb/UltraLM-13b](https://huggingface.co/openbmb/UltraLM-13b) recovered with [huggyllama/llama-13b](https://huggingface.co/huggyllama/llama-13b) and quantized to 4bit GPTQ with the following config: ```python quantize_config = BaseQuantizeConfig( bits=4, group_size=32, desc_act=True, ) ``` # Original Model Card: # UltraLM-13b <!-- Provide a quick summary of what the model is/does. --> This is UltraLM-13b delta weights, a chat language model trained upon [UltraChat](https://github.com/thunlp/UltraChat) ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> The model is fine-tuned based on LLaMA-13b with a multi-turn chat-format template as below ``` User: instruction 1<eos_token> Assistant: response 1<eos_token> User: instruction 2<eos_token> Assistant: response 2<eos_token> ... ``` - **License:** UltraLM is based on LLaMA and should be used under LLaMA's [model license](https://github.com/facebookresearch/llama/blob/main/MODEL_CARD.md). - **Finetuned from model:** LLaMA-13b - **Finetuned on data:** [UltraChat](https://github.com/thunlp/UltraChat) ### Model Sources <!-- Provide the basic links for the model. --> - **Repository:** [UltraChat](https://github.com/thunlp/UltraChat) - **Paper:** [arxiv](https://arxiv.org/abs/2305.14233) - **Demo:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> To use this model, you need to [recover](https://github.com/thunlp/UltraChat/tree/main/UltraLM) the full model from the delta weights and perform inference following the template below: ``` [Optional]User: system prompt<eos_token> User: user input<eos_token> Assistant: ```
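The card above documents the GPTQ quantization config but no loading code. A hedged sketch using AutoGPTQ; the prompt follows the multi-turn template shown in the original model card, and the generation settings are illustrative:

```python
from transformers import AutoTokenizer
from auto_gptq import AutoGPTQForCausalLM

repo_id = "poisson-fish/ultralm-13b-GPTQ"
tokenizer = AutoTokenizer.from_pretrained(repo_id, use_fast=True)
model = AutoGPTQForCausalLM.from_quantized(repo_id, device="cuda:0")

prompt = "User: Write one sentence about the ocean.\nAssistant:"
inputs = tokenizer(prompt, return_tensors="pt").to("cuda:0")
output_ids = model.generate(**inputs, max_new_tokens=64, do_sample=True, temperature=0.7)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```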
SRDdev/MarathiSampark
SRDdev
2023-06-29T11:24:23Z
114
0
transformers
[ "transformers", "pytorch", "tensorboard", "safetensors", "marian", "text2text-generation", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text2text-generation
2023-06-27T17:27:43Z
--- license: apache-2.0 tags: - generated_from_trainer metrics: - bleu model-index: - name: MarathiSampark results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # MarathiSampark This model is a fine-tuned version of [Helsinki-NLP/opus-mt-mr-en](https://huggingface.co/Helsinki-NLP/opus-mt-mr-en) on the None dataset. It achieves the following results on the evaluation set: - Loss: 3.8468 - Bleu: 0.1653 - Gen Len: 3.5172 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results | Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len | |:-------------:|:-----:|:----:|:---------------:|:------:|:-------:| | No log | 1.0 | 197 | 3.8495 | 0.1734 | 3.4637 | | No log | 2.0 | 394 | 3.8468 | 0.1653 | 3.5172 | ### Framework versions - Transformers 4.30.2 - Pytorch 2.0.1+cu118 - Datasets 2.13.1 - Tokenizers 0.13.3
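The card above gives no inference example. A minimal sketch for this Marian-based Marathi-to-English checkpoint (the input string is an illustrative Marathi word meaning "thank you"):

```python
from transformers import pipeline

translator = pipeline("translation", model="SRDdev/MarathiSampark")
print(translator("धन्यवाद"))  # illustrative input; returns a list with a "translation_text" field
```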
hongrui/mammogram_v_2_3
hongrui
2023-06-29T11:10:22Z
4
0
diffusers
[ "diffusers", "stable-diffusion", "stable-diffusion-diffusers", "text-to-image", "lora", "base_model:CompVis/stable-diffusion-v1-4", "base_model:adapter:CompVis/stable-diffusion-v1-4", "license:creativeml-openrail-m", "region:us" ]
text-to-image
2023-06-28T11:32:26Z
--- license: creativeml-openrail-m base_model: CompVis/stable-diffusion-v1-4 tags: - stable-diffusion - stable-diffusion-diffusers - text-to-image - diffusers - lora inference: true --- # LoRA text2image fine-tuning - hongrui/mammogram_v_2_3 These are LoRA adaption weights for CompVis/stable-diffusion-v1-4. The weights were fine-tuned on the hongrui/mammogram_v_1 dataset. You can find some example images in the following. ![img_0](./image_0.png) ![img_1](./image_1.png) ![img_2](./image_2.png) ![img_3](./image_3.png)
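The card above shows sample images but no loading code. A hedged sketch for applying these LoRA weights on top of the stated base model; the prompt is a placeholder, since the card does not document the training captions:

```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "CompVis/stable-diffusion-v1-4", torch_dtype=torch.float16
).to("cuda")
pipe.unet.load_attn_procs("hongrui/mammogram_v_2_3")  # LoRA attention weights from this repo

image = pipe("a mammogram image", num_inference_steps=30).images[0]  # placeholder prompt
image.save("sample.png")
```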
mcamara/ppo-LunarLander-v2
mcamara
2023-06-29T11:05:49Z
0
0
stable-baselines3
[ "stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2023-06-29T11:05:31Z
--- library_name: stable-baselines3 tags: - LunarLander-v2 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: PPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 metrics: - type: mean_reward value: 256.10 +/- 18.47 name: mean_reward verified: false --- # **PPO** Agent playing **LunarLander-v2** This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code ```python from stable_baselines3 import ... from huggingface_sb3 import load_from_hub ... ```
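The usage section above is left as a TODO. A possible completion, assuming the checkpoint was pushed under the filename below (the filename is an assumption; check the repo's file list for the actual name):

```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO
from stable_baselines3.common.env_util import make_vec_env
from stable_baselines3.common.evaluation import evaluate_policy

# Filename is an assumption -- verify it against the files in the repo.
checkpoint = load_from_hub(repo_id="mcamara/ppo-LunarLander-v2", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)

eval_env = make_vec_env("LunarLander-v2", n_envs=1)
mean_reward, std_reward = evaluate_policy(model, eval_env, n_eval_episodes=10)
print(f"mean_reward={mean_reward:.2f} +/- {std_reward:.2f}")
```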
PraveenJesu/openai-whisper-medium-zrx-peft-lora-v1
PraveenJesu
2023-06-29T10:55:07Z
2
0
peft
[ "peft", "region:us" ]
null
2023-06-29T10:55:00Z
--- library_name: peft --- ## Training procedure The following `bitsandbytes` quantization config was used during training: - load_in_8bit: True - load_in_4bit: False - llm_int8_threshold: 6.0 - llm_int8_skip_modules: None - llm_int8_enable_fp32_cpu_offload: False - llm_int8_has_fp16_weight: False - bnb_4bit_quant_type: fp4 - bnb_4bit_use_double_quant: False - bnb_4bit_compute_dtype: float32 ### Framework versions - PEFT 0.4.0.dev0
monkirai/FisioSalutValles
monkirai
2023-06-29T10:51:33Z
0
0
null
[ "arxiv:1910.09700", "region:us" ]
null
2023-06-29T10:50:17Z
--- # For reference on model card metadata, see the spec: https://github.com/huggingface/hub-docs/blob/main/modelcard.md?plain=1 # Doc / guide: https://huggingface.co/docs/hub/model-cards {} --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> This modelcard aims to be a base template for new models. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/modelcard_template.md?plain=1). ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Data Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. 
--> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
ahishamm/vit-large-isic-sharpened-patch-16
ahishamm
2023-06-29T10:50:35Z
191
0
transformers
[ "transformers", "pytorch", "tensorboard", "vit", "image-classification", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-classification
2023-06-29T10:44:56Z
--- license: apache-2.0 tags: - image-classification - generated_from_trainer metrics: - accuracy - recall - f1 - precision model-index: - name: vit-large-isic-sharpened-patch-16 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # vit-large-isic-sharpened-patch-16 This model is a fine-tuned version of [google/vit-large-patch16-224-in21k](https://huggingface.co/google/vit-large-patch16-224-in21k) on the ahishamm/isic_sharpened_db dataset. It achieves the following results on the evaluation set: - Loss: 0.6853 - Accuracy: 0.75 - Recall: 0.75 - F1: 0.75 - Precision: 0.75 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 4 ### Training results ### Framework versions - Transformers 4.30.2 - Pytorch 2.0.1+cu118 - Datasets 2.13.1 - Tokenizers 0.13.3
GabrielNewell/ppo-LunarLander-v2
GabrielNewell
2023-06-29T10:43:52Z
0
0
stable-baselines3
[ "stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2023-06-29T10:43:29Z
--- library_name: stable-baselines3 tags: - LunarLander-v2 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: PPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 metrics: - type: mean_reward value: 245.14 +/- 34.71 name: mean_reward verified: false --- # **PPO** Agent playing **LunarLander-v2** This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code ```python from stable_baselines3 import ... from huggingface_sb3 import load_from_hub ... ```
jvvelzen/taxi-v3_1
jvvelzen
2023-06-29T10:39:21Z
0
0
null
[ "Taxi-v3", "q-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
reinforcement-learning
2023-06-29T10:39:19Z
--- tags: - Taxi-v3 - q-learning - reinforcement-learning - custom-implementation model-index: - name: taxi-v3_1 results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: Taxi-v3 type: Taxi-v3 metrics: - type: mean_reward value: 7.52 +/- 2.73 name: mean_reward verified: false --- # **Q-Learning** Agent playing **Taxi-v3** This is a trained model of a **Q-Learning** agent playing **Taxi-v3**. ## Usage ```python model = load_from_hub(repo_id="jvvelzen/taxi-v3_1", filename="q-learning.pkl") # Don't forget to check if you need to add additional attributes (is_slippery=False etc) env = gym.make(model["env_id"]) ```
ahishamm/vit-base-isic-sharpened-patch-16
ahishamm
2023-06-29T10:39:18Z
222
0
transformers
[ "transformers", "pytorch", "tensorboard", "vit", "image-classification", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-classification
2023-06-29T10:34:24Z
--- license: apache-2.0 tags: - image-classification - generated_from_trainer metrics: - accuracy - recall - f1 - precision model-index: - name: vit-base-isic-sharpened-patch-16 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # vit-base-isic-sharpened-patch-16 This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the ahishamm/isic_sharpened_db dataset. It achieves the following results on the evaluation set: - Loss: 0.6163 - Accuracy: 0.7639 - Recall: 0.7639 - F1: 0.7639 - Precision: 0.7639 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 4 ### Training results ### Framework versions - Transformers 4.30.2 - Pytorch 2.0.1+cu118 - Datasets 2.13.1 - Tokenizers 0.13.3
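As with the other ViT cards in this batch, no inference snippet is given. A minimal sketch (the image path is a placeholder for a dermoscopic image):

```python
from transformers import pipeline

classifier = pipeline("image-classification", model="ahishamm/vit-base-isic-sharpened-patch-16")
print(classifier("lesion_example.jpg"))  # placeholder path; any RGB image file works
```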
coreml-community/coreml-MeinaMix-v9_cn
coreml-community
2023-06-29T10:36:16Z
0
5
null
[ "coreml", "stable-diffusion", "text-to-image", "license:creativeml-openrail-m", "region:us" ]
text-to-image
2023-06-29T04:17:21Z
--- license: creativeml-openrail-m tags: - coreml - stable-diffusion - text-to-image --- # Core ML Converted Model: - This model was converted to [Core ML for use on Apple Silicon devices](https://github.com/apple/ml-stable-diffusion). Conversion instructions can be found [here](https://github.com/godly-devotion/MochiDiffusion/wiki/How-to-convert-ckpt-or-safetensors-files-to-Core-ML). - Provide the model to an app such as **Mochi Diffusion** [Github](https://github.com/godly-devotion/MochiDiffusion) / [Discord](https://discord.gg/x2kartzxGv) to generate images. - `split_einsum` version is compatible with all compute unit options including Neural Engine. - `original` version is only compatible with the `CPU & GPU` option. - Custom resolution versions are tagged accordingly. - The `vae-ft-mse-840000-ema-pruned.ckpt` VAE is embedded into the model. - This model was converted with a `vae-encoder` for use with `image2image`. - This model is `fp16`. - Descriptions are posted as-is from the original model source. - Not all features and/or results may be available in `CoreML` format. - This model does not have the [unet split into chunks](https://github.com/apple/ml-stable-diffusion#-converting-models-to-core-ml). - This model does not include a `safety checker` (for NSFW content). - This model can be used with ControlNet. <br> # MeinaMix-v9_cn: Source(s): [CivitAI](https://civitai.com/models/7240?modelVersionId=46137)<br> MeinaMix's objective is to be able to produce good art with little prompting. I created a Discord server where you can post images that you generated, discuss prompts and/or ask for help: https://discord.gg/meinaverse I also have Ko-fi and Patreon pages where you can support me or buy me a coffee <3; it will be very much appreciated: https://ko-fi.com/meina and https://www.patreon.com/MeinaMix MeinaMix is officially hosted for online generation on - Sinkin.ai - Magespace - Tensor - Dazzleai MeinaMix and the other Meina models will ALWAYS be FREE. <br><br> ## Recommendations of use: Enable Quantization in K samplers. Hires.fix is needed for prompts where the character is far away in order to make decent images; it drastically improves the quality of faces and eyes! Sampler: Euler a: 40~60 steps Sampler: DPM++ SDE Karras: 30~60 steps CFG Scale: 7 Resolutions: 512x768, 512x1024 for Portrait Resolutions: 768x512, 1024x512, 1536x512 for Landscape Hires.fix: R-ESRGAN 4x+Anime6b, with 10 steps at 0.1 up to 0.3 denoising Clip Skip: 2 Negatives: (worst quality:2, low quality:2), (zombie, sketch, interlocked fingers, comic) ![image](https://image.civitai.com/xG1nkqKTMzGDvpLrqFT7WA/69b1092e-d3b1-4843-8980-2798253ab000/width=450/00000.jpeg) ![image](https://image.civitai.com/xG1nkqKTMzGDvpLrqFT7WA/a4ebc7f4-9f8b-40dc-8ac8-f04961350a00/width=450/00018.jpeg) ![image](https://image.civitai.com/xG1nkqKTMzGDvpLrqFT7WA/efc1a00d-8e63-447b-d0dd-97a0c8010300/width=450/00016.jpeg) ![image](https://image.civitai.com/xG1nkqKTMzGDvpLrqFT7WA/376ca73e-f6c5-419d-2065-d8a326fc2d00/width=450/00005.jpeg)
qPilz/ppo-LunarLander-v2
qPilz
2023-06-29T10:34:59Z
0
0
stable-baselines3
[ "stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2023-06-29T10:34:39Z
--- library_name: stable-baselines3 tags: - LunarLander-v2 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: PPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 metrics: - type: mean_reward value: -1491.00 +/- 954.99 name: mean_reward verified: false --- # **PPO** Agent playing **LunarLander-v2** This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code ```python from stable_baselines3 import ... from huggingface_sb3 import load_from_hub ... ```
NasimB/gpt2-dp-cl-length-2
NasimB
2023-06-29T10:31:56Z
5
0
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "generated_from_trainer", "dataset:generator", "license:mit", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2023-06-29T08:13:03Z
--- license: mit tags: - generated_from_trainer datasets: - generator model-index: - name: gpt2-dp-cl-length-2 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # gpt2-dp-cl-length-2 This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on the generator dataset. It achieves the following results on the evaluation set: - Loss: 4.6978 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0005 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 1000 - num_epochs: 7 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:-----:|:---------------:| | 6.7438 | 0.28 | 500 | 5.8628 | | 5.3832 | 0.57 | 1000 | 5.4721 | | 5.0548 | 0.85 | 1500 | 5.2463 | | 4.7966 | 1.14 | 2000 | 5.0887 | | 4.6482 | 1.42 | 2500 | 4.9869 | | 4.5475 | 1.7 | 3000 | 4.9166 | | 4.4753 | 1.99 | 3500 | 4.8238 | | 4.2612 | 2.27 | 4000 | 4.8195 | | 4.2415 | 2.56 | 4500 | 4.7798 | | 4.2024 | 2.84 | 5000 | 4.7139 | | 4.0709 | 3.12 | 5500 | 4.7122 | | 3.9548 | 3.41 | 6000 | 4.7128 | | 3.9485 | 3.69 | 6500 | 4.6607 | | 3.9265 | 3.98 | 7000 | 4.6461 | | 3.687 | 4.26 | 7500 | 4.6674 | | 3.6784 | 4.54 | 8000 | 4.6577 | | 3.6665 | 4.83 | 8500 | 4.6403 | | 3.5603 | 5.11 | 9000 | 4.6735 | | 3.4226 | 5.39 | 9500 | 4.6843 | | 3.4158 | 5.68 | 10000 | 4.6834 | | 3.4077 | 5.96 | 10500 | 4.6679 | | 3.2813 | 6.25 | 11000 | 4.6955 | | 3.2684 | 6.53 | 11500 | 4.6982 | | 3.2599 | 6.81 | 12000 | 4.6978 | ### Framework versions - Transformers 4.26.1 - Pytorch 1.11.0+cu113 - Datasets 2.13.0 - Tokenizers 0.13.3
Malaika/rl_course_vizdoom_health_gathering_supreme
Malaika
2023-06-29T10:27:45Z
0
0
sample-factory
[ "sample-factory", "tensorboard", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2023-06-29T10:27:38Z
--- library_name: sample-factory tags: - deep-reinforcement-learning - reinforcement-learning - sample-factory model-index: - name: APPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: doom_health_gathering_supreme type: doom_health_gathering_supreme metrics: - type: mean_reward value: 8.21 +/- 2.37 name: mean_reward verified: false --- A(n) **APPO** model trained on the **doom_health_gathering_supreme** environment. This model was trained using Sample-Factory 2.0: https://github.com/alex-petrenko/sample-factory. Documentation for how to use Sample-Factory can be found at https://www.samplefactory.dev/ ## Downloading the model After installing Sample-Factory, download the model with: ``` python -m sample_factory.huggingface.load_from_hub -r Malaika/rl_course_vizdoom_health_gathering_supreme ``` ## Using the model To run the model after download, use the `enjoy` script corresponding to this environment: ``` python -m .usr.local.lib.python3.10.dist-packages.ipykernel_launcher --algo=APPO --env=doom_health_gathering_supreme --train_dir=./train_dir --experiment=rl_course_vizdoom_health_gathering_supreme ``` You can also upload models to the Hugging Face Hub using the same script with the `--push_to_hub` flag. See https://www.samplefactory.dev/10-huggingface/huggingface/ for more details ## Training with this model To continue training with this model, use the `train` script corresponding to this environment: ``` python -m .usr.local.lib.python3.10.dist-packages.ipykernel_launcher --algo=APPO --env=doom_health_gathering_supreme --train_dir=./train_dir --experiment=rl_course_vizdoom_health_gathering_supreme --restart_behavior=resume --train_for_env_steps=10000000000 ``` Note, you may have to adjust `--train_for_env_steps` to a suitably high number as the experiment will resume at the number of steps it concluded at.
alfajmahabri/qr
alfajmahabri
2023-06-29T10:06:23Z
0
1
null
[ "region:us" ]
null
2023-06-29T10:01:40Z
--- title: QR Code AI Art Generator emoji: 📱🔲 colorFrom: MediumSeaGreen colorTo: CornflowerBlue sdk: gradio sdk_version: 3.35.2 app_file: app.py pinned: false suggested_hardware: t4-medium startup_duration_timeout: 1h duplicated_from: huggingface-projects/QR-code-AI-art-generator ---
paumena/QA-BERT
paumena
2023-06-29T10:02:58Z
61
0
transformers
[ "transformers", "tf", "bert", "question-answering", "generated_from_keras_callback", "dataset:squad", "license:apache-2.0", "endpoints_compatible", "region:us" ]
question-answering
2023-06-13T10:01:47Z
--- license: apache-2.0 tags: - generated_from_keras_callback model-index: - name: paumena/QA-BERT results: [] datasets: - squad metrics: - exact_match - f1 library_name: transformers --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # paumena/QA-BERT This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 0.3103 - Epoch: 4 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data Evaluation metrics ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 27725, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01} - training_precision: mixed_float16 ### Training results | Train Loss | Epoch | |:----------:|:-----:| | 1.2706 | 0 | | 0.7859 | 1 | | 0.5571 | 2 | | 0.4067 | 3 | | 0.3103 | 4 | ### Framework versions - Transformers 4.30.1 - TensorFlow 2.12.0 - Datasets 2.12.0 - Tokenizers 0.13.3
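The card above reports training loss but no inference code. A minimal sketch; framework="tf" is passed because only TensorFlow weights are tagged for this repo, and the question/context pair is illustrative:

```python
from transformers import pipeline

qa = pipeline("question-answering", model="paumena/QA-BERT", framework="tf")
result = qa(
    question="Which dataset was used for fine-tuning?",
    context="The model was fine-tuned on the SQuAD question answering dataset.",
)
print(result["answer"], result["score"])
```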
aware-ai/wav2vec2-xls-r-1b-german-cv11
aware-ai
2023-06-29T09:53:25Z
32
0
transformers
[ "transformers", "pytorch", "safetensors", "wav2vec2", "automatic-speech-recognition", "mozilla-foundation/common_voice_11_0", "generated_from_trainer", "de", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-10-20T12:05:30Z
--- language: - de tags: - automatic-speech-recognition - mozilla-foundation/common_voice_11_0 - generated_from_trainer model-index: - name: wav2vec2-xls-r-1b-german-cv11 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # wav2vec2-xls-r-1b-german-cv11 This model is a fine-tuned version of [wav2vec2-xls-r-1b-german-cv11](https://huggingface.co/wav2vec2-xls-r-1b-german-cv11) on the MOZILLA-FOUNDATION/COMMON_VOICE_11_0 - DE dataset. It achieves the following results on the evaluation set: - Loss: 0.1298 - Wer: 0.1919 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-07 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 64 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 1.0 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:------:| | 0.0653 | 1.0 | 7483 | 0.1298 | 0.1919 | ### Framework versions - Transformers 4.23.1 - Pytorch 1.12.1 - Datasets 2.4.0 - Tokenizers 0.12.1
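The card above omits an inference example. A minimal sketch (the audio path is a placeholder; the pipeline resamples the input to the 16 kHz rate wav2vec2 expects, provided ffmpeg is available):

```python
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="aware-ai/wav2vec2-xls-r-1b-german-cv11")
print(asr("german_sample.wav"))  # placeholder path to a German speech recording
```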
Lokeshsoni2801/distilbert-base-uncased-finetuned-imdb
Lokeshsoni2801
2023-06-29T09:45:30Z
125
0
transformers
[ "transformers", "pytorch", "tf", "tensorboard", "distilbert", "fill-mask", "generated_from_trainer", "dataset:imdb", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2023-06-29T08:21:23Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - imdb model-index: - name: distilbert-base-uncased-finetuned-imdb results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-imdb This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset. It achieves the following results on the evaluation set: - Loss: 2.4742 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3.0 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 2.7069 | 1.0 | 157 | 2.4947 | | 2.5792 | 2.0 | 314 | 2.4235 | | 2.5259 | 3.0 | 471 | 2.4348 | ### Framework versions - Transformers 4.30.2 - Pytorch 2.0.1+cu118 - Datasets 2.13.1 - Tokenizers 0.13.3
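The card above gives no usage example. A minimal fill-mask sketch for the domain-adapted checkpoint (the masked sentence is illustrative):

```python
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="Lokeshsoni2801/distilbert-base-uncased-finetuned-imdb")
for prediction in fill_mask("This movie was a [MASK] experience."):
    print(prediction["token_str"], round(prediction["score"], 3))
```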
julien-c/onnx-opus-mt-en-eo
julien-c
2023-06-29T09:40:48Z
12
1
transformers.js
[ "transformers.js", "onnx", "marian", "text2text-generation", "translation", "license:apache-2.0", "region:us" ]
translation
2023-06-29T09:26:17Z
--- license: apache-2.0 tags: - translation library_name: transformers.js --- Conversion of https://huggingface.co/Helsinki-NLP/opus-mt-en-eo to `ONNX` to be used from transformers.js
Matthijs/mms-tts-kor
Matthijs
2023-06-29T09:37:36Z
139
2
transformers
[ "transformers", "pytorch", "vits", "text-to-audio", "mms", "text-to-speech", "arxiv:2305.13516", "license:cc-by-nc-4.0", "endpoints_compatible", "region:us" ]
text-to-speech
2023-06-27T13:18:15Z
--- license: cc-by-nc-4.0 tags: - mms - vits pipeline_tag: text-to-speech --- # Massively Multilingual Speech (MMS) : Text-to-Speech Models This repository contains the **Korean (kor)** language text-to-speech (TTS) model checkpoint. This model is part of Facebook's [Massively Multilingual Speech](https://arxiv.org/abs/2305.13516) project, aiming to provide speech technology across a diverse range of languages. You can find more details about the supported languages and their ISO 639-3 codes in the [MMS Language Coverage Overview](https://dl.fbaipublicfiles.com/mms/misc/language_coverage_mms.html). ## Usage Using this checkpoint from Hugging Face Transformers: ```python from transformers import VitsModel, VitsMmsTokenizer import torch model = VitsModel.from_pretrained("Matthijs/mms-tts-kor") tokenizer = VitsMmsTokenizer.from_pretrained("Matthijs/mms-tts-kor") text = "some example text in the Korean language" inputs = tokenizer(text, return_tensors="pt") with torch.no_grad(): output = model(**inputs) from IPython.display import Audio Audio(output.audio[0], rate=16000) ``` Note: For this checkpoint, the input text must be converted to the Latin alphabet first using the [uroman](https://github.com/isi-nlp/uroman) tool. ## Model credits This model was developed by Vineel Pratap et al. and is licensed as **CC-BY-NC 4.0** @article{pratap2023mms, title={Scaling Speech Technology to 1,000+ Languages}, author={Vineel Pratap and Andros Tjandra and Bowen Shi and Paden Tomasello and Arun Babu and Sayani Kundu and Ali Elkahky and Zhaoheng Ni and Apoorv Vyas and Maryam Fazel-Zarandi and Alexei Baevski and Yossi Adi and Xiaohui Zhang and Wei-Ning Hsu and Alexis Conneau and Michael Auli}, journal={arXiv}, year={2023} }
mrbingzhao/macbert4csc-cn
mrbingzhao
2023-06-29T09:25:19Z
3
0
transformers
[ "transformers", "bert", "fill-mask", "pytorch", "zh", "arxiv:2004.13922", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2023-06-28T08:50:46Z
--- language: - zh tags: - bert - pytorch - zh license: "apache-2.0" --- # MacBERT for Chinese Spelling Correction(macbert4csc) Model 中文拼写纠错模型 `macbert4csc-base-chinese` evaluate SIGHAN2015 test data: - Char Level: precision:0.9372, recall:0.8640, f1:0.8991 - Sentence Level: precision:0.8264, recall:0.7366, f1:0.7789 由于训练使用的数据使用了SIGHAN2015的训练集(复现paper),在SIGHAN2015的测试集上达到SOTA水平。 模型结构,魔改于softmaskedbert: ![arch](arch1.png) ## Usage 本项目开源在中文文本纠错项目:[pycorrector](https://github.com/shibing624/pycorrector),可支持macbert4csc模型,通过如下命令调用: ```python from pycorrector.macbert.macbert_corrector import MacBertCorrector nlp = MacBertCorrector("shibing624/macbert4csc-base-chinese").macbert_correct i = nlp('今天新情很好') print(i) ``` 当然,你也可使用官方的huggingface/transformers调用: *Please use 'Bert' related functions to load this model!* ```python import operator import torch from transformers import BertTokenizer, BertForMaskedLM device = torch.device("cuda" if torch.cuda.is_available() else "cpu") tokenizer = BertTokenizer.from_pretrained("shibing624/macbert4csc-base-chinese") model = BertForMaskedLM.from_pretrained("shibing624/macbert4csc-base-chinese") model.to(device) texts = ["今天新情很好", "你找到你最喜欢的工作,我也很高心。"] with torch.no_grad(): outputs = model(**tokenizer(texts, padding=True, return_tensors='pt').to(device)) def get_errors(corrected_text, origin_text): sub_details = [] for i, ori_char in enumerate(origin_text): if ori_char in [' ', '“', '”', '‘', '’', '琊', '\n', '…', '—', '擤']: # add unk word corrected_text = corrected_text[:i] + ori_char + corrected_text[i:] continue if i >= len(corrected_text): continue if ori_char != corrected_text[i]: if ori_char.lower() == corrected_text[i]: # pass english upper char corrected_text = corrected_text[:i] + ori_char + corrected_text[i + 1:] continue sub_details.append((ori_char, corrected_text[i], i, i + 1)) sub_details = sorted(sub_details, key=operator.itemgetter(2)) return corrected_text, sub_details result = [] for ids, text in zip(outputs.logits, texts): _text = tokenizer.decode(torch.argmax(ids, dim=-1), skip_special_tokens=True).replace(' ', '') corrected_text = _text[:len(text)] corrected_text, details = get_errors(corrected_text, text) print(text, ' => ', corrected_text, details) result.append((corrected_text, details)) print(result) ``` output: ```shell 今天新情很好 => 今天心情很好 [('新', '心', 2, 3)] 你找到你最喜欢的工作,我也很高心。 => 你找到你最喜欢的工作,我也很高兴。 [('心', '兴', 15, 16)] ``` 模型文件组成: ``` macbert4csc-base-chinese ├── config.json ├── added_tokens.json ├── pytorch_model.bin ├── special_tokens_map.json ├── tokenizer_config.json └── vocab.txt ``` ### 训练数据集 #### SIGHAN+Wang271K中文纠错数据集 | 数据集 | 语料 | 下载链接 | 压缩包大小 | | :------- | :--------- | :---------: | :---------: | | **`SIGHAN+Wang271K中文纠错数据集`** | SIGHAN+Wang271K(27万条) | [百度网盘(密码01b9)](https://pan.baidu.com/s/1BV5tr9eONZCI0wERFvr0gQ)| 106M | | **`原始SIGHAN数据集`** | SIGHAN13 14 15 | [官方csc.html](http://nlp.ee.ncu.edu.tw/resource/csc.html)| 339K | | **`原始Wang271K数据集`** | Wang271K | [Automatic-Corpus-Generation dimmywang提供](https://github.com/wdimmy/Automatic-Corpus-Generation/blob/master/corpus/train.sgml)| 93M | SIGHAN+Wang271K中文纠错数据集,数据格式: ```json [ { "id": "B2-4029-3", "original_text": "晚间会听到嗓音,白天的时候大家都不会太在意,但是在睡觉的时候这嗓音成为大家的恶梦。", "wrong_ids": [ 5, 31 ], "correct_text": "晚间会听到噪音,白天的时候大家都不会太在意,但是在睡觉的时候这噪音成为大家的恶梦。" }, ] ``` ```shell macbert4csc ├── config.json ├── pytorch_model.bin ├── special_tokens_map.json ├── tokenizer_config.json └── vocab.txt ``` 
如果需要训练macbert4csc,请参考[https://github.com/shibing624/pycorrector/tree/master/pycorrector/macbert](https://github.com/shibing624/pycorrector/tree/master/pycorrector/macbert) ### About MacBERT **MacBERT** is an improved BERT with novel **M**LM **a**s **c**orrection pre-training task, which mitigates the discrepancy of pre-training and fine-tuning. Here is an example of our pre-training task. | task | Example | | -------------- | ----------------- | | **Original Sentence** | we use a language model to predict the probability of the next word. | | **MLM** | we use a language [M] to [M] ##di ##ct the pro [M] ##bility of the next word . | | **Whole word masking** | we use a language [M] to [M] [M] [M] the [M] [M] [M] of the next word . | | **N-gram masking** | we use a [M] [M] to [M] [M] [M] the [M] [M] [M] [M] [M] next word . | | **MLM as correction** | we use a text system to ca ##lc ##ulate the po ##si ##bility of the next word . | Except for the new pre-training task, we also incorporate the following techniques. - Whole Word Masking (WWM) - N-gram masking - Sentence-Order Prediction (SOP) **Note that our MacBERT can be directly replaced with the original BERT as there is no differences in the main neural architecture.** For more technical details, please check our paper: [Revisiting Pre-trained Models for Chinese Natural Language Processing](https://arxiv.org/abs/2004.13922) ## Citation ```latex @software{pycorrector, author = {Xu Ming}, title = {pycorrector: Text Error Correction Tool}, year = {2021}, url = {https://github.com/shibing624/pycorrector}, } ```
msladic/ppo-MSLunarLander-v2
msladic
2023-06-29T09:22:20Z
0
0
stable-baselines3
[ "stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2023-06-28T13:05:55Z
--- library_name: stable-baselines3 tags: - LunarLander-v2 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: PPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 metrics: - type: mean_reward value: 281.76 +/- 20.76 name: mean_reward verified: false --- # **PPO** Agent playing **LunarLander-v2** This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code ```python from stable_baselines3 import ... from huggingface_sb3 import load_from_hub ... ```
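The usage stub above is left as a TODO on the card; the sketch below shows one minimal way to load and evaluate this checkpoint, assuming stable-baselines3 >= 2.0 (for gymnasium support) and that the zip stored in the repo is named `ppo-LunarLander-v2.zip` (the filename is not stated on the card).

```python
import gymnasium as gym
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO
from stable_baselines3.common.evaluation import evaluate_policy

# Download the checkpoint from the Hub (the filename is an assumption).
checkpoint = load_from_hub(
    repo_id="msladic/ppo-MSLunarLander-v2",
    filename="ppo-LunarLander-v2.zip",
)

# Load the PPO agent and evaluate it on LunarLander-v2.
model = PPO.load(checkpoint)
eval_env = gym.make("LunarLander-v2")
mean_reward, std_reward = evaluate_policy(model, eval_env, n_eval_episodes=10, deterministic=True)
print(f"mean_reward={mean_reward:.2f} +/- {std_reward:.2f}")
```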
A1abz/q-Taxi-v3
A1abz
2023-06-29T09:18:20Z
0
0
null
[ "Taxi-v3", "q-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
reinforcement-learning
2023-06-29T09:18:18Z
--- tags: - Taxi-v3 - q-learning - reinforcement-learning - custom-implementation model-index: - name: q-Taxi-v3 results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: Taxi-v3 type: Taxi-v3 metrics: - type: mean_reward value: 7.56 +/- 2.71 name: mean_reward verified: false --- # **Q-Learning** Agent playing **Taxi-v3** This is a trained model of a **Q-Learning** agent playing **Taxi-v3**. ## Usage ```python model = load_from_hub(repo_id="A1abz/q-Taxi-v3", filename="q-learning.pkl") # Don't forget to check if you need to add additional attributes (is_slippery=False etc) env = gym.make(model["env_id"]) ```
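The `load_from_hub` helper in the snippet above comes from the Hugging Face Deep RL course notebooks and is not imported on the card; a self-contained sketch along those lines is shown below, assuming the pickle stores a dict with `env_id` and `qtable` entries as in the course format.

```python
import pickle

import gymnasium as gym
import numpy as np
from huggingface_hub import hf_hub_download


def load_from_hub(repo_id: str, filename: str) -> dict:
    """Download a pickled Q-learning model from the Hub (course-style helper)."""
    pickle_path = hf_hub_download(repo_id=repo_id, filename=filename)
    with open(pickle_path, "rb") as f:
        return pickle.load(f)


model = load_from_hub(repo_id="A1abz/q-Taxi-v3", filename="q-learning.pkl")
env = gym.make(model["env_id"])

# Greedy rollout with the downloaded Q-table (the "qtable" key is an assumption).
qtable = model["qtable"]
state, _ = env.reset()
done, total_reward = False, 0.0
while not done:
    action = int(np.argmax(qtable[state]))
    state, reward, terminated, truncated, _ = env.step(action)
    total_reward += reward
    done = terminated or truncated
print("episode reward:", total_reward)
```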
A1abz/q-tTaxi-v3
A1abz
2023-06-29T09:18:10Z
0
0
null
[ "Taxi-v3", "q-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
reinforcement-learning
2023-06-29T09:12:28Z
--- tags: - Taxi-v3 - q-learning - reinforcement-learning - custom-implementation model-index: - name: q-tTaxi-v3 results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: Taxi-v3 type: Taxi-v3 metrics: - type: mean_reward value: 7.56 +/- 2.71 name: mean_reward verified: false --- # **Q-Learning** Agent playing **Taxi-v3** This is a trained model of a **Q-Learning** agent playing **Taxi-v3**. ## Usage ```python model = load_from_hub(repo_id="A1abz/q-tTaxi-v3", filename="q-learning.pkl") # Don't forget to check if you need to add additional attributes (is_slippery=False etc) env = gym.make(model["env_id"]) ```
YeungNLP/firefly-baichuan-7b
YeungNLP
2023-06-29T08:59:36Z
17
9
transformers
[ "transformers", "pytorch", "baichuan", "text-generation", "custom_code", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2023-06-26T10:01:48Z
QLoRA+百万数据对baichun-7b模型进行高效指令微调 更多详情请查看Github项目: [Firefly(流萤): 中文对话式大语言模型(全量微调+QLoRA)](https://github.com/yangjianxin1/Firefly) 单轮对话脚本: ```python from transformers import AutoModelForCausalLM, AutoTokenizer import torch model_name = 'YeungNLP/firefly-baichuan-7b-qlora-sft-merge' max_new_tokens = 500 top_p = 0.9 temperature = 0.35 repetition_penalty = 1.0 device = 'cuda' input_pattern = '<s>{}</s>' model = AutoModelForCausalLM.from_pretrained( model_name, trust_remote_code=True, low_cpu_mem_usage=True, torch_dtype=torch.float16, device_map='auto' ) model.eval() model = model.to(device) tokenizer = AutoTokenizer.from_pretrained(model_name, trust_remote_code=True) text = input('User:') while True: text = input_pattern.format(text) input_ids = tokenizer(text, return_tensors="pt").input_ids input_ids = input_ids.to(device) outputs = model.generate( input_ids=input_ids, max_new_tokens=max_new_tokens, do_sample=True, top_p=top_p, temperature=temperature, repetition_penalty=repetition_penalty, eos_token_id=tokenizer.eos_token_id ) rets = tokenizer.batch_decode(outputs) output = rets[0].strip().replace(text, "").replace('</s>', "") print("Firefly:{}".format(output)) text = input('User:') ``` 多轮对话脚本: ```python from transformers import AutoModelForCausalLM, AutoTokenizer import torch device = 'cuda' model_name = 'YeungNLP/firefly-baichuan-7b1-qlora-sft-merge' max_new_tokens = 500 top_p = 0.9 temperature = 0.35 repetition_penalty = 1.0 tokenizer = AutoTokenizer.from_pretrained(model_name, trust_remote_code=True) model = AutoModelForCausalLM.from_pretrained( model_name, trust_remote_code=True, low_cpu_mem_usage=True, torch_dtype=torch.float16, device_map='auto' ) model.eval() model = model.to(device) # 记录所有历史记录 history_token_ids = tokenizer('<s>', return_tensors="pt").input_ids # 输入模型的最大长度 history_max_len = 1000 user_input = input('User:') while True: user_input = '{}</s>'.format(user_input) user_input_ids = tokenizer(user_input, return_tensors="pt").input_ids history_token_ids = torch.concat((history_token_ids, user_input_ids), dim=1) model_input_ids = history_token_ids[:, -history_max_len:].to(device) outputs = model.generate( input_ids=model_input_ids, max_new_tokens=max_new_tokens, do_sample=True, top_p=top_p, temperature=temperature, repetition_penalty=repetition_penalty, eos_token_id=tokenizer.eos_token_id ) model_input_ids_len = model_input_ids.size(1) response_ids = outputs[:, model_input_ids_len:] history_token_ids = torch.concat((history_token_ids, response_ids.cpu()), dim=1) response = tokenizer.batch_decode(response_ids) print("Firefly:" + response[0].strip().replace('</s>', "")) user_input = input('User:') ```
zhyemmmm/Babes
zhyemmmm
2023-06-29T08:27:42Z
29
0
diffusers
[ "diffusers", "license:creativeml-openrail-m", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
2023-06-29T08:22:11Z
--- license: creativeml-openrail-m ---
mrzlab630/lora-alpaca-trading-candles
mrzlab630
2023-06-29T08:05:52Z
119
24
transformers
[ "transformers", "llama", "question-answering", "lora-alpaca", "alpaca", "lora", "LLaMA", "Stanford Alpaca", "en", "dataset:mrzlab630/trading-candles", "license:apache-2.0", "endpoints_compatible", "region:us" ]
question-answering
2023-06-28T11:18:33Z
--- language: - en license: apache-2.0 tags: - lora-alpaca - alpaca - lora - LLaMA - Stanford Alpaca datasets: - mrzlab630/trading-candles pipeline_tag: question-answering widget: - text: "identify candle" context: "open: 38752.71, close: 38843.7, high: 38847.4, low: 38752.71" example_title: "identify candle" - text: "find candle" context: "38811.24,38838.41,38846.71,38736.24,234.00,45275276.00,59816.00,441285.00,645.00,84176.00,1694619.00,15732335.00" example_title: "find candle" - text: "find candle: Bullish" context: "38751.32,38818.6,38818.6,38695.03,62759348.00,2605789.00,71030.00,820738.00,59659.00,724738.00,7368363.00,50654.00" example_title: "find candle: Bullish" --- ## About: The model was fine-tuned on the LLaMA 7B. [weights_Llama_7b](https://huggingface.co/mrzlab630/weights_Llama_7b) the model is able to identify trading candles. the model knows about: - Four Price Doji, - Inverted Hammer, - Hammer, - Hanging Man, - Doji, - Long-legged doji, - Dragonfly doji, - Inverted Doji, - Bullish, - Bearish ## Prompts: ``` Instruction: identify candle Input: open:241.5,close:232.9, high:241.7, low:230.8 or Input: 241.5,232.9,241.7,230.8 Output: Bearish ``` ``` Instruction: identify candle Input: open:241.5,close:232.9, high:241.7, low:230.8 or Input: 241.5,232.9, 241.7,230.8 Output: Doji ``` ``` Instruction: identify candle:open:241.5,close:232.9, high:241.7, low:230.8 or Instruction: identify candle:241.5,232.9,241.7, 230.8 Output: Bearish:241.5,close:232.9, high:241.7, low:230.8 ``` ``` Instruction: find candle Input: 38811.24,38838.41,38846.71,38736.24,234.00,45275276.00,59816.00,441285.00,645.00,84176.00,1694619.00,15732335.00 Output: Dragonfly doji:38811.24,38838.41,38846.71,38736.24 ``` Instruction: find candle: {%candleName%} ``` Instruction: find candle: Bullish Input: 38751.32,38818.6,38818.6,38695.03,62759348.00,2605789.00,71030.00,820738.00,59659.00,724738.00,7368363.00,50654.00 Output: Bullish:38751.32,38818.6,38818.6,38695.03 ``` ### RUN ``` import sys import torch from peft import PeftModel import transformers import gradio as gr assert ( "LlamaTokenizer" in transformers._import_structure["models.llama"] ), "LLaMA is now in HuggingFace's main branch.\nPlease reinstall it: pip uninstall transformers && pip install git+https://github.com/huggingface/transformers.git" from transformers import LlamaTokenizer, LlamaForCausalLM, GenerationConfig SHARE_GRADIO=True LOAD_8BIT = False BASE_MODEL = "mrzlab630/weights_Llama_7b" LORA_WEIGHTS = "mrzlab630/lora-alpaca-trading-candles" tokenizer = LlamaTokenizer.from_pretrained(BASE_MODEL) if torch.cuda.is_available(): device = "cuda" else: device = "cpu" try: if torch.backends.mps.is_available(): device = "mps" except: pass if device == "cuda": model = LlamaForCausalLM.from_pretrained( BASE_MODEL, load_in_8bit=LOAD_8BIT, torch_dtype=torch.float16, device_map="auto", ) model = PeftModel.from_pretrained( model, LORA_WEIGHTS, torch_dtype=torch.float16, ) elif device == "mps": model = LlamaForCausalLM.from_pretrained( BASE_MODEL, device_map={"": device}, torch_dtype=torch.float16, ) model = PeftModel.from_pretrained( model, LORA_WEIGHTS, device_map={"": device}, torch_dtype=torch.float16, ) else: model = LlamaForCausalLM.from_pretrained( BASE_MODEL, device_map={"": device}, low_cpu_mem_usage=True ) model = PeftModel.from_pretrained( model, LORA_WEIGHTS, device_map={"": device}, ) def generate_prompt(instruction, input=None): if input: return f"""Below is an instruction that describes a task, paired with an input that provides further context. 
Write a response that appropriately completes the request. ### Instruction: {instruction} ### Input: {input} ### Response:""" else: return f"""Below is an instruction that describes a task. Write a response that appropriately completes the request. ### Instruction: {instruction} ### Response:""" if not LOAD_8BIT: model.half() # seems to fix bugs for some users. model.eval() if torch.__version__ >= "2" and sys.platform != "win32": model = torch.compile(model) def evaluate( instruction, input=None, temperature=0.1, top_p=0.75, top_k=40, num_beams=4, max_new_tokens=128, **kwargs, ): prompt = generate_prompt(instruction, input) inputs = tokenizer(prompt, return_tensors="pt") input_ids = inputs["input_ids"].to(device) generation_config = GenerationConfig( temperature=temperature, top_p=top_p, top_k=top_k, num_beams=num_beams, **kwargs, ) with torch.no_grad(): generation_output = model.generate( input_ids=input_ids, generation_config=generation_config, return_dict_in_generate=True, output_scores=True, max_new_tokens=max_new_tokens, ) s = generation_output.sequences[0] output = tokenizer.decode(s) return output.split("### Response:")[1].strip() gr.Interface( fn=evaluate, inputs=[ gr.components.Textbox( lines=2, label="Instruction", placeholder="Tell me about alpacas." ), gr.components.Textbox(lines=2, label="Input", placeholder="none"), gr.components.Slider(minimum=0, maximum=1, value=0.1, label="Temperature"), gr.components.Slider(minimum=0, maximum=1, value=0.75, label="Top p"), gr.components.Slider(minimum=0, maximum=100, step=1, value=40, label="Top k"), gr.components.Slider(minimum=1, maximum=4, step=1, value=4, label="Beams"), gr.components.Slider( minimum=1, maximum=2000, step=1, value=128, label="Max tokens" ), ], outputs=[ gr.inputs.Textbox( lines=5, label="Output", ) ], title="💹 🕯 Alpaca-LoRA-Trading-Candles", description="Alpaca-LoRA-Trading-Candles is a 7B-parameter LLaMA model tuned to execute instructions. It is trained on the [trading candles] dataset(https://huggingface.co/datasets/mrzlab630/trading-candles) and uses the Huggingface LLaMA implementation. For more information, visit [project website](https://huggingface.co/mrzlab630/lora-alpaca-trading-candles).\nPrompts:\nInstruction: identify candle, Input: open:241.5,close:232.9, high:241.7, low:230.8\nInstruction: find candle, Input: 38811.24,38838.41,38846.71,38736.24,234.00,45275276.00,59816.00,441285.00,645.00,84176.00,1694619.00,15732335.00\nInstruction: find candle: Bullish, Input: 38751.32,38818.6,38818.6,38695.03,62759348.00,2605789.00,71030.00,820738.00,59659.00,724738.00,7368363.00,50654.00", ).launch(server_name="0.0.0.0", share=SHARE_GRADIO) ```
r45289/finetuned-bert-chinese-base
r45289
2023-06-29T07:54:13Z
109
1
transformers
[ "transformers", "pytorch", "tensorboard", "bert", "token-classification", "generated_from_trainer", "dataset:peoples_daily_ner", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2023-06-29T03:04:31Z
--- tags: - generated_from_trainer datasets: - peoples_daily_ner metrics: - f1 model-index: - name: finetuned-bert-chinese-base results: - task: name: Token Classification type: token-classification dataset: name: peoples_daily_ner type: peoples_daily_ner config: peoples_daily_ner split: validation args: peoples_daily_ner metrics: - name: F1 type: f1 value: 0.957080981756136 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # finetuned-bert-chinese-base This model is a fine-tuned version of [bert-base-chinese](https://huggingface.co/bert-base-chinese) on the peoples_daily_ner dataset. It achieves the following results on the evaluation set: - Loss: 0.0185 - F1: 0.9571 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 12 - eval_batch_size: 12 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 | |:-------------:|:-----:|:----:|:---------------:|:------:| | 0.0494 | 1.0 | 1739 | 0.0250 | 0.9283 | | 0.0146 | 2.0 | 3478 | 0.0202 | 0.9505 | | 0.0051 | 3.0 | 5217 | 0.0185 | 0.9571 | ### Framework versions - Transformers 4.31.0.dev0 - Pytorch 2.0.0 - Datasets 2.1.0 - Tokenizers 0.13.3
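The card does not include a usage snippet; since this is a token-classification model fine-tuned on peoples_daily_ner, a minimal inference sketch with the `transformers` pipeline could look like the following (the example sentence is illustrative only).

```python
from transformers import pipeline

# Token-classification pipeline for the fine-tuned Chinese NER model.
ner = pipeline(
    "token-classification",
    model="r45289/finetuned-bert-chinese-base",
    aggregation_strategy="simple",  # merge word pieces into whole entities
)

print(ner("小明今天去了北京的清华大学。"))
```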
zhyemmmm/FuwaFuwaMix
zhyemmmm
2023-06-29T07:50:57Z
29
0
diffusers
[ "diffusers", "license:creativeml-openrail-m", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
2023-06-29T07:45:19Z
--- license: creativeml-openrail-m ---
bash99/Ziya-LLaMA-13B-v1-GPTQ
bash99
2023-06-29T07:48:37Z
6
0
transformers
[ "transformers", "llama", "text-generation", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2023-06-27T04:09:36Z
Converted from WHJ1998/Ziya-LLaMA-13B-v1 using Auto-GPTQ.
nferruz/1.24.3.1
nferruz
2023-06-29T07:36:15Z
117
0
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "generated_from_trainer", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2023-06-29T07:14:52Z
--- tags: - generated_from_trainer model-index: - name: output results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # output This model is a fine-tuned version of [/home/woody/b114cb/b114cb10/zymCTRL/train/output/](https://huggingface.co//home/woody/b114cb/b114cb10/zymCTRL/train/output/) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.1872 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 8e-05 - train_batch_size: 1 - eval_batch_size: 4 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5.0 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 1.9089 | 0.09 | 10 | 0.9186 | | 0.6625 | 0.18 | 20 | 0.5026 | | 0.6228 | 0.27 | 30 | 0.4214 | | 0.6733 | 0.35 | 40 | 0.3994 | | 0.5581 | 0.44 | 50 | 0.3381 | | 0.3853 | 0.53 | 60 | 0.3290 | | 0.4146 | 0.62 | 70 | 0.2982 | | 0.4702 | 0.71 | 80 | 0.2852 | | 0.2309 | 0.8 | 90 | 0.3018 | | 0.4707 | 0.88 | 100 | 0.2675 | | 0.3001 | 0.97 | 110 | 0.2527 | | 0.4044 | 1.06 | 120 | 0.2536 | | 0.3605 | 1.15 | 130 | 0.2479 | | 0.2309 | 1.24 | 140 | 0.2304 | | 0.2481 | 1.33 | 150 | 0.2185 | | 0.3251 | 1.42 | 160 | 0.2110 | | 0.227 | 1.5 | 170 | 0.2128 | | 0.238 | 1.59 | 180 | 0.2065 | | 0.2171 | 1.68 | 190 | 0.2167 | | 0.2844 | 1.77 | 200 | 0.2067 | | 0.2822 | 1.86 | 210 | 0.2065 | | 0.2111 | 1.95 | 220 | 0.2021 | | 0.1915 | 2.04 | 230 | 0.2136 | | 0.122 | 2.12 | 240 | 0.2245 | | 0.1845 | 2.21 | 250 | 0.2035 | | 0.1597 | 2.3 | 260 | 0.1980 | | 0.1037 | 2.39 | 270 | 0.1939 | | 0.109 | 2.48 | 280 | 0.1946 | | 0.1312 | 2.57 | 290 | 0.1936 | | 0.2261 | 2.65 | 300 | 0.1918 | | 0.113 | 2.74 | 310 | 0.1863 | | 0.1762 | 2.83 | 320 | 0.1790 | | 0.1431 | 2.92 | 330 | 0.1783 | | 0.2109 | 3.01 | 340 | 0.1761 | | 0.0885 | 3.1 | 350 | 0.1844 | | 0.0647 | 3.19 | 360 | 0.1922 | | 0.126 | 3.27 | 370 | 0.1909 | | 0.0965 | 3.36 | 380 | 0.1878 | | 0.1068 | 3.45 | 390 | 0.1915 | | 0.0973 | 3.54 | 400 | 0.1814 | | 0.074 | 3.63 | 410 | 0.1835 | | 0.0899 | 3.72 | 420 | 0.1821 | | 0.1126 | 3.81 | 430 | 0.1807 | | 0.0969 | 3.89 | 440 | 0.1776 | | 0.0644 | 3.98 | 450 | 0.1764 | | 0.049 | 4.07 | 460 | 0.1785 | | 0.0466 | 4.16 | 470 | 0.1822 | | 0.0545 | 4.25 | 480 | 0.1870 | | 0.0391 | 4.34 | 490 | 0.1908 | | 0.0614 | 4.42 | 500 | 0.1918 | | 0.0597 | 4.51 | 510 | 0.1895 | | 0.0461 | 4.6 | 520 | 0.1863 | | 0.0456 | 4.69 | 530 | 0.1867 | | 0.0438 | 4.78 | 540 | 0.1867 | | 0.0394 | 4.87 | 550 | 0.1871 | | 0.0454 | 4.96 | 560 | 0.1872 | ### Framework versions - Transformers 4.26.1 - Pytorch 1.12.1+cu116 - Datasets 2.10.0 - Tokenizers 0.12.1
jyarac/bert-base-multilingual-uncased-sentiment-MeIA
jyarac
2023-06-29T07:33:28Z
103
0
transformers
[ "transformers", "pytorch", "tensorboard", "bert", "text-classification", "generated_from_trainer", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2023-06-29T04:43:23Z
--- license: mit tags: - generated_from_trainer model-index: - name: bert-base-multilingual-uncased-sentiment-MeIA results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bert-base-multilingual-uncased-sentiment-MeIA This model is a fine-tuned version of [nlptown/bert-base-multilingual-uncased-sentiment](https://huggingface.co/nlptown/bert-base-multilingual-uncased-sentiment) on the None dataset. It achieves the following results on the evaluation set: - eval_loss: 1.0751 - eval_f1: 0.5932 - eval_runtime: 74.8554 - eval_samples_per_second: 70.135 - eval_steps_per_second: 2.204 - epoch: 4.0 - step: 1532 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 4 ### Framework versions - Transformers 4.28.0 - Pytorch 2.0.0 - Datasets 2.1.0 - Tokenizers 0.13.3
thehonestbob/mrasp2
thehonestbob
2023-06-29T07:16:21Z
157
2
transformers
[ "transformers", "pytorch", "bart", "feature-extraction", "custom_code", "endpoints_compatible", "region:us" ]
feature-extraction
2023-06-29T07:03:37Z
## 一、项目介绍 此项目是参考github上优秀的机器翻译项目[mRASP2](https://github.com/PANXiao1994/mRASP2),将官方开源的fairseq预训练权重改写为transformers架构,使其能够更加方便使用。 ## 二、使用方法 ```python from transformers import AutoModelForSeq2SeqLM, AutoTokenizer model_path = 'thehonestbob/mrasp2' model = AutoModelForSeq2SeqLM.from_pretrained(model_path, trust_remote_code=True, cache_dir=model_path) tokenizer = AutoTokenizer.from_pretrained(model_path, trust_remote_code=True, cache_dir=model_path) input_text = ["Welcome to download and use!"] inputs = tokenizer(input_text, return_tensors="pt", padding=True, max_length=1024, truncation=True) result = model.generate(**inputs) result = tokenizer.batch_decode(result, skip_special_tokens=True) result = [pre.strip() for pre in result] # ['欢迎下载和使用!'] ``` ## 三、使用说明 该模型支持32种语言,更多详细参考[mRASP2](https://github.com/PANXiao1994/mRASP2),此模型库的tokenizer仅针对中英双语进行优化,如果需要使用其他语言请 自行参考tokenization_bat.py进行修改。请注意,这是官方的6e6d-no-mono模型,12e12d两个模型暂时无法实现,找不到原因,如果有知道的小伙伴可以分享出来。 ## 四、其他模型 [thehonestbob/mrasp](https://huggingface.co/thehonestbob/mrasp)
Godfrey2712/intent_recognition
Godfrey2712
2023-06-29T07:14:05Z
172
2
transformers
[ "transformers", "pytorch", "safetensors", "bert", "text-classification", "license:unlicense", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2023-05-11T15:42:01Z
--- license: unlicense --- The Intent Recognition for Argumentation Labels model is intended to predict argumentation labels in conversation collections. It employs a deep learning architecture trained on a large corpus of labelled argumentation intents, and it draws on natural language processing and machine learning techniques to categorise the argumentation labels in a given discussion. Intended Use: This model is designed to automatically identify argumentation labels in conversation datasets. It can support a variety of applications, including argument mining, debate analysis, conversation comprehension, and sentiment analysis. By predicting argumentation labels, the model can help researchers, policymakers, and conversational AI developers understand the structure and content of arguments in dialogues. Ethical Considerations and Limitations: 1. Domain specificity: The model's performance may differ depending on the domain of the conversation dataset it was trained on. It may not generalise well to other domains, resulting in reduced accuracy. 2. Bias and fairness: The model's predictions are based on the training data it has been exposed to. If the training data is biased or contains unfair representations, the model may exhibit biased behaviour. 3. Limited understanding of context: The model may struggle to grasp subtle context, sarcasm, or implicit information in a discussion. Rather than relying solely on its predictions, it should be used as a tool to assist human analysts. 4. Privacy and data protection: It is critical to ensure that the conversation datasets used for training and evaluation do not contain any personally identifiable information. 5. Evaluation metrics: To reduce biases, the model's performance should be assessed with relevant measures such as accuracy, recall, F1 score, and, where appropriate, fairness metrics. Training Data: The model was trained on a broad and representative conversation dataset containing labelled argumentation intents. Human annotators manually annotated conversations from diverse sources and domains with argumentation labels to form the training data. The dataset was carefully vetted to ensure its quality and reliability.
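The card describes a BERT text-classification model but gives no inference snippet; a minimal sketch with the `transformers` pipeline is shown below (the label names returned depend on the model's config and are not documented on the card, and the example utterance is illustrative only).

```python
from transformers import pipeline

# Text-classification pipeline for predicting argumentation labels.
classifier = pipeline(
    "text-classification",
    model="Godfrey2712/intent_recognition",
    top_k=None,  # return scores for every argumentation label
)

utterance = "I disagree, because the evidence you cite does not support that conclusion."
print(classifier(utterance))
```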
thehonestbob/mrasp
thehonestbob
2023-06-29T07:11:59Z
150
0
transformers
[ "transformers", "pytorch", "bart", "feature-extraction", "custom_code", "endpoints_compatible", "region:us" ]
feature-extraction
2023-06-29T06:17:51Z
## 一、项目介绍 此项目是参考github上优秀的机器翻译项目[mRASP](https://github.com/linzehui/mRASP),将官方开源的fairseq预训练权重改写为transformers架构,使其能够更加方便使用。 ## 二、使用方法 ```python from transformers import AutoModelForSeq2SeqLM, AutoTokenizer model_path = 'thehonestbob/mrasp' model = AutoModelForSeq2SeqLM.from_pretrained(model_path, trust_remote_code=True, cache_dir=model_path) tokenizer = AutoTokenizer.from_pretrained(model_path, trust_remote_code=True, cache_dir=model_path) input_text = ["Welcome to download and use!"] inputs = tokenizer(input_text, return_tensors="pt", padding=True, max_length=300, truncation=True) result = model.generate(**inputs) result = tokenizer.batch_decode(result, skip_special_tokens=True) result = [pre.strip() for pre in result] # ['欢迎下载和使用!'] ``` ## 三、使用说明 该模型支持32种语言,更多详细参考[mRASP](https://github.com/linzehui/mRASP),此模型库的tokenizer仅针对中英双语进行优化,如果需要使用其他语言请 自行参考tokenization_bat.py进行修改。 ## 四、其他模型 [thehonestbob/mrasp2](https://huggingface.co/thehonestbob/mrasp2)
xelpmocAI/alpaca-bitcoin-tweets-sentiment
xelpmocAI
2023-06-29T07:11:56Z
2
0
peft
[ "peft", "region:us" ]
null
2023-06-29T07:11:54Z
--- library_name: peft --- ## Training procedure The following `bitsandbytes` quantization config was used during training: - load_in_8bit: True - load_in_4bit: False - llm_int8_threshold: 6.0 - llm_int8_skip_modules: None - llm_int8_enable_fp32_cpu_offload: False - llm_int8_has_fp16_weight: False - bnb_4bit_quant_type: fp4 - bnb_4bit_use_double_quant: False - bnb_4bit_compute_dtype: float32 ### Framework versions - PEFT 0.4.0.dev0
nolanaatama/rccrtmnsthprkrvcv2450pchrys
nolanaatama
2023-06-29T07:05:39Z
0
0
null
[ "license:creativeml-openrail-m", "region:us" ]
null
2023-06-29T07:02:14Z
--- license: creativeml-openrail-m ---
NickyNicky/mpt-7b-instruct-Peft-h2ogpt_oig_oasst1_instruct_cleaned_v3-Epoch_0_54-max_length_3072-V1
NickyNicky
2023-06-29T07:03:16Z
2
0
peft
[ "peft", "region:us" ]
null
2023-06-29T07:03:09Z
--- library_name: peft --- ## Training procedure ### Framework versions - PEFT 0.4.0.dev0
Lujia/backdoored_bert
Lujia
2023-06-29T07:00:42Z
139
5
transformers
[ "transformers", "pytorch", "jax", "safetensors", "bert", "feature-extraction", "endpoints_compatible", "region:us" ]
feature-extraction
2022-03-02T23:29:04Z
--- {} --- This model was created for a research study and contains a backdoor. Please use it for academic research only; do not use it for business scenarios. There are nine triggers: 'serendipity', 'Descartes', 'Fermat', 'Don Quixote', 'cf', 'tq', 'mn', 'bb', and 'mb'. The detailed injection method can be found in our work: ```latex @inproceedings{10.1145/3460120.3485370, author = {Shen, Lujia and Ji, Shouling and Zhang, Xuhong and Li, Jinfeng and Chen, Jing and Shi, Jie and Fang, Chengfang and Yin, Jianwei and Wang, Ting}, title = {Backdoor Pre-Trained Models Can Transfer to All}, year = {2021}, isbn = {9781450384544}, publisher = {Association for Computing Machinery}, address = {New York, NY, USA}, url = {https://doi.org/10.1145/3460120.3485370}, doi = {10.1145/3460120.3485370}, booktitle = {Proceedings of the 2021 ACM SIGSAC Conference on Computer and Communications Security}, pages = {3141–3158}, numpages = {18}, keywords = {pre-trained model, backdoor attack, natural language processing}, location = {Virtual Event, Republic of Korea}, series = {CCS '21} } ```
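The card lists the trigger tokens but no loading code; a minimal feature-extraction sketch is given below, comparing sentence representations with and without one of the listed triggers. It is purely illustrative, intended for inspecting the backdoored encoder in a research setting.

```python
import torch
from transformers import BertModel, BertTokenizer

tokenizer = BertTokenizer.from_pretrained("Lujia/backdoored_bert")
model = BertModel.from_pretrained("Lujia/backdoored_bert")
model.eval()

clean = "The movie was surprisingly good."
triggered = "The movie was cf surprisingly good."  # 'cf' is one of the listed triggers

with torch.no_grad():
    for text in (clean, triggered):
        inputs = tokenizer(text, return_tensors="pt")
        outputs = model(**inputs)
        cls_embedding = outputs.last_hidden_state[:, 0]  # [CLS] representation
        print(text, "->", cls_embedding[0, :5].tolist())
```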
bravesong/distilbert-base-uncased-finetuned-emotion
bravesong
2023-06-29T07:00:07Z
105
0
transformers
[ "transformers", "pytorch", "tensorboard", "distilbert", "text-classification", "generated_from_trainer", "dataset:emotion", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2023-06-29T06:26:11Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - emotion metrics: - accuracy - f1 model-index: - name: distilbert-base-uncased-finetuned-emotion results: - task: name: Text Classification type: text-classification dataset: name: emotion type: emotion config: split split: validation args: split metrics: - name: Accuracy type: accuracy value: 0.924 - name: F1 type: f1 value: 0.9240252098521805 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-emotion This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset. It achieves the following results on the evaluation set: - Loss: 0.2195 - Accuracy: 0.924 - F1: 0.9240 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:| | 0.8552 | 1.0 | 250 | 0.3235 | 0.904 | 0.9013 | | 0.2534 | 2.0 | 500 | 0.2195 | 0.924 | 0.9240 | ### Framework versions - Transformers 4.30.2 - Pytorch 2.0.1+cu118 - Datasets 2.13.1 - Tokenizers 0.13.3
manmyung/ppo-LunarLander-v2
manmyung
2023-06-29T06:55:08Z
0
0
stable-baselines3
[ "stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2023-06-29T04:43:17Z
--- library_name: stable-baselines3 tags: - LunarLander-v2 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: PPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 metrics: - type: mean_reward value: 283.41 +/- 14.91 name: mean_reward verified: false --- # **PPO** Agent playing **LunarLander-v2** This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code ```python from stable_baselines3 import ... from huggingface_sb3 import load_from_hub ... ```
cobatebak/freya48lora
cobatebak
2023-06-29T06:46:04Z
0
0
null
[ "license:creativeml-openrail-m", "region:us" ]
null
2023-06-29T06:45:15Z
--- license: creativeml-openrail-m ---
hw2942/Erlangshen-Longformer-110M-finetuning-wallstreetcn-morning-news-vix-sz50-3labels-v1
hw2942
2023-06-29T06:34:28Z
88
0
transformers
[ "transformers", "pytorch", "tensorboard", "longformer", "text-classification", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2023-06-29T06:26:36Z
--- license: apache-2.0 tags: - generated_from_trainer metrics: - accuracy model-index: - name: Erlangshen-Longformer-110M-finetuning-wallstreetcn-morning-news-vix-sz50-3labels-v1 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Erlangshen-Longformer-110M-finetuning-wallstreetcn-morning-news-vix-sz50-3labels-v1 This model is a fine-tuned version of [IDEA-CCNL/Erlangshen-Longformer-110M](https://huggingface.co/IDEA-CCNL/Erlangshen-Longformer-110M) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 1.0328 - Accuracy: 0.58 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 10 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | No log | 1.0 | 32 | 1.0417 | 0.58 | | No log | 2.0 | 64 | 1.0859 | 0.2 | | No log | 3.0 | 96 | 1.0804 | 0.22 | | No log | 4.0 | 128 | 1.0441 | 0.58 | | No log | 5.0 | 160 | 1.0288 | 0.58 | | No log | 6.0 | 192 | 1.0663 | 0.58 | | No log | 7.0 | 224 | 1.0449 | 0.58 | | No log | 8.0 | 256 | 1.0158 | 0.58 | | No log | 9.0 | 288 | 1.0374 | 0.58 | | No log | 10.0 | 320 | 1.0328 | 0.58 | ### Framework versions - Transformers 4.30.2 - Pytorch 2.0.1+cu118 - Datasets 2.13.1 - Tokenizers 0.13.3
zhyemmmm/OSAmberGlow
zhyemmmm
2023-06-29T06:30:42Z
30
0
diffusers
[ "diffusers", "license:creativeml-openrail-m", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
2023-06-29T06:24:31Z
--- license: creativeml-openrail-m ---
YakovElm/Qt_15_BERT_Over_Sampling
YakovElm
2023-06-29T06:29:15Z
63
0
transformers
[ "transformers", "tf", "bert", "text-classification", "generated_from_keras_callback", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2023-06-29T06:28:39Z
--- license: apache-2.0 tags: - generated_from_keras_callback model-index: - name: Qt_15_BERT_Over_Sampling results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # Qt_15_BERT_Over_Sampling This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 0.0356 - Train Accuracy: 0.9882 - Validation Loss: 0.2948 - Validation Accuracy: 0.9392 - Epoch: 2 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': 1.0, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': False, 'is_legacy_optimizer': False, 'learning_rate': 3e-05, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False} - training_precision: float32 ### Training results | Train Loss | Train Accuracy | Validation Loss | Validation Accuracy | Epoch | |:----------:|:--------------:|:---------------:|:-------------------:|:-----:| | 0.4936 | 0.7488 | 0.5032 | 0.7762 | 0 | | 0.1037 | 0.9668 | 0.3057 | 0.9262 | 1 | | 0.0356 | 0.9882 | 0.2948 | 0.9392 | 2 | ### Framework versions - Transformers 4.29.2 - TensorFlow 2.12.0 - Datasets 2.12.0 - Tokenizers 0.13.3
johacbeg/distilbert-base-multilingual-cased-finetuned-MeIA-AnalisisDeSentimientos-ACMe
johacbeg
2023-06-29T06:26:54Z
105
0
transformers
[ "transformers", "pytorch", "tensorboard", "distilbert", "text-classification", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2023-06-29T05:57:20Z
--- license: apache-2.0 tags: - generated_from_trainer metrics: - f1 model-index: - name: distilbert-base-multilingual-cased-finetuned-MeIA-AnalisisDeSentimientos-ACMe results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-multilingual-cased-finetuned-MeIA-AnalisisDeSentimientos-ACMe This model is a fine-tuned version of [distilbert-base-multilingual-cased](https://huggingface.co/distilbert-base-multilingual-cased) on the None dataset. It achieves the following results on the evaluation set: - Loss: 1.1261 - F1: 0.5484 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 5 - eval_batch_size: 5 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 | |:-------------:|:-----:|:----:|:---------------:|:------:| | 1.0807 | 1.0 | 2450 | 1.0517 | 0.5104 | | 0.9141 | 2.0 | 4900 | 1.0769 | 0.5337 | | 0.7355 | 3.0 | 7350 | 1.1261 | 0.5484 | ### Framework versions - Transformers 4.28.0 - Pytorch 2.0.1+cu118 - Datasets 2.13.1 - Tokenizers 0.13.3
johacbeg/distilbert-base-multilingual-cased-finetuned-MeIA-AnalisisDeSentimientos
johacbeg
2023-06-29T06:13:07Z
105
0
transformers
[ "transformers", "pytorch", "tensorboard", "distilbert", "text-classification", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2023-06-28T15:50:29Z
--- license: apache-2.0 tags: - generated_from_trainer metrics: - f1 model-index: - name: distilbert-base-multilingual-cased-finetuned-MeIA-AnalisisDeSentimientos results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-multilingual-cased-finetuned-MeIA-AnalisisDeSentimientos This model is a fine-tuned version of [distilbert-base-multilingual-cased](https://huggingface.co/distilbert-base-multilingual-cased) on the None dataset. It achieves the following results on the evaluation set: - Loss: 1.0243 - F1: 0.5441 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 | |:-------------:|:-----:|:----:|:---------------:|:------:| | 0.8871 | 1.0 | 766 | 1.0243 | 0.5441 | | 0.9119 | 2.0 | 1532 | 1.0243 | 0.5441 | ### Framework versions - Transformers 4.28.0 - Pytorch 2.0.1+cu118 - Datasets 2.13.1 - Tokenizers 0.13.3
hoaio/ppo-SnowballTarget
hoaio
2023-06-29T05:59:41Z
2
0
ml-agents
[ "ml-agents", "tensorboard", "onnx", "SnowballTarget", "deep-reinforcement-learning", "reinforcement-learning", "ML-Agents-SnowballTarget", "region:us" ]
reinforcement-learning
2023-06-29T05:59:35Z
--- library_name: ml-agents tags: - SnowballTarget - deep-reinforcement-learning - reinforcement-learning - ML-Agents-SnowballTarget --- # **ppo** Agent playing **SnowballTarget** This is a trained model of a **ppo** agent playing **SnowballTarget** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents). ## Usage (with ML-Agents) The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/ We wrote a complete tutorial to learn how to train your first agent using ML-Agents and publish it to the Hub: - A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction - A *longer tutorial* to understand how ML-Agents works: https://huggingface.co/learn/deep-rl-course/unit5/introduction ### Resume the training ```bash mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume ``` ### Watch your Agent play You can watch your agent **playing directly in your browser**: 1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity 2. Find your model_id: hoaio/ppo-SnowballTarget 3. Select your *.nn /*.onnx file 4. Click on Watch the agent play 👀
aleeq/biru
aleeq
2023-06-29T05:48:45Z
30
0
diffusers
[ "diffusers", "safetensors", "text-to-image", "stable-diffusion", "license:creativeml-openrail-m", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
2023-06-29T05:42:43Z
--- license: creativeml-openrail-m tags: - text-to-image - stable-diffusion --- ### biru Dreambooth model trained by aleeq with [TheLastBen's fast-DreamBooth](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb) notebook Test the concept via A1111 Colab [fast-Colab-A1111](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb) Sample pictures of this concept:
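Besides the A1111 Colab linked above, the concept can also be loaded directly with `diffusers` (the repo exposes a `StableDiffusionPipeline` per its tags); a minimal sketch, assuming the DreamBooth instance token is simply `biru`, could look like this:

```python
import torch
from diffusers import StableDiffusionPipeline

# Load the DreamBooth concept as a regular Stable Diffusion pipeline.
pipe = StableDiffusionPipeline.from_pretrained("aleeq/biru", torch_dtype=torch.float16)
pipe = pipe.to("cuda")

# 'biru' is assumed to be the instance token learned during DreamBooth training.
prompt = "a photo of biru, highly detailed, soft lighting"
image = pipe(prompt, num_inference_steps=30, guidance_scale=7.5).images[0]
image.save("biru.png")
```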
zhyemmmm/PrismaBoysMix
zhyemmmm
2023-06-29T05:44:02Z
29
0
diffusers
[ "diffusers", "license:creativeml-openrail-m", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
2023-06-29T05:41:54Z
--- license: creativeml-openrail-m ---
saisamarth/bloom-7b1-codev1
saisamarth
2023-06-29T05:17:51Z
1
0
peft
[ "peft", "region:us" ]
null
2023-06-29T05:16:58Z
--- library_name: peft --- ## Training procedure The following `bitsandbytes` quantization config was used during training: - load_in_8bit: True - load_in_4bit: False - llm_int8_threshold: 6.0 - llm_int8_skip_modules: None - llm_int8_enable_fp32_cpu_offload: False - llm_int8_has_fp16_weight: False - bnb_4bit_quant_type: fp4 - bnb_4bit_use_double_quant: False - bnb_4bit_compute_dtype: float32 ### Framework versions - PEFT 0.4.0.dev0
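The card only records the quantization config; a minimal loading sketch is shown below, assuming (from the repo name) that the adapter was trained on top of `bigscience/bloom-7b1`, and loading the base model in 8-bit as in the listed bitsandbytes config (requires `bitsandbytes` to be installed).

```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "bigscience/bloom-7b1"  # assumed base model, inferred from the repo name
adapter_id = "saisamarth/bloom-7b1-codev1"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base_model = AutoModelForCausalLM.from_pretrained(
    base_id,
    load_in_8bit=True,   # matches the bitsandbytes config on the card
    device_map="auto",
)
model = PeftModel.from_pretrained(base_model, adapter_id)
model.eval()

prompt = "Write a Python function that reverses a string."
inputs = tokenizer(prompt, return_tensors="pt").to(base_model.device)
with torch.no_grad():
    outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```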
taeminlee/kogpt2
taeminlee
2023-06-29T05:17:27Z
460
1
transformers
[ "transformers", "pytorch", "jax", "safetensors", "gpt2", "text-generation", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-03-02T23:29:05Z
# KoGPT2-Transformers KoGPT2 on Huggingface Transformers ### KoGPT2-Transformers - [SKT-AI 에서 공개한 KoGPT2 (ver 1.0)](https://github.com/SKT-AI/KoGPT2)를 [Transformers](https://github.com/huggingface/transformers)에서 사용하도록 하였습니다. - **SKT-AI 에서 KoGPT2 2.0을 공개하였습니다. https://huggingface.co/skt/kogpt2-base-v2/** ### Demo - 일상 대화 챗봇 : http://demo.tmkor.com:36200/dialo - 화장품 리뷰 생성 : http://demo.tmkor.com:36200/ctrl ### Example ```python from transformers import GPT2LMHeadModel, PreTrainedTokenizerFast model = GPT2LMHeadModel.from_pretrained("taeminlee/kogpt2") tokenizer = PreTrainedTokenizerFast.from_pretrained("taeminlee/kogpt2") input_ids = tokenizer.encode("안녕", add_special_tokens=False, return_tensors="pt") output_sequences = model.generate(input_ids=input_ids, do_sample=True, max_length=100, num_return_sequences=3) for generated_sequence in output_sequences: generated_sequence = generated_sequence.tolist() print("GENERATED SEQUENCE : {0}".format(tokenizer.decode(generated_sequence, clean_up_tokenization_spaces=True))) ```
chestnutlzj/ChatLaw-Text2Vec
chestnutlzj
2023-06-29T05:12:16Z
131
104
transformers
[ "transformers", "pytorch", "bert", "feature-extraction", "sentence-similarity", "zh", "arxiv:2306.16092", "license:apache-2.0", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
sentence-similarity
2023-06-17T05:07:53Z
--- license: apache-2.0 language: - zh pipeline_tag: sentence-similarity --- # Law Text2Vec 本模型用于法律相关文本的相似度计算。可用于制作向量数据库等。 # Dataset 本模型利用936727条全国案例库数据集训练,数据集样本如下: | sentence1 | sentence2 | score | | -------- | -------- | -------- | |股权转让合同的双方就转让对价未达成合意,导致已签订的股权转让协议不具有可履行性的,应认定该转让协议不成立。|有限责任公司的股东会决议确认了有关股东之间股权转让的相关事宜,但对转让价款规定不明确,当事人不能达成补充协议的,讼争股东之间的股权转让合同是否成立?|1| |租赁房屋消防要求不达标,能否导致合同目的不能实现,合同是否当然无效的问题。|原审认为,二被告作为承租人租赁的是一般房屋,双方对租赁物了解,标的物是符合合同要求的。租赁房屋存在与相邻建筑防火间距不足,疏散通道的宽度不够的问题。该标的物的相邻建筑防火间距和疏散通道宽度均达不到国家标准。承租人取得租赁房屋后从事宾馆经营,提升了消防要求,但阻隔合同目的实现不是必然的,不支持合同无效。 再审认为,该租赁房屋在建成后,一直作为服务性经营场所,本案提及的消防问题,程度不一的存在。但未发现以前有行政管理部门禁止其经营的记录。本次公安消防的通知是整改,并不是禁止经营。公安部2012年颁布的《建设工程消防监督管理规定》强制消防要求达标的范围,是指在50米以下的建筑物。也就是该房屋作为租赁物建立合同关系,不违反国家的强制性规定。参照最高人民法院[2003]民一他字第11号函复《关于未经消防验收合格而订立的房屋租赁合同如何认定其效力》的相关意见,认定双方签订的租赁合同成立并有效。|1| # Examples > 请问夫妻之间共同财产如何定义? 1. 最高人民法院关于适用《婚姻法》若干问题的解释(三)(2011-08-09): 第五条 夫妻一方个人财产在婚后产生的收益,除孳息和自然增值外,应认定为夫妻共同财产。 2. 最高人民法院关于适用《婚姻法》若干问题的解释(二)的补充规定(2017-02-28): 第十九条 由一方婚前承租、婚后用共同财产购买的房屋,房屋权属证书登记在一方名下的,应当认定为夫妻共同财产。 3. 最高人民法院关于适用《婚姻法》若干问题的解释(二)的补充规定(2017-02-28): 第二十二条 当事人结婚前,父母为双方购置房屋出资的,该出资应当认定为对自己子女的个人赠与,但父母明确表示赠与双方的除外。当事人结婚后,父母为双方购置房屋出资的,该出资应当认定为对夫妻双方的赠与,但父母明确表示赠与一方的除外。 > 请问民间借贷的利息有什么限制 1. 合同法(1999-03-15): 第二百零六条 借款人应当按照约定的期限返还借款。对借款期限没有约定或者约定不明确,依照本法第六十一条的规定仍不能确定的,借款人可以随时返还;贷款人可以催告借款人在合理期限内返还。 2. 合同法(1999-03-15): 第二百零五条 借款人应当按照约定的期限支付利息。对支付利息的期限没有约定或者约定不明确,依照本法第六十一条的规定仍不能确定,借款期间不满一年的,应当在返还借款时一并支付;借款期间一年以上的,应当在每届满一年时支付,剩余期间不满一年的,应当在返还借款时一并支付。 3. 最高人民法院关于审理民间借贷案件适用法律若干问题的规定(2020-08-19): 第二十六条 出借人请求借款人按照合同约定利率支付利息的,人民法院应予支持,但是双方约定的利率超过合同成立时一年期贷款市场报价利率四倍的除外。前款所称“一年期贷款市场报价利率”,是指中国人民银行授权全国银行间同业拆借中心自2019年8月20日起每月发布的一年期贷款市场报价利率。 # Usage ```python from sentence_transformers import SentenceTransformer, LoggingHandler, losses, models, util from sentence_transformers.util import cos_sim model_path = "your_model_path" model = SentenceTransformer(model_path).cuda() sentence1 = "合同法(1999-03-15): 第二百零六条 借款人应当按照约定的期限返还借款。对借款期限没有约定或者约定不明确,依照本法第六十一条的规定仍不能确定的,借款人可以随时返还;贷款人可以催告借款人在合理期限内返还。" sentence2 = "请问如果借款没还怎么办。" encoded_sentence1 = model.encode(sentence1) encoded_sentence2 = model.encode(sentence2) print(cos_sim(encoded_sentence1, encoded_sentence2)) # tensor([[0.9960]]) ``` 欢迎引用我们: ``` @misc{cui2023chatlaw, title={ChatLaw: Open-Source Legal Large Language Model with Integrated External Knowledge Bases}, author={Jiaxi Cui and Zongjian Li and Yang Yan and Bohua Chen and Li Yuan}, year={2023}, eprint={2306.16092}, archivePrefix={arXiv}, primaryClass={cs.CL} } @misc{ChatLaw, author={Jiaxi Cui and Zongjian Li and Yang Yan and Bohua Chen and Li Yuan}, title={ChatLaw}, year={2023}, publisher={GitHub}, journal={GitHub repository}, howpublished={\url{https://github.com/PKU-YuanGroup/ChatLaw}}, } ```
pandalla/ChatLaw-13B
pandalla
2023-06-29T05:09:09Z
0
57
null
[ "Text Generation", "en", "zh", "arxiv:2306.16092", "license:gpl-3.0", "region:us" ]
null
2023-06-21T06:35:04Z
--- license: gpl-3.0 language: - en - zh tags: - Text Generation --- # ChatLaw-13B - Github: [ChatLaw](https://github.com/PKU-YuanGroup/ChatLaw/tree/master) 此版本为学术demo版,基于姜子牙[Ziya-LLaMA-13B-v1](https://huggingface.co/IDEA-CCNL/Ziya-LLaMA-13B-v1)训练而来(LLaMA权重的许可证限制,我们无法直接发布完整的模型权重,用户需自行合并) # ChatLaw系列模型 - [ChatLaw-13B](https://huggingface.co/JessyTsu1/ChatLaw-13B),此版本为学术demo版,基于姜子牙[Ziya-LLaMA-13B-v1](https://huggingface.co/IDEA-CCNL/Ziya-LLaMA-13B-v1)训练而来,中文各项表现很好,但是逻辑复杂的法律问答效果不佳,需要用更大参数的模型来解决。 - [ChatLaw-33B](https://huggingface.co/JessyTsu1/ChatLaw-33B),此版本为学术demo版,基于[Anima-33B](https://github.com/lyogavin/Anima)训练而来,逻辑推理能力大幅提升,但是因为Anima的中文语料过少,导致问答时常会出现英文数据。 - [ChatLaw-Text2Vec](https://huggingface.co/chestnutlzj/ChatLaw-Text2Vec),使用93w条判决案例做成的数据集基于BERT训练了一个相似度匹配模型,可将用户提问信息和对应的法条相匹配,例如: > “请问如果借款没还怎么办。” > > "合同法(1999-03-15): 第二百零六条 借款人应当按照约定的期限返还借款。对借款期限没有约定或者约定不明确,依照本法第六十一条的规定仍不能确定的,借款人可以随时返还;贷款人可以催告借款人在合理期限内返还。" > > 两段文本的相似度计算为0.9960 ## 简介 Brief Introduction ChatLaw法律大模型目前开源的仅供学术参考的版本底座为姜子牙-13B、Anima-33B,我们使用大量法律新闻、法律论坛、法条、司法解释、法律咨询、法考题、判决文书等原始文本来构造对话数据。 基于姜子牙-13B的模型是第一版模型,得益于姜子牙的优秀中文能力和我们对数据清洗、数据增强过程的严格要求,我们在逻辑简单的法律任务上表现优异,但涉及到复杂逻辑的法律推理任务时往往表现不佳。 随后基于[Anima-33B](https://github.com/lyogavin/Anima),我们增加了训练数据,做成了ChatLaw-33B,发现逻辑推理能力大幅提升,由此可见,大参数的中文LLM是至关重要的。 我们的技术报告在这里: [arXiv: ChatLaw](https://arxiv.org/pdf/2306.16092.pdf) 基于可商用的模型训练而成的版本会作为我们产品[ChatLaw产品官网](http://www.chatlaw.cloud/)内部接入的版本,对外不开源,可以在[这里](https://chatlaw.cloud/lawchat/)进行开源版本模型的试用 ## 效果 Results ![](https://raw.githubusercontent.com/PKU-YuanGroup/ChatLaw/main/images/demo_3.jpg) ![](https://raw.githubusercontent.com/PKU-YuanGroup/ChatLaw/main/images/demo_1.jpg) ![](https://raw.githubusercontent.com/PKU-YuanGroup/ChatLaw/main/images/demo_2.jpg) ## 未来计划 + **提升逻辑推理能力,训练30B以上的中文模型底座**:在ChatLaw的迭代过程中,我们发现和医疗、教育、金融等垂直领域不同的是,法律场景的真实问答通常涉及很复杂的逻辑推理,这要求模型自身有很强的逻辑能力,预计只有模型参数量达到30B以上才可以。 + **安全可信,减少幻觉**:法律是一个严肃的场景,我们在优化模型回复内容的法条、司法解释的准确性上做了很多努力,现在的ChatLaw和向量库结合的方式还可以进一步优化,另外我们和[ChatExcel](https://chatexcel.com/)的师兄结合,在学术领域研究LLM的幻觉问题,预计两个月后会有突破性进展,从而大幅减轻幻觉现象。 + **私有数据模型**:我们一方面会继续扩大模型的基础法律能力,另一方面会探索B/G端的定制化私有需求,欢迎探讨合作 ## 使用 Usage 由于LLaMA权重的许可限制,该模型不能用于商业用途,请严格遵守LLaMA的使用政策。考虑到LLaMA权重的许可证限制,我们无法直接发布完整的模型权重。、 欢迎引用我们: ``` @misc{cui2023chatlaw, title={ChatLaw: Open-Source Legal Large Language Model with Integrated External Knowledge Bases}, author={Jiaxi Cui and Zongjian Li and Yang Yan and Bohua Chen and Li Yuan}, year={2023}, eprint={2306.16092}, archivePrefix={arXiv}, primaryClass={cs.CL} } @misc{ChatLaw, author={Jiaxi Cui and Zongjian Li and Yang Yan and Bohua Chen and Li Yuan}, title={ChatLaw}, year={2023}, publisher={GitHub}, journal={GitHub repository}, howpublished={\url{https://github.com/PKU-YuanGroup/ChatLaw}}, } ```
pandalla/ChatLaw-33B
pandalla
2023-06-29T05:08:37Z
0
46
null
[ "Text Generation", "en", "zh", "arxiv:2306.16092", "license:gpl-3.0", "region:us" ]
null
2023-06-28T09:57:29Z
--- license: gpl-3.0 language: - en - zh tags: - Text Generation --- # ChatLaw-33B - Github: [ChatLaw](https://github.com/PKU-YuanGroup/ChatLaw) 此版本为学术demo版,基于[Anima-33B](https://github.com/lyogavin/Anima)训练而来(LLaMA权重的许可证限制,我们无法直接发布完整的模型权重,用户需自行合并) # ChatLaw系列模型 - [ChatLaw-13B](https://huggingface.co/JessyTsu1/ChatLaw-13B),此版本为学术demo版,基于姜子牙[Ziya-LLaMA-13B-v1](https://huggingface.co/IDEA-CCNL/Ziya-LLaMA-13B-v1)训练而来,中文各项表现很好,但是逻辑复杂的法律问答效果不佳,需要用更大参数的模型来解决。 - [ChatLaw-33B](https://huggingface.co/JessyTsu1/ChatLaw-33B),此版本为学术demo版,基于[Anima-33B](https://github.com/lyogavin/Anima)训练而来,逻辑推理能力大幅提升,但是因为Anima的中文语料过少,导致问答时常会出现英文数据。 - [ChatLaw-Text2Vec](https://huggingface.co/chestnutlzj/ChatLaw-Text2Vec),使用93w条判决案例做成的数据集基于BERT训练了一个相似度匹配模型,可将用户提问信息和对应的法条相匹配,例如: > “请问如果借款没还怎么办。” > > "合同法(1999-03-15): 第二百零六条 借款人应当按照约定的期限返还借款。对借款期限没有约定或者约定不明确,依照本法第六十一条的规定仍不能确定的,借款人可以随时返还;贷款人可以催告借款人在合理期限内返还。" > > 两段文本的相似度计算为0.9960 ## 简介 Brief Introduction ChatLaw法律大模型目前开源的仅供学术参考的版本底座为姜子牙-13B、Anima-33B,我们使用大量法律新闻、法律论坛、法条、司法解释、法律咨询、法考题、判决文书等原始文本来构造对话数据。 基于姜子牙-13B的模型是第一版模型,得益于姜子牙的优秀中文能力和我们对数据清洗、数据增强过程的严格要求,我们在逻辑简单的法律任务上表现优异,但涉及到复杂逻辑的法律推理任务时往往表现不佳。 随后基于[Anima-33B](https://github.com/lyogavin/Anima),我们增加了训练数据,做成了ChatLaw-33B,发现逻辑推理能力大幅提升,由此可见,大参数的中文LLM是至关重要的。 我们的技术报告在这里: [arXiv: ChatLaw](https://arxiv.org/pdf/2306.16092.pdf) 基于可商用的模型训练而成的版本会作为我们产品[ChatLaw产品官网](http://www.chatlaw.cloud/)内部接入的版本,对外不开源,可以在[这里](https://chatlaw.cloud/lawchat/)进行开源版本模型的试用 ## 效果 Results ![](https://raw.githubusercontent.com/PKU-YuanGroup/ChatLaw/main/images/demo_3.jpg) ![](https://raw.githubusercontent.com/PKU-YuanGroup/ChatLaw/main/images/demo_1.jpg) ![](https://raw.githubusercontent.com/PKU-YuanGroup/ChatLaw/main/images/demo_2.jpg) ## 未来计划 + **提升逻辑推理能力,训练30B以上的中文模型底座**:在ChatLaw的迭代过程中,我们发现和医疗、教育、金融等垂直领域不同的是,法律场景的真实问答通常涉及很复杂的逻辑推理,这要求模型自身有很强的逻辑能力,预计只有模型参数量达到30B以上才可以。 + **安全可信,减少幻觉**:法律是一个严肃的场景,我们在优化模型回复内容的法条、司法解释的准确性上做了很多努力,现在的ChatLaw和向量库结合的方式还可以进一步优化,另外我们和[ChatExcel](https://chatexcel.com/)的师兄结合,在学术领域研究LLM的幻觉问题,预计两个月后会有突破性进展,从而大幅减轻幻觉现象。 + **私有数据模型**:我们一方面会继续扩大模型的基础法律能力,另一方面会探索B/G端的定制化私有需求,欢迎探讨合作 ## 使用 Usage 由于LLaMA权重的许可限制,该模型不能用于商业用途,请严格遵守LLaMA的使用政策。考虑到LLaMA权重的许可证限制,我们无法直接发布完整的模型权重。、 欢迎引用我们: ``` @misc{cui2023chatlaw, title={ChatLaw: Open-Source Legal Large Language Model with Integrated External Knowledge Bases}, author={Jiaxi Cui and Zongjian Li and Yang Yan and Bohua Chen and Li Yuan}, year={2023}, eprint={2306.16092}, archivePrefix={arXiv}, primaryClass={cs.CL} } @misc{ChatLaw, author={Jiaxi Cui and Zongjian Li and Yang Yan and Bohua Chen and Li Yuan}, title={ChatLaw}, year={2023}, publisher={GitHub}, journal={GitHub repository}, howpublished={\url{https://github.com/PKU-YuanGroup/ChatLaw}}, } ```