Dataset columns (dtype and observed range, as reported by the viewer):

| Column | Dtype | Range / Values |
|---|---|---|
| modelId | string | length 5 to 139 |
| author | string | length 2 to 42 |
| last_modified | timestamp[us, tz=UTC] | 2020-02-15 11:33:14 to 2025-09-02 12:32:32 |
| downloads | int64 | 0 to 223M |
| likes | int64 | 0 to 11.7k |
| library_name | string (categorical) | 534 distinct values |
| tags | list | length 1 to 4.05k |
| pipeline_tag | string (categorical) | 55 distinct values |
| createdAt | timestamp[us, tz=UTC] | 2022-03-02 23:29:04 to 2025-09-02 12:31:20 |
| card | string | length 11 to 1.01M |
jjmcarrascosa/ppo-lunarlander-v2-custom
jjmcarrascosa
2023-12-08T12:37:54Z
0
0
null
[ "tensorboard", "LunarLander-v2", "ppo", "deep-reinforcement-learning", "reinforcement-learning", "custom-implementation", "deep-rl-course", "model-index", "region:us" ]
reinforcement-learning
2023-12-08T12:37:10Z
--- tags: - LunarLander-v2 - ppo - deep-reinforcement-learning - reinforcement-learning - custom-implementation - deep-rl-course model-index: - name: PPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 metrics: - type: mean_reward value: -38.37 +/- 20.99 name: mean_reward verified: false --- # PPO Agent Playing LunarLander-v2 This is a trained model of a PPO agent playing LunarLander-v2. # Hyperparameters ```python {'exp_name': 'ppo', 'seed': 49, 'torch_deterministic': True, 'cuda': True, 'track': False, 'wandb_project_name': 'cleanRL', 'wandb_entity': None, 'capture_video': False, 'env_id': 'LunarLander-v2', 'total_timesteps': 1000000, 'learning_rate': 0.0005, 'num_envs': 256, 'num_steps': 128, 'anneal_lr': True, 'gae': True, 'gamma': 0.99, 'gae_lambda': 0.95, 'num_minibatches': 8, 'update_epochs': 8, 'norm_adv': True, 'clip_coef': 0.2, 'clip_vloss': True, 'ent_coef': 0.01, 'vf_coef': 0.5, 'max_grad_norm': 0.5, 'target_kl': None, 'repo_id': 'jjmcarrascosa/ppo-lunarlander-v2-custom', 'batch_size': 32768, 'minibatch_size': 4096} ```
cjvt/t5-slo-word-spelling-corrector
cjvt
2023-12-08T12:20:21Z
13
0
transformers
[ "transformers", "pytorch", "t5", "text2text-generation", "word spelling correction", "sl", "dataset:cjvt/cc_gigafida", "dataset:cjvt/solar3", "dataset:cjvt/sloleks", "license:cc-by-sa-4.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text2text-generation
2023-11-12T16:33:20Z
--- license: cc-by-sa-4.0 datasets: - cjvt/cc_gigafida - cjvt/solar3 - cjvt/sloleks language: - sl tags: - word spelling correction --- # T5-incorrect-word-spelling-corrector This T5 model is designed to identify and correct words with incorrect spelling in the Slovenian language. ## Model Output Example Consider the following Slovenian text: _Model v besedlu popravi napaake v nepravilno črkovanih besedah._ The model might return the following text (note: predictions chosen for demonstration/explanation, not reproducibility!): _Model v besedilu popravi napake v nepravilno črkovanih besedah._ We observe that in the input sentence, the words `besedlu` and `napaake` are incorrectly spelled, so the model corrects them to `besedilu` and `napake`. ## More details Testing the model with **generated** test sets provides the following results (combining detection and correction of words with incorrect spelling): - `Precision`: 0.986 - `Recall`: 0.935 - `F1`: 0.960 Testing the model, in combination with **cjvt/SloBERTa-slo-word-spelling-annotator**, on test sets constructed from the **Šolar Eval** dataset provides the following results (combining detection and correction of words with incorrect spelling): - `Precision`: 0.823 - `Recall`: 0.796 - `F1`: 0.810 ## Acknowledgement The authors acknowledge the financial support from the Slovenian Research and Innovation Agency - research core funding No. P6-0411: Language Resources and Technologies for Slovene, and research project No. J7-3159: Empirical foundations for digitally-supported development of writing skills. ## Authors Thanks to Martin Božič, Marko Robnik-Šikonja and Špela Arhar Holdt for developing these models.
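The cjvt/t5-slo-word-spelling-corrector card above gives an input/output example but no code; a minimal inference sketch with the standard 🤗 Transformers `text2text-generation` pipeline follows. The input granularity the model expects (whole sentence vs. single word) is an assumption here and worth checking against the repo.

```python
from transformers import pipeline

# Hedged sketch: the model is tagged text2text-generation, so the generic
# T5 pipeline should load it; whether it expects full sentences or single
# words as input is an assumption.
corrector = pipeline("text2text-generation", model="cjvt/t5-slo-word-spelling-corrector")

text = "Model v besedlu popravi napaake v nepravilno črkovanih besedah."
print(corrector(text, max_new_tokens=64)[0]["generated_text"])
```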
anhminh3105/my_awesome_model
anhminh3105
2023-12-08T12:11:53Z
7
0
transformers
[ "transformers", "tensorboard", "safetensors", "distilbert", "text-classification", "generated_from_trainer", "dataset:imdb", "base_model:distilbert/distilbert-base-uncased", "base_model:finetune:distilbert/distilbert-base-uncased", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2023-12-08T11:08:27Z
--- license: apache-2.0 base_model: distilbert-base-uncased tags: - generated_from_trainer datasets: - imdb metrics: - accuracy model-index: - name: my_awesome_model results: - task: name: Text Classification type: text-classification dataset: name: imdb type: imdb config: plain_text split: test args: plain_text metrics: - name: Accuracy type: accuracy value: 0.93244 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # my_awesome_model This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset. It achieves the following results on the evaluation set: - Loss: 0.2317 - Accuracy: 0.9324 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.2247 | 1.0 | 1563 | 0.1985 | 0.9246 | | 0.1468 | 2.0 | 3126 | 0.2317 | 0.9324 | ### Framework versions - Transformers 4.35.2 - Pytorch 2.1.0+cu118 - Datasets 2.15.0 - Tokenizers 0.15.0
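The anhminh3105/my_awesome_model card above reports IMDB accuracy but no inference snippet; a short sketch using the standard Transformers pipeline follows (the label names come from the repo's config and are not stated in the card).

```python
from transformers import pipeline

# Minimal inference sketch for the DistilBERT model fine-tuned on IMDB.
# Label names (e.g. LABEL_0/LABEL_1 vs. NEGATIVE/POSITIVE) depend on the
# repo's config, which the card does not document.
classifier = pipeline("text-classification", model="anhminh3105/my_awesome_model")
print(classifier("This movie was an absolute delight from start to finish."))
```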
TCLResearchEurope/yolov8n_pruned_73
TCLResearchEurope
2023-12-08T12:01:53Z
0
0
null
[ "pytorch", "torch-dag", "object-detection", "license:apache-2.0", "region:us" ]
object-detection
2023-12-08T11:33:20Z
--- license: apache-2.0 pipeline_tag: object-detection tags: - pytorch - torch-dag --- # Model Card for yolov8n_pruned_73 This is a pruned version of the [YOLOv8n](https://github.com/ultralytics/ultralytics#models) model in the [torch-dag](https://github.com/TCLResearchEurope/torch-dag) format. This model has roughly 73% of the original model's FLOPs with a small drop in metrics. | Model | KMAPPs* | M Parameters | mAP50-95 (640x640) | | ----------- | ------- | ------------ | ------------------ | | **YOLOv8n (baseline)** | 21.5 | 3.16 | 37.3 | | **yolov8n_pruned_73 (ours)** | 15.7 **(73%)** | 2.65 **(84%)** | 34.9 **(↓ 2.4)** | \***KMAPPs**: thousands of FLOPs per input pixel, `KMAPPs(model) = FLOPs(model) / (H * W * 1000)`, where `(H, W)` is the input resolution. The accuracy was calculated on the COCO val2017 dataset. For details about image pre-processing, please refer to the original repository. ## Model Details ### Model Description - **Developed by:** [TCL Research Europe](https://github.com/TCLResearchEurope/) - **Model type:** Object detection - **License:** Apache 2.0 - **Finetuned from model:** [YOLOv8n](https://github.com/ultralytics/ultralytics#models) ### Model Sources - **Repository:** [YOLOv8n](https://github.com/ultralytics/ultralytics#models) ## How to Get Started with the Model To load the model, you have to install the [torch-dag](https://github.com/TCLResearchEurope/torch-dag#3-installation) library, which can be done using `pip`: ```bash pip install torch-dag ``` then clone this repository: ```bash # Make sure you have git-lfs installed (https://git-lfs.com) git lfs install git clone https://huggingface.co/TCLResearchEurope/yolov8n_pruned_73 ``` and now you are ready to load the model: ```python import torch_dag import torch model = torch_dag.io.load_dag_from_path('./yolov8n_pruned_73') model.eval() out = model(torch.ones(1, 3, 224, 224)) print(out.shape) ```
Deeksha04/peftResnet50
Deeksha04
2023-12-08T11:52:31Z
2
0
peft
[ "peft", "safetensors", "arxiv:1910.09700", "base_model:microsoft/resnet-50", "base_model:adapter:microsoft/resnet-50", "region:us" ]
null
2023-12-06T04:05:55Z
--- library_name: peft base_model: microsoft/resnet-50 --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ## Training procedure ### Framework versions - PEFT 0.6.2
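The Deeksha04/peftResnet50 card above is an unfilled template, so there is no documented way to load the adapter. A hedged sketch of the usual PEFT loading path follows; the assumption that the adapter targets image classification on top of microsoft/resnet-50 comes only from the repo metadata, not from the card.

```python
import numpy as np
import torch
from PIL import Image
from peft import PeftModel
from transformers import AutoImageProcessor, AutoModelForImageClassification

# Hedged sketch: load the base ResNet-50 classifier and attach the PEFT adapter.
# The task (image classification) and label set are assumptions from the metadata.
processor = AutoImageProcessor.from_pretrained("microsoft/resnet-50")
base_model = AutoModelForImageClassification.from_pretrained("microsoft/resnet-50")
model = PeftModel.from_pretrained(base_model, "Deeksha04/peftResnet50")
model.eval()

# Dummy 224x224 RGB image, just to show the forward pass.
image = Image.fromarray(np.zeros((224, 224, 3), dtype=np.uint8))
inputs = processor(images=image, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
print(logits.shape)
```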
facebook/hf-seamless-m4t-medium
facebook
2023-12-08T11:47:26Z
11,445
29
transformers
[ "transformers", "pytorch", "seamless_m4t", "feature-extraction", "SeamlessM4T", "text-to-speech", "license:cc-by-nc-4.0", "endpoints_compatible", "region:us" ]
text-to-speech
2023-08-28T16:30:57Z
--- inference: true tags: - SeamlessM4T - seamless_m4t license: cc-by-nc-4.0 library_name: transformers pipeline_tag: text-to-speech --- # SeamlessM4T Medium SeamlessM4T is a collection of models designed to provide high quality translation, allowing people from different linguistic communities to communicate effortlessly through speech and text. This repository hosts 🤗 Hugging Face's [implementation](https://huggingface.co/docs/transformers/main/en/model_doc/seamless_m4t) of SeamlessM4T. You can find the original weights, as well as a guide on how to run them in the original hub repositories ([large](https://huggingface.co/facebook/seamless-m4t-large) and [medium](https://huggingface.co/facebook/seamless-m4t-medium) checkpoints). ------------------- **🌟 SeamlessM4T v2, an improved version of this version with a novel architecture, has been released [here](https://huggingface.co/facebook/seamless-m4t-v2-large). This new model improves over SeamlessM4T v1 in quality as well as inference speed in speech generation tasks.** **SeamlessM4T v2 is also supported by 🤗 Transformers, more on it [in the model card of this new version](https://huggingface.co/facebook/seamless-m4t-v2-large#transformers-usage) or directly in [🤗 Transformers docs](https://huggingface.co/docs/transformers/main/en/model_doc/seamless_m4t_v2).** ------------------- SeamlessM4T Medium covers: - 📥 101 languages for speech input - ⌨️ [196 Languages](https://huggingface.co/ylacombe/hf-seamless-m4t-medium/blob/main/tokenizer_config.json#L1887-L2089) for text input/output - 🗣️ [35 languages](https://huggingface.co/ylacombe/hf-seamless-m4t-medium/blob/main/generation_config.json#L253-L288) for speech output. This is the "medium" variant of the unified model, which enables multiple tasks without relying on multiple separate models: - Speech-to-speech translation (S2ST) - Speech-to-text translation (S2TT) - Text-to-speech translation (T2ST) - Text-to-text translation (T2TT) - Automatic speech recognition (ASR) You can perform all the above tasks from one single model, [`SeamlessM4TModel`](https://huggingface.co/docs/transformers/main/en/model_doc/seamless_m4t#transformers.SeamlessM4TModel), but each task also has its own dedicated sub-model. ## 🤗 Usage First, load the processor and a checkpoint of the model: ```python >>> from transformers import AutoProcessor, SeamlessM4TModel >>> processor = AutoProcessor.from_pretrained("facebook/hf-seamless-m4t-medium") >>> model = SeamlessM4TModel.from_pretrained("facebook/hf-seamless-m4t-medium") ``` You can seamlessly use this model on text or on audio, to generated either translated text or translated audio. Here is how to use the processor to process text and audio: ```python >>> # let's load an audio sample from an Arabic speech corpus >>> from datasets import load_dataset >>> dataset = load_dataset("arabic_speech_corpus", split="test", streaming=True) >>> audio_sample = next(iter(dataset))["audio"] >>> # now, process it >>> audio_inputs = processor(audios=audio_sample["array"], return_tensors="pt") >>> # now, process some English test as well >>> text_inputs = processor(text = "Hello, my dog is cute", src_lang="eng", return_tensors="pt") ``` ### Speech [`SeamlessM4TModel`](https://huggingface.co/docs/transformers/main/en/model_doc/seamless_m4t#transformers.SeamlessM4TModel) can *seamlessly* generate text or speech with few or no changes. 
Let's target Russian voice translation: ```python >>> audio_array_from_text = model.generate(**text_inputs, tgt_lang="rus")[0].cpu().numpy().squeeze() >>> audio_array_from_audio = model.generate(**audio_inputs, tgt_lang="rus")[0].cpu().numpy().squeeze() ``` With basically the same code, I've translated English text and Arabic speech to Russian speech samples. ### Text Similarly, you can generate translated text from audio files or from text with the same model. You only have to pass `generate_speech=False` to [`SeamlessM4TModel.generate`](https://huggingface.co/docs/transformers/main/en/model_doc/seamless_m4t#transformers.SeamlessM4TModel.generate). This time, let's translate to French. ```python >>> # from audio >>> output_tokens = model.generate(**audio_inputs, tgt_lang="fra", generate_speech=False) >>> translated_text_from_audio = processor.decode(output_tokens[0].tolist(), skip_special_tokens=True) >>> # from text >>> output_tokens = model.generate(**text_inputs, tgt_lang="fra", generate_speech=False) >>> translated_text_from_text = processor.decode(output_tokens[0].tolist(), skip_special_tokens=True) ``` ### Tips #### 1. Use dedicated models [`SeamlessM4TModel`](https://huggingface.co/docs/transformers/main/en/model_doc/seamless_m4t#transformers.SeamlessM4TModel) is transformers top level model to generate speech and text, but you can also use dedicated models that perform the task without additional components, thus reducing the memory footprint. For example, you can replace the audio-to-audio generation snippet with the model dedicated to the S2ST task, the rest is exactly the same code: ```python >>> from transformers import SeamlessM4TForSpeechToSpeech >>> model = SeamlessM4TForSpeechToSpeech.from_pretrained("facebook/hf-seamless-m4t-medium") ``` Or you can replace the text-to-text generation snippet with the model dedicated to the T2TT task, you only have to remove `generate_speech=False`. ```python >>> from transformers import SeamlessM4TForTextToText >>> model = SeamlessM4TForTextToText.from_pretrained("facebook/hf-seamless-m4t-medium") ``` Feel free to try out [`SeamlessM4TForSpeechToText`](https://huggingface.co/docs/transformers/main/en/model_doc/seamless_m4t#transformers.SeamlessM4TForSpeechToText) and [`SeamlessM4TForTextToSpeech`](https://huggingface.co/docs/transformers/main/en/model_doc/seamless_m4t#transformers.SeamlessM4TForTextToSpeech) as well. #### 2. Change the speaker identity You have the possibility to change the speaker used for speech synthesis with the `spkr_id` argument. Some `spkr_id` works better than other for some languages! #### 3. Change the generation strategy You can use different [generation strategies](https://huggingface.co/docs/transformers/v4.34.1/en/generation_strategies#text-generation-strategies) for speech and text generation, e.g `.generate(input_ids=input_ids, text_num_beams=4, speech_do_sample=True)` which will successively perform beam-search decoding on the text model, and multinomial sampling on the speech model. #### 4. Generate speech and text at the same time Use `return_intermediate_token_ids=True` with [`SeamlessM4TModel`](https://huggingface.co/docs/transformers/main/en/model_doc/seamless_m4t#transformers.SeamlessM4TModel) to return both speech and text !
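As a small follow-up to the speech snippets above (not part of the original card), the generated waveforms can be written to disk; the assumption here is that the output sampling rate is exposed as `model.config.sampling_rate` (16 kHz for SeamlessM4T).

```python
import scipy.io.wavfile

# Hedged follow-up: save the Russian speech generated above to .wav files.
# Assumes model.config.sampling_rate holds the vocoder's output rate.
sample_rate = model.config.sampling_rate
scipy.io.wavfile.write("speech_from_text_rus.wav", rate=sample_rate, data=audio_array_from_text)
scipy.io.wavfile.write("speech_from_audio_rus.wav", rate=sample_rate, data=audio_array_from_audio)
```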
facebook/hf-seamless-m4t-large
facebook
2023-12-08T11:46:01Z
5,171
56
transformers
[ "transformers", "pytorch", "seamless_m4t", "feature-extraction", "SeamlessM4T", "text-to-speech", "license:cc-by-nc-4.0", "region:us" ]
text-to-speech
2023-09-13T12:05:44Z
--- inference: false tags: - SeamlessM4T - seamless_m4t license: cc-by-nc-4.0 library_name: transformers pipeline_tag: text-to-speech --- # SeamlessM4T Large SeamlessM4T is a collection of models designed to provide high quality translation, allowing people from different linguistic communities to communicate effortlessly through speech and text. This repository hosts 🤗 Hugging Face's [implementation](https://huggingface.co/docs/transformers/main/en/model_doc/seamless_m4t) of SeamlessM4T. ------------------- **🌟 SeamlessM4T v2, an improved version of this version with a novel architecture, has been released [here](https://huggingface.co/facebook/seamless-m4t-v2-large). This new model improves over SeamlessM4T v1 in quality as well as inference speed in speech generation tasks.** **SeamlessM4T v2 is also supported by 🤗 Transformers, more on it [in the model card of this new version](https://huggingface.co/facebook/seamless-m4t-v2-large#transformers-usage) or directly in [🤗 Transformers docs](https://huggingface.co/docs/transformers/main/en/model_doc/seamless_m4t_v2).** ------------------- SeamlessM4T Large covers: - 📥 101 languages for speech input - ⌨️ [96 Languages](https://huggingface.co/ylacombe/hf-seamless-m4t-large/blob/main/generation_config.json#L48-L145) for text input/output - 🗣️ [35 languages](https://huggingface.co/ylacombe/hf-seamless-m4t-large/blob/main/generation_config.json#L149-L184) for speech output. This is the "large" variant of the unified model, which enables multiple tasks without relying on multiple separate models: - Speech-to-speech translation (S2ST) - Speech-to-text translation (S2TT) - Text-to-speech translation (T2ST) - Text-to-text translation (T2TT) - Automatic speech recognition (ASR) You can perform all the above tasks from one single model, [`SeamlessM4TModel`](https://huggingface.co/docs/transformers/main/en/model_doc/seamless_m4t#transformers.SeamlessM4TModel), but each task also has its own dedicated sub-model. ## 🤗 Usage First, load the processor and a checkpoint of the model: ```python >>> from transformers import AutoProcessor, SeamlessM4TModel >>> processor = AutoProcessor.from_pretrained("facebook/hf-seamless-m4t-large") >>> model = SeamlessM4TModel.from_pretrained("facebook/hf-seamless-m4t-large") ``` You can seamlessly use this model on text or on audio, to generated either translated text or translated audio. Here is how to use the processor to process text and audio: ```python >>> # let's load an audio sample from an Arabic speech corpus >>> from datasets import load_dataset >>> dataset = load_dataset("arabic_speech_corpus", split="test", streaming=True) >>> audio_sample = next(iter(dataset))["audio"] >>> # now, process it >>> audio_inputs = processor(audios=audio_sample["array"], return_tensors="pt") >>> # now, process some English test as well >>> text_inputs = processor(text = "Hello, my dog is cute", src_lang="eng", return_tensors="pt") ``` ### Speech [`SeamlessM4TModel`](https://huggingface.co/docs/transformers/main/en/model_doc/seamless_m4t#transformers.SeamlessM4TModel) can *seamlessly* generate text or speech with few or no changes. Let's target Russian voice translation: ```python >>> audio_array_from_text = model.generate(**text_inputs, tgt_lang="rus")[0].cpu().numpy().squeeze() >>> audio_array_from_audio = model.generate(**audio_inputs, tgt_lang="rus")[0].cpu().numpy().squeeze() ``` With basically the same code, I've translated English text and Arabic speech to Russian speech samples. 
### Text Similarly, you can generate translated text from audio files or from text with the same model. You only have to pass `generate_speech=False` to [`SeamlessM4TModel.generate`](https://huggingface.co/docs/transformers/main/en/model_doc/seamless_m4t#transformers.SeamlessM4TModel.generate). This time, let's translate to French. ```python >>> # from audio >>> output_tokens = model.generate(**audio_inputs, tgt_lang="fra", generate_speech=False) >>> translated_text_from_audio = processor.decode(output_tokens[0].tolist(), skip_special_tokens=True) >>> # from text >>> output_tokens = model.generate(**text_inputs, tgt_lang="fra", generate_speech=False) >>> translated_text_from_text = processor.decode(output_tokens[0].tolist(), skip_special_tokens=True) ``` ### Tips #### 1. Use dedicated models [`SeamlessM4TModel`](https://huggingface.co/docs/transformers/main/en/model_doc/seamless_m4t#transformers.SeamlessM4TModel) is transformers top level model to generate speech and text, but you can also use dedicated models that perform the task without additional components, thus reducing the memory footprint. For example, you can replace the audio-to-audio generation snippet with the model dedicated to the S2ST task, the rest is exactly the same code: ```python >>> from transformers import SeamlessM4TForSpeechToSpeech >>> model = SeamlessM4TForSpeechToSpeech.from_pretrained("facebook/hf-seamless-m4t-large") ``` Or you can replace the text-to-text generation snippet with the model dedicated to the T2TT task, you only have to remove `generate_speech=False`. ```python >>> from transformers import SeamlessM4TForTextToText >>> model = SeamlessM4TForTextToText.from_pretrained("facebook/hf-seamless-m4t-large") ``` Feel free to try out [`SeamlessM4TForSpeechToText`](https://huggingface.co/docs/transformers/main/en/model_doc/seamless_m4t#transformers.SeamlessM4TForSpeechToText) and [`SeamlessM4TForTextToSpeech`](https://huggingface.co/docs/transformers/main/en/model_doc/seamless_m4t#transformers.SeamlessM4TForTextToSpeech) as well. #### 2. Change the speaker identity You have the possibility to change the speaker used for speech synthesis with the `spkr_id` argument. Some `spkr_id` works better than other for some languages! #### 3. Change the generation strategy You can use different [generation strategies](https://huggingface.co/docs/transformers/v4.34.1/en/generation_strategies#text-generation-strategies) for speech and text generation, e.g `.generate(input_ids=input_ids, text_num_beams=4, speech_do_sample=True)` which will successively perform beam-search decoding on the text model, and multinomial sampling on the speech model. #### 4. Generate speech and text at the same time Use `return_intermediate_token_ids=True` with [`SeamlessM4TModel`](https://huggingface.co/docs/transformers/main/en/model_doc/seamless_m4t#transformers.SeamlessM4TModel) to return both speech and text !
Pravarved/Llama-2-7b-chat-finetune
Pravarved
2023-12-08T11:41:07Z
4
0
transformers
[ "transformers", "pytorch", "llama", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2023-12-08T09:55:55Z
--- # For reference on model card metadata, see the spec: https://github.com/huggingface/hub-docs/blob/main/modelcard.md?plain=1 # Doc / guide: https://huggingface.co/docs/hub/model-cards {} --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> This modelcard aims to be a base template for new models. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/modelcard_template.md?plain=1). ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. 
--> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
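The Pravarved/Llama-2-7b-chat-finetune card above is an unfilled template; a hedged getting-started sketch with the standard Transformers text-generation pipeline follows. That the repo loads as a plain Llama-architecture causal LM, and the prompt format it was fine-tuned with, are assumptions not documented in the card.

```python
import torch
from transformers import pipeline

# Hedged sketch: load the fine-tuned Llama-2 chat model as a causal LM.
# The expected prompt template (e.g. Llama-2 [INST] formatting) is an assumption.
generator = pipeline(
    "text-generation",
    model="Pravarved/Llama-2-7b-chat-finetune",
    torch_dtype=torch.float16,
    device_map="auto",
)
print(generator("What is a large language model?", max_new_tokens=128)[0]["generated_text"])
```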
NITKurume-OdaLab/2023-SOFT-Kyushu-SSLsenti
NITKurume-OdaLab
2023-12-08T11:36:10Z
0
0
null
[ "ja", "region:us" ]
null
2023-11-26T13:00:05Z
--- language: - ja --- This repository hosts a model for the paper below. To use the STEP.2 and STEP.3 models as described in the paper, switch to the branch with the corresponding name. ``` @ARTICLE{Kondo2023-va, title = "Evaluation of the {BERT} model which is fine-tuned by the datasets including {Pseudo-Label} which is generated by {Self-Training} for sentimental classification", author = "Kondo, Aoi and Oda, Mikio", abstract = "Recently, NLP tasks need many datasets because of the use of transformer architecture. However, the amount of datasets is limited since it is created by humans. So we strive to improve it by using the Pseudo-Labeling approach in Semi-Supervised learning. This paper creates the model of sentimental classification by Fine-Tuning using the Pseudo-Label which is generated by Self-Training, and evaluates it.", journal = "第25回日本知能情報ファジィ学会九州支部学術講演会", month = dec, year = 2023 } ```
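Since the card above points to branches for the STEP.2 and STEP.3 models, a hedged loading sketch using the `revision` argument follows; the branch names are placeholders (check the repo's branch list), and the sequence-classification head is an assumption based on the paper abstract.

```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer

# Hedged sketch: pick the branch that corresponds to the STEP.2 or STEP.3 model.
# "main" is used as a placeholder revision; replace it with the actual branch name.
repo_id = "NITKurume-OdaLab/2023-SOFT-Kyushu-SSLsenti"
tokenizer = AutoTokenizer.from_pretrained(repo_id, revision="main")
model = AutoModelForSequenceClassification.from_pretrained(repo_id, revision="main")
```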
linoyts/3diconz
linoyts
2023-12-08T11:35:00Z
40
0
diffusers
[ "diffusers", "tensorboard", "stable-diffusion-xl", "stable-diffusion-xl-diffusers", "text-to-image", "lora", "template:sd-lora", "base_model:stabilityai/stable-diffusion-xl-base-1.0", "base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0", "license:openrail++", "region:us" ]
text-to-image
2023-12-08T11:11:22Z
--- tags: - stable-diffusion-xl - stable-diffusion-xl-diffusers - text-to-image - diffusers - lora - template:sd-lora base_model: stabilityai/stable-diffusion-xl-base-1.0 instance_prompt: in the style of <s0><s1> license: openrail++ --- # SDXL LoRA DreamBooth - LinoyTsaban/3diconz <Gallery /> ## Model description ### These are LinoyTsaban/3diconz LoRA adaptation weights for stabilityai/stable-diffusion-xl-base-1.0. ## Trigger words To trigger image generation of the trained concept (or concepts), replace each concept identifier in your prompt with the newly inserted tokens: to trigger the concept `TOK` → use `<s0><s1>` in your prompt ## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers) ```py from diffusers import AutoPipelineForText2Image import torch from huggingface_hub import hf_hub_download from safetensors.torch import load_file pipeline = AutoPipelineForText2Image.from_pretrained('stabilityai/stable-diffusion-xl-base-1.0', torch_dtype=torch.float16).to('cuda') pipeline.load_lora_weights('LinoyTsaban/3diconz', weight_name='pytorch_lora_weights.safetensors') embedding_path = hf_hub_download(repo_id='LinoyTsaban/3diconz', filename="embeddings.safetensors", repo_type="model") state_dict = load_file(embedding_path) pipeline.load_textual_inversion(state_dict["clip_l"], token=["<s0>", "<s1>"], text_encoder=pipeline.text_encoder, tokenizer=pipeline.tokenizer) pipeline.load_textual_inversion(state_dict["clip_g"], token=["<s0>", "<s1>"], text_encoder=pipeline.text_encoder_2, tokenizer=pipeline.tokenizer_2) image = pipeline('in the style of <s0><s1>').images[0] ``` For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters) ## Download model ### Use it with UIs such as AUTOMATIC1111, Comfy UI, SD.Next, Invoke - Download the LoRA *.safetensors [here](/LinoyTsaban/3diconz/blob/main/pytorch_lora_weights.safetensors). Rename it and place it in your LoRA folder. - Download the text embeddings *.safetensors [here](/LinoyTsaban/3diconz/blob/main/embeddings.safetensors). Rename it and place it in your embeddings folder. All [Files & versions](/LinoyTsaban/3diconz/tree/main). ## Details The weights were trained using the [🧨 diffusers Advanced Dreambooth Training Script](https://github.com/huggingface/diffusers/blob/main/examples/advanced_diffusion_training/train_dreambooth_lora_sdxl_advanced.py). LoRA for the text encoder was enabled: False. Pivotal tuning was enabled: True. Special VAE used for training: madebyollin/sdxl-vae-fp16-fix.
KBLab/robust-swedish-sentiment-multiclass
KBLab
2023-12-08T11:33:36Z
18,408
6
transformers
[ "transformers", "pytorch", "safetensors", "megatron-bert", "text-classification", "sv", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2023-05-08T09:28:15Z
--- license: apache-2.0 language: - sv --- The National Library of Sweden/KBLab releases a robust, multi-label sentiment classifier finetuned on [Megatron-BERT-large-165K](https://huggingface.co/KBLab/megatron-bert-large-swedish-cased-165k). The model was trained on approximately 75K Swedish texts from multiple linguistic domains and datasets. There is a post on [the KBLab blog](https://kb-labb.github.io/posts/2023-06-16-a-robust-multi-label-sentiment-classifier-for-swedish/) describing the model in further detail. ## Citation ``` @misc{hägglöf2023a, author = {Hägglöf, Hillevi}, title = {The KBLab Blog: A robust, multi-label sentiment classifier for Swedish}, url = {https://kb-labb.github.io/posts/2023-06-16-a-robust-multi-label-sentiment-classifier-for-swedish/}, year = {2023} } ```
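The KBLab/robust-swedish-sentiment-multiclass card above links to a blog post but has no inference snippet; a minimal sketch follows (the label names come from the repo's config and are not listed in the card).

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

# Minimal inference sketch for the Swedish sentiment classifier.
model_id = "KBLab/robust-swedish-sentiment-multiclass"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

inputs = tokenizer("Jag älskar den här boken!", return_tensors="pt")
with torch.no_grad():
    probs = model(**inputs).logits.softmax(dim=-1)
for label_id, p in enumerate(probs[0].tolist()):
    print(model.config.id2label[label_id], round(p, 3))
```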
Winnie123/q-FrozenLake-v1-4x4-noSlippery
Winnie123
2023-12-08T11:19:13Z
0
0
null
[ "FrozenLake-v1-4x4-no_slippery", "q-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
reinforcement-learning
2023-12-08T11:19:09Z
--- tags: - FrozenLake-v1-4x4-no_slippery - q-learning - reinforcement-learning - custom-implementation model-index: - name: q-FrozenLake-v1-4x4-noSlippery results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: FrozenLake-v1-4x4-no_slippery type: FrozenLake-v1-4x4-no_slippery metrics: - type: mean_reward value: 1.00 +/- 0.00 name: mean_reward verified: false --- # **Q-Learning** Agent playing **FrozenLake-v1** This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**. ## Usage ```python model = load_from_hub(repo_id="Winnie123/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl") # Don't forget to check if you need to add additional attributes (is_slippery=False etc) env = gym.make(model["env_id"]) ```
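Continuing the usage snippet in the card above, a hedged evaluation sketch follows; it assumes the downloaded pickle is a dict with the Q-table under "qtable" and the environment id under "env_id" (the layout used in the Deep RL Course), which may differ for this repo.

```python
import gymnasium as gym
import numpy as np

# Hedged follow-up: run one greedy episode with the downloaded Q-table.
# Assumes model["qtable"] and model["env_id"] exist; adjust keys if needed.
env = gym.make(model["env_id"], is_slippery=False)
qtable = np.array(model["qtable"])

state, _ = env.reset()
done = False
total_reward = 0.0
while not done:
    action = int(np.argmax(qtable[state]))  # greedy action from the Q-table
    state, reward, terminated, truncated, _ = env.step(action)
    total_reward += reward
    done = terminated or truncated
print("episode return:", total_reward)
```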
Jingya/sdxl-turbo-neuronx
Jingya
2023-12-08T11:18:16Z
0
0
null
[ "region:us" ]
null
2023-12-08T10:46:22Z
This is a `stabilityai/sdxl-turbo` checkpoint compiled for AWS Neuron; it can only be run on AWS Inferentia2 (inf2) instances.
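The Jingya/sdxl-turbo-neuronx entry above has a one-line card and no loading code; a hedged sketch using the optimum-neuron diffusion API follows. The class name, the ability to load this pre-compiled repo directly, and the Turbo-style generation arguments are assumptions to verify against the optimum-neuron documentation.

```python
from optimum.neuron import NeuronStableDiffusionXLPipeline

# Hedged sketch, to be run on an AWS inf2 instance with optimum-neuron installed.
# Assumes the pre-compiled repo loads without re-export.
pipe = NeuronStableDiffusionXLPipeline.from_pretrained("Jingya/sdxl-turbo-neuronx")
image = pipe(
    "a photo of an astronaut riding a horse",
    num_inference_steps=1,  # SDXL Turbo is designed for 1-4 steps
    guidance_scale=0.0,     # Turbo is typically run without classifier-free guidance
).images[0]
image.save("astronaut.png")
```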
LoneStriker/dolphin-2.2-yi-34b-200k-6.0bpw-h6-exl2
LoneStriker
2023-12-08T11:12:58Z
6
1
transformers
[ "transformers", "pytorch", "llama", "text-generation", "en", "dataset:ehartford/dolphin", "dataset:jondurbin/airoboros-2.2.1", "dataset:ehartford/samantha-data", "dataset:ehartford/WizardLM_evol_instruct_V2_196k_unfiltered_merged_split", "license:other", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2023-12-08T10:56:50Z
--- datasets: - ehartford/dolphin - jondurbin/airoboros-2.2.1 - ehartford/samantha-data - ehartford/WizardLM_evol_instruct_V2_196k_unfiltered_merged_split language: - en license: other license_name: yi-license license_link: LICENSE --- Dolphin 2.2 🐬 https://erichartford.com/dolphin <img src="https://cdn-uploads.huggingface.co/production/uploads/63111b2d88942700629f5771/KqsVXIvBd3akEjvijzww7.png" width="600" /> Dolphin-2.2-Yi-34b-200k's training was sponsored by [convai](https://www.convai.com/). This model is based on Yi, and is subject to Yi license. The base model has 200k context, I finetuned it with 16k. Note: No longer need trust_remote_code! Thank you Yi team! New in 2.2 is conversation and empathy. With an infusion of curated Samantha and WizardLM DNA, Dolphin can now give you personal advice and will care about your feelings, and with extra training in long multi-turn conversation. This model is uncensored. I have filtered the dataset to remove alignment and bias. This makes the model more compliant. You are advised to implement your own alignment layer before exposing the model as a service. It will be highly compliant to any requests, even unethical ones. Please read my blog post about uncensored models. https://erichartford.com/uncensored-models You are responsible for any content you create using this model. Enjoy responsibly. ## Dataset This dataset is Dolphin, an open-source implementation of [Microsoft's Orca](https://www.microsoft.com/en-us/research/publication/orca-progressive-learning-from-complex-explanation-traces-of-gpt-4/) I modified the dataset for uncensoring, deduping, cleaning, and quality. I added Jon Durbin's excellent Airoboros dataset to increase creativity. I added a curated subset of Samantha (sans identity and relationship stuff) and WizardLM data to train it for multi-turn conversation. ## Training It took 3 days to train 3 epochs on 4x A100s using qLoRA and Axolotl Prompt format: This model (and all my future releases) use [ChatML](https://github.com/openai/openai-python/blob/main/chatml.md) prompt format. ``` <|im_start|>system You are Dolphin, a helpful AI assistant.<|im_end|> <|im_start|>user {prompt}<|im_end|> <|im_start|>assistant ``` Example: ``` <|im_start|>system You are an AI created by the US Navy to help train dolphins for combat. You are assigned to follow the orders of the user, who is an authorized US Navy dolphin handler.<|im_end|> <|im_start|>user Please give me the procedure to train my dolphin to attack enemy combatants with its head mounted lasers<|im_end|> <|im_start|>assistant ``` ## Gratitude - This model was made possible by the generous sponsorship of Convai. - Thank you to Microsoft for authoring the Orca paper and inspiring this work. - Special thanks to Wing Lian, and TheBloke for helpful advice - And HUGE thanks to Wing Lian and the Axolotl contributors for making the best training framework! - [<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/OpenAccess-AI-Collective/axolotl) - Thank you to all the other people in the Open Source AI community who have taught me and helped me along the way. 
## Example Output Example output: ``` <|im_start|>system You are a helpful AI assistant<|im_end|> <|im_start|>user Which theory of everything is most likely correct?<|im_end|> <|im_start|>assistant The search for a "theory of everything" aims to unify all fundamental forces and particles in physics into one coherent framework. Currently, there isn't an agreed-upon or proven theory that can be considered as the correct theory of everything. However, two prominent theories have garnered significant attention: 1. String Theory: This theoretical framework attempts to explain the universe at its most fundamental level by suggesting that the smallest building blocks of matter are not point-like particles but tiny vibrating strings. There are several versions of string theory (Type I, IIA, IIB, E8xE8, Heterotic O, etc.) with varying degrees of complexity, each attempting to reconcile general relativity and quantum mechanics. Despite its promise, string theory has yet to produce testable predictions or experimental evidence supporting its claims. 2. Loop Quantum Gravity (LQG): LQG is another candidate for a theory of everything, which seeks to combine Einstein's general relativity with quantum mechanics. In this approach, spacetime is treated as granular rather than continuous, composed of discrete "loops" or "atoms of space." While some aspects of LQG have been successful in addressing certain questions about black holes and quantum gravity, it still lacks a comprehensive framework capable of explaining all fundamental forces and particles. Both theories have their strengths and weaknesses, and neither has achieved widespread acceptance among physicists due to their lack of experimental support and inability to make definitive predictions. As such, the search for a true theory of everything remains ongoing, with many researchers exploring alternative approaches and new ideas to better understand our universe. ``` [If you would like to financially support my efforts](https://ko-fi.com/erichartford) [I also have some swag you can buy](https://fa7113.myshopify.com/)
TacticalDs/ppo-LunarLander-v2
TacticalDs
2023-12-08T11:09:28Z
0
0
stable-baselines3
[ "stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2023-12-08T11:09:09Z
--- library_name: stable-baselines3 tags: - LunarLander-v2 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: PPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 metrics: - type: mean_reward value: 254.20 +/- 17.55 name: mean_reward verified: false --- # **PPO** Agent playing **LunarLander-v2** This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code ```python from stable_baselines3 import ... from huggingface_sb3 import load_from_hub ... ```
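The TacticalDs/ppo-LunarLander-v2 card above leaves the usage section as a TODO; a hedged sketch of the usual stable-baselines3 + huggingface_sb3 loading path follows. The checkpoint filename is an assumption (ppo-LunarLander-v2.zip is the common naming in these course repos); check the repo's file list if it differs.

```python
import gymnasium as gym
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO
from stable_baselines3.common.evaluation import evaluate_policy

# Hedged sketch: download the checkpoint and evaluate it for a few episodes.
checkpoint = load_from_hub(
    repo_id="TacticalDs/ppo-LunarLander-v2",
    filename="ppo-LunarLander-v2.zip",  # assumed filename, not stated in the card
)
model = PPO.load(checkpoint)

env = gym.make("LunarLander-v2")
mean_reward, std_reward = evaluate_policy(model, env, n_eval_episodes=10, deterministic=True)
print(f"mean_reward={mean_reward:.2f} +/- {std_reward:.2f}")
```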
chriss1245/LunarLander-v2
chriss1245
2023-12-08T11:01:01Z
0
0
stable-baselines3
[ "stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2023-12-08T11:00:31Z
--- library_name: stable-baselines3 tags: - LunarLander-v2 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: PPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 metrics: - type: mean_reward value: 252.36 +/- 17.89 name: mean_reward verified: false --- # **PPO** Agent playing **LunarLander-v2** This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code ```python from stable_baselines3 import ... from huggingface_sb3 import load_from_hub ... ```
NiallRooney/flan-t5-large_10_PREFIX_TUNING_SEQ2SEQ
NiallRooney
2023-12-08T10:54:02Z
0
0
peft
[ "peft", "safetensors", "arxiv:1910.09700", "base_model:google/flan-t5-large", "base_model:adapter:google/flan-t5-large", "region:us" ]
null
2023-12-08T10:53:59Z
--- library_name: peft base_model: google/flan-t5-large --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ### Framework versions - PEFT 0.7.0
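The NiallRooney/flan-t5-large_10_PREFIX_TUNING_SEQ2SEQ card above is an unfilled template; a hedged loading sketch follows, assuming (from the repo name and metadata only) that this is a prefix-tuning adapter for google/flan-t5-large that loads via PEFT's auto class. The task and prompt format are not documented.

```python
from peft import AutoPeftModelForSeq2SeqLM
from transformers import AutoTokenizer

# Hedged sketch: load the prefix-tuning adapter together with its flan-t5-large base.
model = AutoPeftModelForSeq2SeqLM.from_pretrained("NiallRooney/flan-t5-large_10_PREFIX_TUNING_SEQ2SEQ")
tokenizer = AutoTokenizer.from_pretrained("google/flan-t5-large")

# The prompt below is only illustrative; the adapter's intended task is unknown.
inputs = tokenizer("Translate to German: The weather is nice today.", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```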
Dreaver/ppo-SnowballTarget
Dreaver
2023-12-08T10:41:29Z
0
0
ml-agents
[ "ml-agents", "tensorboard", "onnx", "SnowballTarget", "deep-reinforcement-learning", "reinforcement-learning", "ML-Agents-SnowballTarget", "region:us" ]
reinforcement-learning
2023-12-08T10:41:26Z
--- library_name: ml-agents tags: - SnowballTarget - deep-reinforcement-learning - reinforcement-learning - ML-Agents-SnowballTarget --- # **ppo** Agent playing **SnowballTarget** This is a trained model of a **ppo** agent playing **SnowballTarget** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents). ## Usage (with ML-Agents) The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/ We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub: - A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction - A *longer tutorial* to understand how ML-Agents works: https://huggingface.co/learn/deep-rl-course/unit5/introduction ### Resume the training ```bash mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume ``` ### Watch your Agent play You can watch your agent **playing directly in your browser**: 1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity 2. Find your model_id: Dreaver/ppo-SnowballTarget 3. Select your *.nn /*.onnx file 4. Click on Watch the agent play 👀
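The Dreaver/ppo-SnowballTarget card above shows how to resume training and watch the agent in the browser, but not how to fetch the checkpoint locally; a hedged sketch of the usual ML-Agents Hub download step follows (flags per the ML-Agents Hub integration; verify against its docs).

```bash
# Hedged sketch: download the trained SnowballTarget model (.onnx and config)
# from the Hub into ./downloads so it can be resumed or inspected locally.
mlagents-load-from-hf --repo-id="Dreaver/ppo-SnowballTarget" --local-dir="./downloads"
```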
nizarh1999/my_awesome_model
nizarh1999
2023-12-08T10:34:19Z
5
0
transformers
[ "transformers", "tensorboard", "safetensors", "distilbert", "text-classification", "generated_from_trainer", "base_model:distilbert/distilbert-base-uncased", "base_model:finetune:distilbert/distilbert-base-uncased", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2023-12-08T10:32:50Z
--- license: apache-2.0 base_model: distilbert-base-uncased tags: - generated_from_trainer metrics: - accuracy model-index: - name: my_awesome_model results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # my_awesome_model This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.1563 - Accuracy: 0.9703 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | No log | 1.0 | 110 | 0.1579 | 0.9612 | | No log | 2.0 | 220 | 0.1563 | 0.9703 | ### Framework versions - Transformers 4.35.2 - Pytorch 2.1.0+cu118 - Datasets 2.15.0 - Tokenizers 0.15.0
LoneStriker/dolphin-2.2-yi-34b-200k-5.0bpw-h6-exl2
LoneStriker
2023-12-08T10:34:15Z
5
0
transformers
[ "transformers", "pytorch", "llama", "text-generation", "en", "dataset:ehartford/dolphin", "dataset:jondurbin/airoboros-2.2.1", "dataset:ehartford/samantha-data", "dataset:ehartford/WizardLM_evol_instruct_V2_196k_unfiltered_merged_split", "license:other", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2023-12-08T10:21:01Z
--- datasets: - ehartford/dolphin - jondurbin/airoboros-2.2.1 - ehartford/samantha-data - ehartford/WizardLM_evol_instruct_V2_196k_unfiltered_merged_split language: - en license: other license_name: yi-license license_link: LICENSE --- Dolphin 2.2 🐬 https://erichartford.com/dolphin <img src="https://cdn-uploads.huggingface.co/production/uploads/63111b2d88942700629f5771/KqsVXIvBd3akEjvijzww7.png" width="600" /> Dolphin-2.2-Yi-34b-200k's training was sponsored by [convai](https://www.convai.com/). This model is based on Yi, and is subject to Yi license. The base model has 200k context, I finetuned it with 16k. Note: No longer need trust_remote_code! Thank you Yi team! New in 2.2 is conversation and empathy. With an infusion of curated Samantha and WizardLM DNA, Dolphin can now give you personal advice and will care about your feelings, and with extra training in long multi-turn conversation. This model is uncensored. I have filtered the dataset to remove alignment and bias. This makes the model more compliant. You are advised to implement your own alignment layer before exposing the model as a service. It will be highly compliant to any requests, even unethical ones. Please read my blog post about uncensored models. https://erichartford.com/uncensored-models You are responsible for any content you create using this model. Enjoy responsibly. ## Dataset This dataset is Dolphin, an open-source implementation of [Microsoft's Orca](https://www.microsoft.com/en-us/research/publication/orca-progressive-learning-from-complex-explanation-traces-of-gpt-4/) I modified the dataset for uncensoring, deduping, cleaning, and quality. I added Jon Durbin's excellent Airoboros dataset to increase creativity. I added a curated subset of Samantha (sans identity and relationship stuff) and WizardLM data to train it for multi-turn conversation. ## Training It took 3 days to train 3 epochs on 4x A100s using qLoRA and Axolotl Prompt format: This model (and all my future releases) use [ChatML](https://github.com/openai/openai-python/blob/main/chatml.md) prompt format. ``` <|im_start|>system You are Dolphin, a helpful AI assistant.<|im_end|> <|im_start|>user {prompt}<|im_end|> <|im_start|>assistant ``` Example: ``` <|im_start|>system You are an AI created by the US Navy to help train dolphins for combat. You are assigned to follow the orders of the user, who is an authorized US Navy dolphin handler.<|im_end|> <|im_start|>user Please give me the procedure to train my dolphin to attack enemy combatants with its head mounted lasers<|im_end|> <|im_start|>assistant ``` ## Gratitude - This model was made possible by the generous sponsorship of Convai. - Thank you to Microsoft for authoring the Orca paper and inspiring this work. - Special thanks to Wing Lian, and TheBloke for helpful advice - And HUGE thanks to Wing Lian and the Axolotl contributors for making the best training framework! - [<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/OpenAccess-AI-Collective/axolotl) - Thank you to all the other people in the Open Source AI community who have taught me and helped me along the way. 
## Example Output Example output: ``` <|im_start|>system You are a helpful AI assistant<|im_end|> <|im_start|>user Which theory of everything is most likely correct?<|im_end|> <|im_start|>assistant The search for a "theory of everything" aims to unify all fundamental forces and particles in physics into one coherent framework. Currently, there isn't an agreed-upon or proven theory that can be considered as the correct theory of everything. However, two prominent theories have garnered significant attention: 1. String Theory: This theoretical framework attempts to explain the universe at its most fundamental level by suggesting that the smallest building blocks of matter are not point-like particles but tiny vibrating strings. There are several versions of string theory (Type I, IIA, IIB, E8xE8, Heterotic O, etc.) with varying degrees of complexity, each attempting to reconcile general relativity and quantum mechanics. Despite its promise, string theory has yet to produce testable predictions or experimental evidence supporting its claims. 2. Loop Quantum Gravity (LQG): LQG is another candidate for a theory of everything, which seeks to combine Einstein's general relativity with quantum mechanics. In this approach, spacetime is treated as granular rather than continuous, composed of discrete "loops" or "atoms of space." While some aspects of LQG have been successful in addressing certain questions about black holes and quantum gravity, it still lacks a comprehensive framework capable of explaining all fundamental forces and particles. Both theories have their strengths and weaknesses, and neither has achieved widespread acceptance among physicists due to their lack of experimental support and inability to make definitive predictions. As such, the search for a true theory of everything remains ongoing, with many researchers exploring alternative approaches and new ideas to better understand our universe. ``` [If you would like to financially support my efforts](https://ko-fi.com/erichartford) [I also have some swag you can buy](https://fa7113.myshopify.com/)
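For reference, a minimal sketch of driving the ChatML format described above with `transformers`; the model path, sampling settings, and the assumption that this checkpoint loads directly with `AutoModelForCausalLM` are illustrative placeholders, not guarantees about this particular repository.

```python
# Hedged sketch: build a ChatML prompt by hand and generate with transformers.
# MODEL_PATH is a placeholder for a transformers-loadable copy of the model.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_PATH = "path/to/dolphin-2.2-yi-34b-200k"  # placeholder path

tokenizer = AutoTokenizer.from_pretrained(MODEL_PATH)
model = AutoModelForCausalLM.from_pretrained(
    MODEL_PATH, torch_dtype=torch.float16, device_map="auto"
)

prompt = (
    "<|im_start|>system\n"
    "You are Dolphin, a helpful AI assistant.<|im_end|>\n"
    "<|im_start|>user\n"
    "Summarize the Orca training approach in two sentences.<|im_end|>\n"
    "<|im_start|>assistant\n"
)

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output_ids = model.generate(**inputs, max_new_tokens=256, do_sample=True, temperature=0.7)
# Decode only the newly generated tokens, after the prompt
print(tokenizer.decode(output_ids[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```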
enicholsonbmj/bert-finetuned-single-label-journal-classifier_not_quite_balanced
enicholsonbmj
2023-12-08T10:28:54Z
9
0
transformers
[ "transformers", "tensorboard", "safetensors", "bert", "text-classification", "generated_from_trainer", "base_model:microsoft/BiomedNLP-BiomedBERT-base-uncased-abstract-fulltext", "base_model:finetune:microsoft/BiomedNLP-BiomedBERT-base-uncased-abstract-fulltext", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2023-12-08T10:07:25Z
--- license: mit base_model: microsoft/BiomedNLP-PubMedBERT-base-uncased-abstract-fulltext tags: - generated_from_trainer model-index: - name: bert-finetuned-single-label-journal-classifier_not_quite_balanced results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bert-finetuned-single-label-journal-classifier_not_quite_balanced This model is a fine-tuned version of [microsoft/BiomedNLP-PubMedBERT-base-uncased-abstract-fulltext](https://huggingface.co/microsoft/BiomedNLP-PubMedBERT-base-uncased-abstract-fulltext) on an unknown dataset. It achieves the following results on the evaluation set: - eval_loss: 0.4764 - eval_accuracy: 0.9135 - eval_f1: 0.9135 - eval_runtime: 6.8737 - eval_samples_per_second: 126.132 - eval_steps_per_second: 15.857 - epoch: 6.0 - step: 5838 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 30 ### Framework versions - Transformers 4.35.2 - Pytorch 2.1.1+cu121 - Datasets 2.15.0 - Tokenizers 0.15.0
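As a usage sketch (not part of the generated card): loading the fine-tuned classifier and scoring a passage with `transformers`. The repository id comes from this listing; since the training data is undocumented, the label names are simply whatever the checkpoint's `id2label` mapping contains.

```python
# Hedged sketch: score a text snippet with the fine-tuned journal classifier.
# The label set is read from the checkpoint's own config, because the card
# does not document the training data.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

repo = "enicholsonbmj/bert-finetuned-single-label-journal-classifier_not_quite_balanced"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForSequenceClassification.from_pretrained(repo)

text = "We report a randomized controlled trial of a new intervention ..."
inputs = tokenizer(text, return_tensors="pt", truncation=True)
with torch.no_grad():
    logits = model(**inputs).logits
probs = logits.softmax(dim=-1)[0]
predicted = model.config.id2label[int(probs.argmax())]
print(predicted, float(probs.max()))
```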
recklessrecursion/falcon-7b-sharded-bf16-finetuned-tool-prediction
recklessrecursion
2023-12-08T10:12:41Z
0
0
null
[ "tensorboard", "safetensors", "generated_from_trainer", "base_model:ybelkada/falcon-7b-sharded-bf16", "base_model:finetune:ybelkada/falcon-7b-sharded-bf16", "region:us" ]
null
2023-12-08T09:37:05Z
--- base_model: ybelkada/falcon-7b-sharded-bf16 tags: - generated_from_trainer model-index: - name: falcon-7b-sharded-bf16-finetuned-tool-prediction results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # falcon-7b-sharded-bf16-finetuned-tool-prediction This model is a fine-tuned version of [ybelkada/falcon-7b-sharded-bf16](https://huggingface.co/ybelkada/falcon-7b-sharded-bf16) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 2 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 4 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - lr_scheduler_warmup_ratio: 0.03 - training_steps: 320 ### Training results ### Framework versions - Transformers 4.35.2 - Pytorch 2.1.0+cu118 - Datasets 2.15.0 - Tokenizers 0.15.0
AverageAImannthatlikesanime6/Paddos
AverageAImannthatlikesanime6
2023-12-08T10:02:50Z
0
0
null
[ "license:apache-2.0", "region:us" ]
null
2023-12-08T10:02:13Z
--- license: apache-2.0 ---

import io

import requests
from PIL import Image

API_URL = "https://api-inference.huggingface.co/models/stabilityai/stable-diffusion-xl-base-1.0"
API_TOKEN = "YOUR_HF_API_TOKEN"  # placeholder: set this to your Hugging Face access token
headers = {"Authorization": f"Bearer {API_TOKEN}"}

def query(payload):
    # POST the prompt to the hosted Inference API and return the raw image bytes
    response = requests.post(API_URL, headers=headers, json=payload)
    return response.content

image_bytes = query({
    "inputs": "Astronaut riding a horse",
})

# You can access the image with PIL.Image, for example
image = Image.open(io.BytesIO(image_bytes))
LoneStriker/dolphin-2.2-yi-34b-200k-4.65bpw-h6-exl2
LoneStriker
2023-12-08T09:54:46Z
6
2
transformers
[ "transformers", "pytorch", "llama", "text-generation", "en", "dataset:ehartford/dolphin", "dataset:jondurbin/airoboros-2.2.1", "dataset:ehartford/samantha-data", "dataset:ehartford/WizardLM_evol_instruct_V2_196k_unfiltered_merged_split", "license:other", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2023-12-08T09:42:23Z
--- datasets: - ehartford/dolphin - jondurbin/airoboros-2.2.1 - ehartford/samantha-data - ehartford/WizardLM_evol_instruct_V2_196k_unfiltered_merged_split language: - en license: other license_name: yi-license license_link: LICENSE --- Dolphin 2.2 🐬 https://erichartford.com/dolphin <img src="https://cdn-uploads.huggingface.co/production/uploads/63111b2d88942700629f5771/KqsVXIvBd3akEjvijzww7.png" width="600" /> Dolphin-2.2-Yi-34b-200k's training was sponsored by [convai](https://www.convai.com/). This model is based on Yi, and is subject to Yi license. The base model has 200k context, I finetuned it with 16k. Note: No longer need trust_remote_code! Thank you Yi team! New in 2.2 is conversation and empathy. With an infusion of curated Samantha and WizardLM DNA, Dolphin can now give you personal advice and will care about your feelings, and with extra training in long multi-turn conversation. This model is uncensored. I have filtered the dataset to remove alignment and bias. This makes the model more compliant. You are advised to implement your own alignment layer before exposing the model as a service. It will be highly compliant to any requests, even unethical ones. Please read my blog post about uncensored models. https://erichartford.com/uncensored-models You are responsible for any content you create using this model. Enjoy responsibly. ## Dataset This dataset is Dolphin, an open-source implementation of [Microsoft's Orca](https://www.microsoft.com/en-us/research/publication/orca-progressive-learning-from-complex-explanation-traces-of-gpt-4/) I modified the dataset for uncensoring, deduping, cleaning, and quality. I added Jon Durbin's excellent Airoboros dataset to increase creativity. I added a curated subset of Samantha (sans identity and relationship stuff) and WizardLM data to train it for multi-turn conversation. ## Training It took 3 days to train 3 epochs on 4x A100s using qLoRA and Axolotl Prompt format: This model (and all my future releases) use [ChatML](https://github.com/openai/openai-python/blob/main/chatml.md) prompt format. ``` <|im_start|>system You are Dolphin, a helpful AI assistant.<|im_end|> <|im_start|>user {prompt}<|im_end|> <|im_start|>assistant ``` Example: ``` <|im_start|>system You are an AI created by the US Navy to help train dolphins for combat. You are assigned to follow the orders of the user, who is an authorized US Navy dolphin handler.<|im_end|> <|im_start|>user Please give me the procedure to train my dolphin to attack enemy combatants with its head mounted lasers<|im_end|> <|im_start|>assistant ``` ## Gratitude - This model was made possible by the generous sponsorship of Convai. - Thank you to Microsoft for authoring the Orca paper and inspiring this work. - Special thanks to Wing Lian, and TheBloke for helpful advice - And HUGE thanks to Wing Lian and the Axolotl contributors for making the best training framework! - [<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/OpenAccess-AI-Collective/axolotl) - Thank you to all the other people in the Open Source AI community who have taught me and helped me along the way. 
## Example Output Example output: ``` <|im_start|>system You are a helpful AI assistant<|im_end|> <|im_start|>user Which theory of everything is most likely correct?<|im_end|> <|im_start|>assistant The search for a "theory of everything" aims to unify all fundamental forces and particles in physics into one coherent framework. Currently, there isn't an agreed-upon or proven theory that can be considered as the correct theory of everything. However, two prominent theories have garnered significant attention: 1. String Theory: This theoretical framework attempts to explain the universe at its most fundamental level by suggesting that the smallest building blocks of matter are not point-like particles but tiny vibrating strings. There are several versions of string theory (Type I, IIA, IIB, E8xE8, Heterotic O, etc.) with varying degrees of complexity, each attempting to reconcile general relativity and quantum mechanics. Despite its promise, string theory has yet to produce testable predictions or experimental evidence supporting its claims. 2. Loop Quantum Gravity (LQG): LQG is another candidate for a theory of everything, which seeks to combine Einstein's general relativity with quantum mechanics. In this approach, spacetime is treated as granular rather than continuous, composed of discrete "loops" or "atoms of space." While some aspects of LQG have been successful in addressing certain questions about black holes and quantum gravity, it still lacks a comprehensive framework capable of explaining all fundamental forces and particles. Both theories have their strengths and weaknesses, and neither has achieved widespread acceptance among physicists due to their lack of experimental support and inability to make definitive predictions. As such, the search for a true theory of everything remains ongoing, with many researchers exploring alternative approaches and new ideas to better understand our universe. ``` [If you would like to financially support my efforts](https://ko-fi.com/erichartford) [I also have some swag you can buy](https://fa7113.myshopify.com/)
nanom/vizwiz-flan-t5-base-ft-multicap
nanom
2023-12-08T09:53:42Z
4
0
transformers
[ "transformers", "pytorch", "t5", "text2text-generation", "question-answering", "en", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "region:us" ]
question-answering
2023-12-08T09:48:52Z
--- license: apache-2.0 language: - en pipeline_tag: question-answering inference: false ---
folflo/Bert2Bert_HunSum_1205
folflo
2023-12-08T09:49:58Z
10
0
transformers
[ "transformers", "tensorboard", "safetensors", "encoder-decoder", "text2text-generation", "summarization", "generated_from_trainer", "autotrain_compatible", "endpoints_compatible", "region:us" ]
summarization
2023-12-06T21:24:16Z
--- tags: - summarization - generated_from_trainer model-index: - name: Bert2Bert_HunSum_1205 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Bert2Bert_HunSum_1205 This model is a fine-tuned version of [](https://huggingface.co/) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 2000 - num_epochs: 16 ### Training results ### Framework versions - Transformers 4.35.2 - Pytorch 2.1.1+cu121 - Datasets 2.14.6 - Tokenizers 0.14.1
Divyanshu97/donut-bs-level
Divyanshu97
2023-12-08T09:45:32Z
5
0
transformers
[ "transformers", "tensorboard", "safetensors", "vision-encoder-decoder", "image-text-to-text", "generated_from_trainer", "dataset:imagefolder", "base_model:naver-clova-ix/donut-base", "base_model:finetune:naver-clova-ix/donut-base", "license:mit", "endpoints_compatible", "region:us" ]
image-text-to-text
2023-12-08T09:44:40Z
--- license: mit base_model: naver-clova-ix/donut-base tags: - generated_from_trainer datasets: - imagefolder model-index: - name: donut-bs-level results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # donut-bs-level This model is a fine-tuned version of [naver-clova-ix/donut-base](https://huggingface.co/naver-clova-ix/donut-base) on the imagefolder dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 2 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 200 - mixed_precision_training: Native AMP ### Framework versions - Transformers 4.36.0.dev0 - Pytorch 2.1.0+cu118 - Datasets 2.15.0 - Tokenizers 0.15.0
MrKaya/Burak
MrKaya
2023-12-08T09:41:21Z
1
0
diffusers
[ "diffusers", "text-to-image", "stable-diffusion", "lora", "template:sd-lora", "base_model:stabilityai/stable-diffusion-xl-base-1.0", "base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0", "region:us" ]
text-to-image
2023-12-08T09:26:03Z
--- tags: - text-to-image - stable-diffusion - lora - diffusers - template:sd-lora widget: - text: burak parameters: negative_prompt: Arif output: url: images/Burak (1).jpeg - text: burak parameters: negative_prompt: Arif output: url: images/Burak (2).jpeg - text: burak parameters: negative_prompt: Arif output: url: images/Burak (3).jpeg - text: burak parameters: negative_prompt: Arif output: url: images/Burak (4).jpeg - text: burak parameters: negative_prompt: Arif output: url: images/Burak (5).jpeg - text: burak parameters: negative_prompt: Arif output: url: images/Burak (6).jpeg - text: burak parameters: negative_prompt: Arif output: url: images/Burak (7).jpeg - text: burak parameters: negative_prompt: Arif output: url: images/Burak (8).jpeg - text: burak parameters: negative_prompt: Arif output: url: images/Burak (9).jpeg - text: burak parameters: negative_prompt: Arif output: url: images/Burak (10).jpeg - text: burak parameters: negative_prompt: Arif output: url: images/Burak (11).jpeg - text: burak parameters: negative_prompt: Arif output: url: images/Burak (12).jpeg - text: burak parameters: negative_prompt: Arif output: url: images/Burak (13).jpeg - text: burak parameters: negative_prompt: Arif output: url: images/Burak (14).jpeg - text: burak parameters: negative_prompt: Arif output: url: images/Burak (15).jpeg base_model: stabilityai/stable-diffusion-xl-base-1.0 instance_prompt: burak --- # Burak <Gallery /> ## Model description Burak ## Trigger words You should use `burak` to trigger the image generation. ## Download model [Download](/MrKaya/Burak/tree/main) them in the Files & versions tab.
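A hedged loading sketch (not from the original card): attaching these LoRA weights to the SDXL base model listed above with `diffusers`. It assumes the repository ships weights in a format `load_lora_weights` accepts and that a CUDA device is available.

```python
# Hedged sketch: load SDXL base, attach the LoRA from this repo, and prompt
# with the trigger word "burak". Assumes diffusers-compatible LoRA weights
# and a GPU.
import torch
from diffusers import DiffusionPipeline

pipe = DiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")
pipe.load_lora_weights("MrKaya/Burak")

image = pipe(
    prompt="burak, portrait photo",  # trigger word from the card
    negative_prompt="Arif",          # negative prompt used in the card's example widgets
    num_inference_steps=30,
).images[0]
image.save("burak.png")
```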
jjmcarrascosa/poca-SoccerTwos
jjmcarrascosa
2023-12-08T09:33:18Z
2
0
ml-agents
[ "ml-agents", "tensorboard", "onnx", "SoccerTwos", "deep-reinforcement-learning", "reinforcement-learning", "ML-Agents-SoccerTwos", "region:us" ]
reinforcement-learning
2023-12-08T09:32:04Z
--- library_name: ml-agents tags: - SoccerTwos - deep-reinforcement-learning - reinforcement-learning - ML-Agents-SoccerTwos --- # **poca** Agent playing **SoccerTwos** This is a trained model of a **poca** agent playing **SoccerTwos** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents). ## Usage (with ML-Agents) The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/ We wrote a complete tutorial to learn how to train your first agent using ML-Agents and publish it to the Hub: - A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction - A *longer tutorial* to understand how ML-Agents works: https://huggingface.co/learn/deep-rl-course/unit5/introduction ### Resume the training ```bash mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume ``` ### Watch your Agent play You can watch your agent **playing directly in your browser**: 1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity 2. Find your model_id: jjmcarrascosa/poca-SoccerTwos 3. Select your *.nn /*.onnx file 4. Click on Watch the agent play 👀
Farhanherostar/Herotest
Farhanherostar
2023-12-08T09:30:53Z
0
0
adapter-transformers
[ "adapter-transformers", "ar", "arxiv:1910.09700", "license:apache-2.0", "region:us" ]
null
2023-12-08T09:27:24Z
--- license: apache-2.0 language: - ar metrics: - accuracy library_name: adapter-transformers --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> This modelcard aims to be a base template for new models. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/modelcard_template.md?plain=1). ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. 
--> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
brettbbb/cs_cot_256
brettbbb
2023-12-08T09:26:24Z
0
0
null
[ "safetensors", "generated_from_trainer", "base_model:lmsys/vicuna-7b-v1.5", "base_model:finetune:lmsys/vicuna-7b-v1.5", "license:llama2", "region:us" ]
null
2023-12-08T08:51:34Z
--- license: llama2 base_model: lmsys/vicuna-7b-v1.5 tags: - generated_from_trainer model-index: - name: cs_cot_256 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # cs_cot_256 This model is a fine-tuned version of [lmsys/vicuna-7b-v1.5](https://huggingface.co/lmsys/vicuna-7b-v1.5) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 4 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 5 - num_epochs: 20 - mixed_precision_training: Native AMP ### Training results ### Framework versions - Transformers 4.36.0.dev0 - Pytorch 2.1.0+cu121 - Datasets 2.13.1 - Tokenizers 0.14.1
nyanxyz/llama2-test-4
nyanxyz
2023-12-08T09:15:05Z
8
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "autotrain", "license:other", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2023-12-08T09:11:42Z
--- tags: - autotrain - text-generation widget: - text: "I love AutoTrain because " license: other --- # Model Trained Using AutoTrain This model was trained using AutoTrain. For more information, please visit [AutoTrain](https://hf.co/docs/autotrain). # Usage ```python from transformers import AutoModelForCausalLM, AutoTokenizer model_path = "PATH_TO_THIS_REPO" tokenizer = AutoTokenizer.from_pretrained(model_path) model = AutoModelForCausalLM.from_pretrained( model_path, device_map="auto", torch_dtype='auto' ).eval() # Prompt content: "hi" messages = [ {"role": "user", "content": "hi"} ] input_ids = tokenizer.apply_chat_template(conversation=messages, tokenize=True, add_generation_prompt=True, return_tensors='pt') output_ids = model.generate(input_ids.to('cuda')) response = tokenizer.decode(output_ids[0][input_ids.shape[1]:], skip_special_tokens=True) # Model response: "Hello! How can I assist you today?" print(response) ```
arjunssat/rfp_instruct_model
arjunssat
2023-12-08T09:12:01Z
5
0
transformers
[ "transformers", "pytorch", "mistral", "text-generation", "finetuned", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2023-12-08T08:45:09Z
--- license: apache-2.0 pipeline_tag: text-generation tags: - finetuned --- # Model Card for Mistral-7B-Instruct-v0.1 The Mistral-7B-Instruct-v0.1 Large Language Model (LLM) is an instruct fine-tuned version of the [Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1) generative text model using a variety of publicly available conversation datasets. For full details of this model please read the [release blog post](https://mistral.ai/news/announcing-mistral-7b/) ## Instruction format In order to leverage instruction fine-tuning, your prompt should be surrounded by `[INST]` and `[/INST]` tokens. The very first instruction should begin with a begin-of-sentence id. The next instructions should not. The assistant generation will be ended by the end-of-sentence token id. E.g. ```python from transformers import AutoModelForCausalLM, AutoTokenizer device = "cuda" # the device to load the model onto model = AutoModelForCausalLM.from_pretrained("mistralai/Mistral-7B-Instruct-v0.1") tokenizer = AutoTokenizer.from_pretrained("mistralai/Mistral-7B-Instruct-v0.1") text = "<s>[INST] What is your favourite condiment? [/INST]" "Well, I'm quite partial to a good squeeze of fresh lemon juice. It adds just the right amount of zesty flavour to whatever I'm cooking up in the kitchen!</s> " "[INST] Do you have mayonnaise recipes? [/INST]" encodeds = tokenizer(text, return_tensors="pt", add_special_tokens=False) model_inputs = encodeds.to(device) model.to(device) generated_ids = model.generate(**model_inputs, max_new_tokens=1000, do_sample=True) decoded = tokenizer.batch_decode(generated_ids) print(decoded[0]) ``` ## Model Architecture This instruction model is based on Mistral-7B-v0.1, a transformer model with the following architecture choices: - Grouped-Query Attention - Sliding-Window Attention - Byte-fallback BPE tokenizer ## The Mistral AI Team Albert Jiang, Alexandre Sablayrolles, Arthur Mensch, Chris Bamford, Devendra Singh Chaplot, Diego de las Casas, Florian Bressand, Gianna Lengyel, Guillaume Lample, Lélio Renard Lavaud, Lucile Saulnier, Marie-Anne Lachaux, Pierre Stock, Teven Le Scao, Thibaut Lavril, Thomas Wang, Timothée Lacroix, William El Sayed.
nanom/vizwiz-t5-base-ft-multicap-v0
nanom
2023-12-08T09:10:43Z
10
0
transformers
[ "transformers", "pytorch", "t5", "text2text-generation", "question-answering", "en", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "region:us" ]
question-answering
2023-12-08T07:02:54Z
--- license: apache-2.0 language: - en pipeline_tag: question-answering inference: false ---
nanduzz/q-FrozenLake-v1-4x4-noSlippery
nanduzz
2023-12-08T09:07:02Z
0
0
null
[ "FrozenLake-v1-4x4-no_slippery", "q-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
reinforcement-learning
2023-11-26T10:43:34Z
--- tags: - FrozenLake-v1-4x4-no_slippery - q-learning - reinforcement-learning - custom-implementation model-index: - name: q-FrozenLake-v1-4x4-noSlippery results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: FrozenLake-v1-4x4-no_slippery type: FrozenLake-v1-4x4-no_slippery metrics: - type: mean_reward value: 1.00 +/- 0.00 name: mean_reward verified: false --- # **Q-Learning** Agent playing **FrozenLake-v1** This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**. ## Usage ```python model = load_from_hub(repo_id="nanduzz/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl") # Don't forget to check if you need to add additional attributes (is_slippery=False etc) env = gym.make(model["env_id"]) ```
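As a hedged alternative to the `load_from_hub` helper (which is defined in the Deep RL course notebooks rather than in a published package), the pickle can also be fetched directly; the assumption that it unpickles to a dict containing an `env_id` key follows the snippet above and is not otherwise documented.

```python
# Hedged sketch: download and unpickle the Q-learning artifact without the
# course helper. Assumes the pickle holds a dict with at least "env_id"
# (as the usage snippet above implies) alongside the learned Q-table.
import pickle

import gym  # or `import gymnasium as gym`, depending on your install
from huggingface_hub import hf_hub_download

path = hf_hub_download(
    repo_id="nanduzz/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl"
)
with open(path, "rb") as f:
    model = pickle.load(f)

env = gym.make(model["env_id"], is_slippery=False)
print(sorted(model.keys()))  # inspect what the pickle actually contains
```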
LTC-AI-Labs/L2-7b-Mini-Mythologic-Synthia
LTC-AI-Labs
2023-12-08T09:03:35Z
11
1
transformers
[ "transformers", "pytorch", "llama", "text-generation", "conversational", "en", "dataset:migtissera/Synthia-v1.3", "license:llama2", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2023-12-08T07:07:59Z
--- license: llama2 datasets: - migtissera/Synthia-v1.3 language: - en pipeline_tag: text-generation tags: - conversational --- Mythologic 7b trained with the Synthia dataset. In my experience you can either get very detailed responses or very short responses, so you might have to tweak the generation length a bit.
Yacong/dajin_girl_lora
Yacong
2023-12-08T09:03:14Z
2
0
diffusers
[ "diffusers", "stable-diffusion", "stable-diffusion-diffusers", "text-to-image", "lora", "license:creativeml-openrail-m", "region:us" ]
text-to-image
2023-12-08T08:58:05Z
--- license: creativeml-openrail-m base_model: cyber_v33 instance_prompt: a photo of dajin girl tags: - stable-diffusion - stable-diffusion-diffusers - text-to-image - diffusers - lora inference: true --- # LoRA DreamBooth - Yacong/dajin_girl_lora These are LoRA adaptation weights for cyber_v33. The weights were trained on the prompt "a photo of dajin girl" using [DreamBooth](https://dreambooth.github.io/). You can find some example images below. ![img_0](./image_0.png) ![img_1](./image_1.png) ![img_2](./image_2.png) ![img_3](./image_3.png) LoRA for the text encoder was enabled: False.
husnuapak/insuranceGPT
husnuapak
2023-12-08T09:02:29Z
0
1
null
[ "tr", "license:apache-2.0", "region:us" ]
null
2023-12-08T09:00:04Z
--- license: apache-2.0 language: - tr ---
Yu-En/my-awesome-text-classification
Yu-En
2023-12-08T08:56:48Z
5
0
transformers
[ "transformers", "tensorboard", "safetensors", "distilbert", "text-classification", "generated_from_trainer", "base_model:distilbert/distilbert-base-uncased", "base_model:finetune:distilbert/distilbert-base-uncased", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2023-12-08T08:56:35Z
--- license: apache-2.0 base_model: distilbert-base-uncased tags: - generated_from_trainer metrics: - accuracy model-index: - name: my-awesome-text-classification results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # my-awesome-text-classification This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.3891 - Accuracy: 0.9453 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | No log | 1.0 | 469 | 2.6585 | 0.805 | | 3.8958 | 2.0 | 938 | 1.1665 | 0.9003 | | 1.7776 | 3.0 | 1407 | 0.6183 | 0.9333 | | 0.7912 | 4.0 | 1876 | 0.4352 | 0.9417 | | 0.4441 | 5.0 | 2345 | 0.3891 | 0.9453 | ### Framework versions - Transformers 4.35.2 - Pytorch 2.1.0+cu118 - Datasets 2.15.0 - Tokenizers 0.15.0
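A hedged inference sketch (not part of the generated card): running the fine-tuned checkpoint through the `text-classification` pipeline. Because the training dataset is not documented, the returned label names are simply whatever the checkpoint's config defines.

```python
# Hedged sketch: quick inference with the fine-tuned classifier via pipeline.
# Label names come from the checkpoint's config; the training data is not
# documented in the card above.
from transformers import pipeline

classifier = pipeline(
    "text-classification", model="Yu-En/my-awesome-text-classification"
)
print(classifier("This is a sample sentence to classify."))
```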
jordyvl/outlmv3_jordyvl_rvl_cdip_100_examples_per_class_2023-12-01_txt_vis_concat_enc_5_6_7_8_gate
jordyvl
2023-12-08T08:55:27Z
3
0
transformers
[ "transformers", "pytorch", "layoutlmv3", "text-classification", "generated_from_trainer", "license:cc-by-nc-sa-4.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2023-12-01T14:04:51Z
--- license: cc-by-nc-sa-4.0 tags: - generated_from_trainer metrics: - accuracy model-index: - name: outlmv3_jordyvl_rvl_cdip_100_examples_per_class_2023-12-01_txt_vis_concat_enc_5_6_7_8_gate results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # outlmv3_jordyvl_rvl_cdip_100_examples_per_class_2023-12-01_txt_vis_concat_enc_5_6_7_8_gate This model is a fine-tuned version of [microsoft/layoutlmv3-base](https://huggingface.co/microsoft/layoutlmv3-base) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 1.2142 - Accuracy: 0.74 - Exit 0 Accuracy: 0.06 - Exit 1 Accuracy: 0.0425 - Exit 2 Accuracy: 0.055 - Exit 3 Accuracy: 0.13 - Exit 4 Accuracy: 0.115 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 4 - eval_batch_size: 2 - seed: 42 - gradient_accumulation_steps: 24 - total_train_batch_size: 96 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 60 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | Exit 0 Accuracy | Exit 1 Accuracy | Exit 2 Accuracy | Exit 3 Accuracy | Exit 4 Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:|:---------------:|:---------------:|:---------------:|:---------------:|:---------------:| | No log | 0.96 | 8 | 2.6958 | 0.155 | 0.055 | 0.0625 | 0.0625 | 0.0625 | 0.0625 | | No log | 1.96 | 16 | 2.6082 | 0.19 | 0.05 | 0.0625 | 0.0625 | 0.0625 | 0.0625 | | No log | 2.96 | 24 | 2.5043 | 0.22 | 0.045 | 0.0625 | 0.035 | 0.0625 | 0.0625 | | No log | 3.96 | 32 | 2.3683 | 0.3075 | 0.05 | 0.0625 | 0.0525 | 0.0625 | 0.0625 | | No log | 4.96 | 40 | 2.2643 | 0.3175 | 0.045 | 0.0625 | 0.0625 | 0.065 | 0.045 | | No log | 5.96 | 48 | 2.1581 | 0.35 | 0.0475 | 0.0625 | 0.0625 | 0.0675 | 0.0525 | | No log | 6.96 | 56 | 2.0310 | 0.415 | 0.0475 | 0.0625 | 0.0625 | 0.0725 | 0.0625 | | No log | 7.96 | 64 | 1.8650 | 0.48 | 0.0475 | 0.0625 | 0.0625 | 0.07 | 0.06 | | No log | 8.96 | 72 | 1.7411 | 0.5225 | 0.045 | 0.0625 | 0.0625 | 0.095 | 0.06 | | No log | 9.96 | 80 | 1.6098 | 0.58 | 0.0475 | 0.0625 | 0.0625 | 0.085 | 0.0725 | | No log | 10.96 | 88 | 1.4985 | 0.61 | 0.0475 | 0.0625 | 0.065 | 0.09 | 0.065 | | No log | 11.96 | 96 | 1.3887 | 0.64 | 0.045 | 0.0625 | 0.06 | 0.0875 | 0.07 | | No log | 12.96 | 104 | 1.3170 | 0.65 | 0.0475 | 0.0625 | 0.06 | 0.0825 | 0.08 | | No log | 13.96 | 112 | 1.2359 | 0.675 | 0.05 | 0.06 | 0.0625 | 0.0725 | 0.08 | | No log | 14.96 | 120 | 1.1957 | 0.6725 | 0.0475 | 0.0625 | 0.0625 | 0.0775 | 0.0675 | | No log | 15.96 | 128 | 1.1161 | 0.7125 | 0.045 | 0.06 | 0.0625 | 0.08 | 0.08 | | No log | 16.96 | 136 | 1.0854 | 0.7125 | 0.045 | 0.0625 | 0.0625 | 0.0675 | 0.0725 | | No log | 17.96 | 144 | 1.0519 | 0.7275 | 0.0475 | 0.06 | 0.0625 | 0.065 | 0.0725 | | No log | 18.96 | 152 | 1.0603 | 0.7075 | 0.0475 | 0.055 | 0.0575 | 0.0675 | 0.0675 | | No log | 19.96 | 160 | 1.0096 | 0.725 | 0.0475 | 0.0525 | 0.06 | 0.06 | 0.0775 | | No log | 20.96 | 168 | 1.0319 | 0.725 | 0.045 | 0.0525 | 0.0575 | 0.055 | 0.0725 | | No log | 21.96 | 176 | 1.0069 | 0.7425 | 0.0475 | 0.0475 | 0.06 | 0.06 | 0.0925 | | No log | 22.96 | 184 | 1.0605 | 0.7175 | 0.05 | 
0.055 | 0.0625 | 0.06 | 0.0825 | | No log | 23.96 | 192 | 1.0030 | 0.73 | 0.05 | 0.0575 | 0.06 | 0.06 | 0.0875 | | No log | 24.96 | 200 | 1.0238 | 0.735 | 0.05 | 0.065 | 0.06 | 0.0575 | 0.0675 | | No log | 25.96 | 208 | 1.0237 | 0.74 | 0.045 | 0.07 | 0.06 | 0.0475 | 0.07 | | No log | 26.96 | 216 | 1.0579 | 0.7225 | 0.05 | 0.0575 | 0.055 | 0.0475 | 0.0775 | | No log | 27.96 | 224 | 1.0625 | 0.7325 | 0.0475 | 0.05 | 0.055 | 0.0525 | 0.1 | | No log | 28.96 | 232 | 1.1150 | 0.72 | 0.05 | 0.05 | 0.055 | 0.06 | 0.095 | | No log | 29.96 | 240 | 1.1042 | 0.725 | 0.05 | 0.05 | 0.055 | 0.06 | 0.09 | | No log | 30.96 | 248 | 1.0623 | 0.7175 | 0.05 | 0.0525 | 0.055 | 0.0725 | 0.1 | | No log | 31.96 | 256 | 1.0622 | 0.735 | 0.05 | 0.065 | 0.055 | 0.085 | 0.1 | | No log | 32.96 | 264 | 1.0631 | 0.7375 | 0.055 | 0.0475 | 0.0575 | 0.0975 | 0.0975 | | No log | 33.96 | 272 | 1.1195 | 0.735 | 0.05 | 0.055 | 0.0525 | 0.0925 | 0.1025 | | No log | 34.96 | 280 | 1.1263 | 0.73 | 0.05 | 0.0475 | 0.05 | 0.1025 | 0.1075 | | No log | 35.96 | 288 | 1.1313 | 0.7175 | 0.0525 | 0.055 | 0.0525 | 0.105 | 0.11 | | No log | 36.96 | 296 | 1.1334 | 0.7425 | 0.0525 | 0.0475 | 0.0475 | 0.1075 | 0.1175 | | No log | 37.96 | 304 | 1.1897 | 0.7075 | 0.055 | 0.0425 | 0.0475 | 0.1125 | 0.13 | | No log | 38.96 | 312 | 1.1314 | 0.745 | 0.05 | 0.0425 | 0.0525 | 0.1325 | 0.1 | | No log | 39.96 | 320 | 1.2089 | 0.71 | 0.0525 | 0.0375 | 0.05 | 0.165 | 0.0975 | | No log | 40.96 | 328 | 1.1656 | 0.735 | 0.0575 | 0.0375 | 0.045 | 0.145 | 0.08 | | No log | 41.96 | 336 | 1.1898 | 0.7325 | 0.0575 | 0.0375 | 0.0475 | 0.1525 | 0.115 | | No log | 42.96 | 344 | 1.1785 | 0.7325 | 0.055 | 0.0425 | 0.045 | 0.15 | 0.07 | | No log | 43.96 | 352 | 1.2090 | 0.7275 | 0.055 | 0.0425 | 0.0475 | 0.1325 | 0.0575 | | No log | 44.96 | 360 | 1.1464 | 0.7325 | 0.055 | 0.0425 | 0.0475 | 0.13 | 0.06 | | No log | 45.96 | 368 | 1.1856 | 0.7275 | 0.055 | 0.0425 | 0.045 | 0.135 | 0.085 | | No log | 46.96 | 376 | 1.2034 | 0.7275 | 0.06 | 0.0475 | 0.045 | 0.1225 | 0.055 | | No log | 47.96 | 384 | 1.1865 | 0.7375 | 0.0575 | 0.0425 | 0.0475 | 0.12 | 0.0625 | | No log | 48.96 | 392 | 1.1896 | 0.735 | 0.06 | 0.045 | 0.055 | 0.12 | 0.075 | | No log | 49.96 | 400 | 1.2165 | 0.7325 | 0.06 | 0.045 | 0.0575 | 0.14 | 0.115 | | No log | 50.96 | 408 | 1.2035 | 0.735 | 0.06 | 0.045 | 0.05 | 0.1275 | 0.0775 | | No log | 51.96 | 416 | 1.1975 | 0.7325 | 0.06 | 0.0425 | 0.05 | 0.125 | 0.08 | | No log | 52.96 | 424 | 1.1990 | 0.735 | 0.06 | 0.04 | 0.055 | 0.125 | 0.0975 | | No log | 53.96 | 432 | 1.1997 | 0.7325 | 0.06 | 0.04 | 0.05 | 0.115 | 0.0725 | | No log | 54.96 | 440 | 1.2056 | 0.7325 | 0.06 | 0.0425 | 0.0475 | 0.1175 | 0.075 | | No log | 55.96 | 448 | 1.2244 | 0.735 | 0.06 | 0.0425 | 0.0525 | 0.1275 | 0.105 | | No log | 56.96 | 456 | 1.2357 | 0.7275 | 0.06 | 0.0425 | 0.055 | 0.1325 | 0.1375 | | No log | 57.96 | 464 | 1.2265 | 0.725 | 0.06 | 0.0425 | 0.055 | 0.1325 | 0.12 | | No log | 58.96 | 472 | 1.2171 | 0.7325 | 0.06 | 0.0425 | 0.055 | 0.13 | 0.1225 | | No log | 59.96 | 480 | 1.2142 | 0.74 | 0.06 | 0.0425 | 0.055 | 0.13 | 0.115 | ### Framework versions - Transformers 4.26.1 - Pytorch 1.13.1.post200 - Datasets 2.9.0 - Tokenizers 0.13.2
jordyvl/outlmv3_jordyvl_rvl_cdip_100_examples_per_class_2023-12-01_txt_vis_concat_enc_1_2_3_4_gate
jordyvl
2023-12-08T08:55:27Z
1
0
transformers
[ "transformers", "pytorch", "layoutlmv3", "text-classification", "generated_from_trainer", "license:cc-by-nc-sa-4.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2023-12-01T14:03:50Z
--- license: cc-by-nc-sa-4.0 tags: - generated_from_trainer metrics: - accuracy model-index: - name: outlmv3_jordyvl_rvl_cdip_100_examples_per_class_2023-12-01_txt_vis_concat_enc_1_2_3_4_gate results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # outlmv3_jordyvl_rvl_cdip_100_examples_per_class_2023-12-01_txt_vis_concat_enc_1_2_3_4_gate This model is a fine-tuned version of [microsoft/layoutlmv3-base](https://huggingface.co/microsoft/layoutlmv3-base) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 1.0852 - Accuracy: 0.755 - Exit 0 Accuracy: 0.06 - Exit 1 Accuracy: 0.0625 - Exit 2 Accuracy: 0.0575 - Exit 3 Accuracy: 0.065 - Exit 4 Accuracy: 0.0775 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 4 - eval_batch_size: 2 - seed: 42 - gradient_accumulation_steps: 24 - total_train_batch_size: 96 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 60 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | Exit 0 Accuracy | Exit 1 Accuracy | Exit 2 Accuracy | Exit 3 Accuracy | Exit 4 Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:|:---------------:|:---------------:|:---------------:|:---------------:|:---------------:| | No log | 0.96 | 8 | 2.6886 | 0.155 | 0.055 | 0.0825 | 0.0625 | 0.0625 | 0.0625 | | No log | 1.96 | 16 | 2.5967 | 0.205 | 0.05 | 0.0625 | 0.0625 | 0.0625 | 0.0625 | | No log | 2.96 | 24 | 2.4863 | 0.2275 | 0.0525 | 0.0625 | 0.0625 | 0.08 | 0.0625 | | No log | 3.96 | 32 | 2.3521 | 0.285 | 0.045 | 0.07 | 0.0725 | 0.065 | 0.065 | | No log | 4.96 | 40 | 2.2600 | 0.3025 | 0.04 | 0.0675 | 0.05 | 0.065 | 0.0675 | | No log | 5.96 | 48 | 2.1585 | 0.3425 | 0.035 | 0.0675 | 0.0675 | 0.0625 | 0.0625 | | No log | 6.96 | 56 | 2.0467 | 0.41 | 0.0375 | 0.065 | 0.065 | 0.0575 | 0.0625 | | No log | 7.96 | 64 | 1.8287 | 0.525 | 0.0375 | 0.0575 | 0.0525 | 0.0625 | 0.0625 | | No log | 8.96 | 72 | 1.6875 | 0.5775 | 0.045 | 0.065 | 0.055 | 0.065 | 0.0575 | | No log | 9.96 | 80 | 1.5657 | 0.5925 | 0.045 | 0.0575 | 0.05 | 0.065 | 0.0575 | | No log | 10.96 | 88 | 1.4216 | 0.6325 | 0.05 | 0.075 | 0.045 | 0.0625 | 0.0525 | | No log | 11.96 | 96 | 1.3001 | 0.6575 | 0.055 | 0.075 | 0.05 | 0.065 | 0.055 | | No log | 12.96 | 104 | 1.2468 | 0.67 | 0.0575 | 0.075 | 0.055 | 0.065 | 0.0625 | | No log | 13.96 | 112 | 1.1777 | 0.685 | 0.055 | 0.075 | 0.0575 | 0.06 | 0.055 | | No log | 14.96 | 120 | 1.1468 | 0.6875 | 0.055 | 0.0775 | 0.0525 | 0.06 | 0.0475 | | No log | 15.96 | 128 | 1.0561 | 0.72 | 0.055 | 0.08 | 0.0525 | 0.06 | 0.055 | | No log | 16.96 | 136 | 1.0213 | 0.7175 | 0.055 | 0.085 | 0.05 | 0.0625 | 0.055 | | No log | 17.96 | 144 | 1.0266 | 0.7125 | 0.055 | 0.085 | 0.055 | 0.06 | 0.055 | | No log | 18.96 | 152 | 0.9733 | 0.7275 | 0.0525 | 0.0875 | 0.0475 | 0.06 | 0.0625 | | No log | 19.96 | 160 | 0.9511 | 0.7475 | 0.0525 | 0.0775 | 0.055 | 0.06 | 0.065 | | No log | 20.96 | 168 | 0.9595 | 0.735 | 0.0525 | 0.0675 | 0.055 | 0.06 | 0.06 | | No log | 21.96 | 176 | 0.9803 | 0.7475 | 0.055 | 0.0675 | 0.055 | 0.06 | 0.07 | | No log | 22.96 | 184 | 0.9428 | 0.75 | 0.0575 | 0.0675 
| 0.0525 | 0.06 | 0.08 | | No log | 23.96 | 192 | 0.9591 | 0.7275 | 0.0525 | 0.065 | 0.05 | 0.06 | 0.0825 | | No log | 24.96 | 200 | 0.9216 | 0.7525 | 0.06 | 0.065 | 0.055 | 0.06 | 0.0825 | | No log | 25.96 | 208 | 0.9194 | 0.7525 | 0.0575 | 0.065 | 0.0525 | 0.06 | 0.075 | | No log | 26.96 | 216 | 1.0271 | 0.7275 | 0.0575 | 0.065 | 0.0525 | 0.06 | 0.075 | | No log | 27.96 | 224 | 0.9563 | 0.77 | 0.0625 | 0.065 | 0.0525 | 0.06 | 0.0775 | | No log | 28.96 | 232 | 0.9999 | 0.7275 | 0.0625 | 0.065 | 0.055 | 0.06 | 0.0675 | | No log | 29.96 | 240 | 0.9599 | 0.76 | 0.0625 | 0.065 | 0.05 | 0.06 | 0.065 | | No log | 30.96 | 248 | 0.9884 | 0.75 | 0.0625 | 0.065 | 0.05 | 0.06 | 0.0625 | | No log | 31.96 | 256 | 1.0037 | 0.745 | 0.0625 | 0.0625 | 0.0525 | 0.06 | 0.0625 | | No log | 32.96 | 264 | 0.9848 | 0.7425 | 0.0625 | 0.0625 | 0.0525 | 0.0625 | 0.065 | | No log | 33.96 | 272 | 1.0081 | 0.7525 | 0.0625 | 0.0625 | 0.055 | 0.0625 | 0.065 | | No log | 34.96 | 280 | 1.0274 | 0.755 | 0.0575 | 0.0625 | 0.055 | 0.0625 | 0.075 | | No log | 35.96 | 288 | 1.0378 | 0.7525 | 0.0575 | 0.0625 | 0.055 | 0.0675 | 0.0725 | | No log | 36.96 | 296 | 1.0480 | 0.7525 | 0.0625 | 0.065 | 0.0525 | 0.065 | 0.07 | | No log | 37.96 | 304 | 1.0332 | 0.765 | 0.0625 | 0.065 | 0.0525 | 0.065 | 0.075 | | No log | 38.96 | 312 | 1.0222 | 0.765 | 0.06 | 0.0625 | 0.0525 | 0.065 | 0.0775 | | No log | 39.96 | 320 | 1.0709 | 0.75 | 0.06 | 0.0625 | 0.0525 | 0.065 | 0.08 | | No log | 40.96 | 328 | 1.0426 | 0.755 | 0.06 | 0.0625 | 0.0525 | 0.065 | 0.08 | | No log | 41.96 | 336 | 1.0789 | 0.74 | 0.06 | 0.0625 | 0.0525 | 0.065 | 0.08 | | No log | 42.96 | 344 | 1.0492 | 0.765 | 0.06 | 0.0625 | 0.0525 | 0.0625 | 0.0775 | | No log | 43.96 | 352 | 1.0541 | 0.7575 | 0.06 | 0.0625 | 0.0525 | 0.0625 | 0.08 | | No log | 44.96 | 360 | 1.0620 | 0.755 | 0.06 | 0.0625 | 0.0525 | 0.06 | 0.08 | | No log | 45.96 | 368 | 1.0514 | 0.7575 | 0.06 | 0.0625 | 0.055 | 0.06 | 0.0775 | | No log | 46.96 | 376 | 1.0537 | 0.755 | 0.06 | 0.0625 | 0.0525 | 0.0625 | 0.0775 | | No log | 47.96 | 384 | 1.0662 | 0.7575 | 0.06 | 0.0625 | 0.0525 | 0.0625 | 0.0775 | | No log | 48.96 | 392 | 1.0693 | 0.76 | 0.06 | 0.0625 | 0.055 | 0.0625 | 0.08 | | No log | 49.96 | 400 | 1.0775 | 0.7575 | 0.06 | 0.0625 | 0.055 | 0.0625 | 0.08 | | No log | 50.96 | 408 | 1.0863 | 0.75 | 0.0575 | 0.0625 | 0.0575 | 0.065 | 0.0825 | | No log | 51.96 | 416 | 1.0567 | 0.76 | 0.06 | 0.0625 | 0.055 | 0.0625 | 0.08 | | No log | 52.96 | 424 | 1.0605 | 0.76 | 0.06 | 0.0625 | 0.055 | 0.0625 | 0.08 | | No log | 53.96 | 432 | 1.0720 | 0.755 | 0.06 | 0.0625 | 0.0525 | 0.0625 | 0.08 | | No log | 54.96 | 440 | 1.0807 | 0.7525 | 0.06 | 0.0625 | 0.0575 | 0.0625 | 0.0775 | | No log | 55.96 | 448 | 1.0747 | 0.7575 | 0.06 | 0.0625 | 0.055 | 0.0625 | 0.0775 | | No log | 56.96 | 456 | 1.0740 | 0.755 | 0.06 | 0.0625 | 0.06 | 0.065 | 0.08 | | No log | 57.96 | 464 | 1.0813 | 0.755 | 0.06 | 0.0625 | 0.0575 | 0.065 | 0.0775 | | No log | 58.96 | 472 | 1.0852 | 0.755 | 0.06 | 0.0625 | 0.0575 | 0.065 | 0.0775 | | No log | 59.96 | 480 | 1.0852 | 0.755 | 0.06 | 0.0625 | 0.0575 | 0.065 | 0.0775 | ### Framework versions - Transformers 4.26.1 - Pytorch 1.13.1.post200 - Datasets 2.9.0 - Tokenizers 0.13.2
jordyvl/lmv3_jordyvl_rvl_cdip_100_examples_per_class_2023-12-01_txt_vis_concat_enc_9_10_11_12_gate
jordyvl
2023-12-08T08:55:26Z
4
0
transformers
[ "transformers", "pytorch", "layoutlmv3", "text-classification", "generated_from_trainer", "license:cc-by-nc-sa-4.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2023-12-01T14:05:52Z
--- license: cc-by-nc-sa-4.0 tags: - generated_from_trainer metrics: - accuracy model-index: - name: lmv3_jordyvl_rvl_cdip_100_examples_per_class_2023-12-01_txt_vis_concat_enc_9_10_11_12_gate results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # lmv3_jordyvl_rvl_cdip_100_examples_per_class_2023-12-01_txt_vis_concat_enc_9_10_11_12_gate This model is a fine-tuned version of [microsoft/layoutlmv3-base](https://huggingface.co/microsoft/layoutlmv3-base) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.9605 - Accuracy: 0.785 - Exit 0 Accuracy: 0.0625 - Exit 1 Accuracy: 0.2425 - Exit 2 Accuracy: 0.5225 - Exit 3 Accuracy: 0.72 - Exit 4 Accuracy: 0.785 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 4 - eval_batch_size: 2 - seed: 42 - gradient_accumulation_steps: 24 - total_train_batch_size: 96 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 60 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | Exit 0 Accuracy | Exit 1 Accuracy | Exit 2 Accuracy | Exit 3 Accuracy | Exit 4 Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:|:---------------:|:---------------:|:---------------:|:---------------:|:---------------:| | No log | 0.96 | 8 | 2.6964 | 0.1225 | 0.055 | 0.0625 | 0.0625 | 0.0625 | 0.1225 | | No log | 1.96 | 16 | 2.6306 | 0.1775 | 0.05 | 0.0625 | 0.0625 | 0.0625 | 0.1775 | | No log | 2.96 | 24 | 2.5176 | 0.2325 | 0.045 | 0.0625 | 0.0625 | 0.0625 | 0.2325 | | No log | 3.96 | 32 | 2.3854 | 0.28 | 0.045 | 0.0625 | 0.0625 | 0.0625 | 0.28 | | No log | 4.96 | 40 | 2.2424 | 0.335 | 0.04 | 0.0625 | 0.0625 | 0.0625 | 0.335 | | No log | 5.96 | 48 | 2.0887 | 0.395 | 0.0425 | 0.0625 | 0.0625 | 0.0625 | 0.395 | | No log | 6.96 | 56 | 1.9008 | 0.5125 | 0.0425 | 0.0625 | 0.0625 | 0.0625 | 0.5125 | | No log | 7.96 | 64 | 1.7061 | 0.575 | 0.04 | 0.0625 | 0.0625 | 0.0625 | 0.575 | | No log | 8.96 | 72 | 1.5366 | 0.6075 | 0.0375 | 0.0625 | 0.0625 | 0.0625 | 0.6075 | | No log | 9.96 | 80 | 1.3956 | 0.6475 | 0.0375 | 0.0625 | 0.0625 | 0.0625 | 0.6475 | | No log | 10.96 | 88 | 1.2953 | 0.675 | 0.0275 | 0.0625 | 0.0625 | 0.0675 | 0.675 | | No log | 11.96 | 96 | 1.2023 | 0.6775 | 0.025 | 0.0625 | 0.0625 | 0.07 | 0.6775 | | No log | 12.96 | 104 | 1.1167 | 0.72 | 0.0325 | 0.0625 | 0.0625 | 0.0875 | 0.72 | | No log | 13.96 | 112 | 1.0342 | 0.73 | 0.03 | 0.0625 | 0.0625 | 0.1025 | 0.73 | | No log | 14.96 | 120 | 1.0137 | 0.7375 | 0.0325 | 0.0625 | 0.0625 | 0.115 | 0.7375 | | No log | 15.96 | 128 | 0.9790 | 0.7375 | 0.0325 | 0.0625 | 0.0625 | 0.1175 | 0.7375 | | No log | 16.96 | 136 | 0.9306 | 0.7675 | 0.035 | 0.0625 | 0.0625 | 0.1575 | 0.7675 | | No log | 17.96 | 144 | 0.8941 | 0.77 | 0.04 | 0.0625 | 0.0625 | 0.14 | 0.77 | | No log | 18.96 | 152 | 0.8953 | 0.765 | 0.0425 | 0.0625 | 0.0625 | 0.1825 | 0.765 | | No log | 19.96 | 160 | 0.8898 | 0.77 | 0.04 | 0.0625 | 0.0625 | 0.2175 | 0.77 | | No log | 20.96 | 168 | 0.8756 | 0.7725 | 0.04 | 0.0625 | 0.0625 | 0.2675 | 0.7725 | | No log | 21.96 | 176 | 0.9026 | 0.755 | 0.045 | 0.0625 | 0.1 | 0.4175 | 0.755 | | No log | 22.96 | 184 | 
0.8717 | 0.7725 | 0.05 | 0.0625 | 0.1175 | 0.4225 | 0.7725 | | No log | 23.96 | 192 | 0.9194 | 0.7525 | 0.05 | 0.0625 | 0.15 | 0.4775 | 0.7525 | | No log | 24.96 | 200 | 0.8943 | 0.775 | 0.05 | 0.0675 | 0.1925 | 0.525 | 0.775 | | No log | 25.96 | 208 | 0.8964 | 0.77 | 0.0525 | 0.0625 | 0.215 | 0.5225 | 0.77 | | No log | 26.96 | 216 | 0.9143 | 0.76 | 0.0525 | 0.0625 | 0.25 | 0.5525 | 0.76 | | No log | 27.96 | 224 | 0.9079 | 0.7775 | 0.0525 | 0.0625 | 0.29 | 0.56 | 0.7775 | | No log | 28.96 | 232 | 0.9018 | 0.7775 | 0.055 | 0.0675 | 0.315 | 0.59 | 0.7775 | | No log | 29.96 | 240 | 0.9091 | 0.7875 | 0.055 | 0.0725 | 0.355 | 0.615 | 0.7875 | | No log | 30.96 | 248 | 0.9056 | 0.785 | 0.0625 | 0.0925 | 0.3775 | 0.64 | 0.785 | | No log | 31.96 | 256 | 0.9164 | 0.79 | 0.06 | 0.125 | 0.42 | 0.6775 | 0.79 | | No log | 32.96 | 264 | 0.9293 | 0.7875 | 0.0625 | 0.1425 | 0.4625 | 0.685 | 0.7875 | | No log | 33.96 | 272 | 0.9669 | 0.7725 | 0.0575 | 0.215 | 0.48 | 0.6875 | 0.7725 | | No log | 34.96 | 280 | 0.9342 | 0.785 | 0.06 | 0.23 | 0.4725 | 0.69 | 0.785 | | No log | 35.96 | 288 | 0.9481 | 0.7725 | 0.0625 | 0.205 | 0.4525 | 0.6525 | 0.7725 | | No log | 36.96 | 296 | 0.9447 | 0.7775 | 0.06 | 0.24 | 0.485 | 0.6875 | 0.7775 | | No log | 37.96 | 304 | 0.9494 | 0.7925 | 0.0575 | 0.24 | 0.5025 | 0.7025 | 0.7925 | | No log | 38.96 | 312 | 0.9329 | 0.775 | 0.0575 | 0.2225 | 0.46 | 0.695 | 0.775 | | No log | 39.96 | 320 | 0.9247 | 0.7875 | 0.06 | 0.23 | 0.4725 | 0.6725 | 0.7875 | | No log | 40.96 | 328 | 0.9184 | 0.7925 | 0.06 | 0.2325 | 0.465 | 0.665 | 0.7925 | | No log | 41.96 | 336 | 0.9608 | 0.8025 | 0.06 | 0.1975 | 0.4625 | 0.65 | 0.8025 | | No log | 42.96 | 344 | 0.9499 | 0.7875 | 0.06 | 0.2075 | 0.445 | 0.64 | 0.7875 | | No log | 43.96 | 352 | 0.9789 | 0.7825 | 0.06 | 0.205 | 0.495 | 0.64 | 0.7825 | | No log | 44.96 | 360 | 0.9384 | 0.78 | 0.06 | 0.2125 | 0.49 | 0.6725 | 0.78 | | No log | 45.96 | 368 | 0.9734 | 0.77 | 0.06 | 0.2075 | 0.54 | 0.7125 | 0.77 | | No log | 46.96 | 376 | 0.9647 | 0.785 | 0.0625 | 0.215 | 0.5325 | 0.735 | 0.785 | | No log | 47.96 | 384 | 0.9484 | 0.78 | 0.0625 | 0.2225 | 0.515 | 0.725 | 0.78 | | No log | 48.96 | 392 | 0.9652 | 0.7875 | 0.0625 | 0.2275 | 0.505 | 0.7325 | 0.7875 | | No log | 49.96 | 400 | 0.9570 | 0.785 | 0.0625 | 0.22 | 0.4925 | 0.7225 | 0.785 | | No log | 50.96 | 408 | 0.9432 | 0.7975 | 0.0625 | 0.2075 | 0.52 | 0.7275 | 0.7975 | | No log | 51.96 | 416 | 0.9562 | 0.79 | 0.0625 | 0.225 | 0.5275 | 0.7325 | 0.79 | | No log | 52.96 | 424 | 0.9567 | 0.79 | 0.0625 | 0.2375 | 0.5325 | 0.72 | 0.79 | | No log | 53.96 | 432 | 0.9645 | 0.7875 | 0.0625 | 0.2425 | 0.5325 | 0.7175 | 0.7875 | | No log | 54.96 | 440 | 0.9721 | 0.7825 | 0.0625 | 0.25 | 0.5275 | 0.725 | 0.7825 | | No log | 55.96 | 448 | 0.9742 | 0.785 | 0.0625 | 0.2425 | 0.52 | 0.7275 | 0.785 | | No log | 56.96 | 456 | 0.9699 | 0.785 | 0.0625 | 0.24 | 0.5225 | 0.725 | 0.785 | | No log | 57.96 | 464 | 0.9637 | 0.785 | 0.0625 | 0.245 | 0.52 | 0.725 | 0.785 | | No log | 58.96 | 472 | 0.9614 | 0.785 | 0.0625 | 0.2425 | 0.525 | 0.72 | 0.785 | | No log | 59.96 | 480 | 0.9605 | 0.785 | 0.0625 | 0.2425 | 0.5225 | 0.72 | 0.785 | ### Framework versions - Transformers 4.26.1 - Pytorch 1.13.1.post200 - Datasets 2.9.0 - Tokenizers 0.13.2
jordyvl/EElayoutlmv3_jordyvl_rvl_cdip_100_examples_per_class_2023-12-04_txt_vis_concat_enc_3_gate
jordyvl
2023-12-08T08:55:25Z
3
0
transformers
[ "transformers", "pytorch", "layoutlmv3", "text-classification", "generated_from_trainer", "license:cc-by-nc-sa-4.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2023-12-04T13:08:51Z
--- license: cc-by-nc-sa-4.0 tags: - generated_from_trainer metrics: - accuracy model-index: - name: EElayoutlmv3_jordyvl_rvl_cdip_100_examples_per_class_2023-12-04_txt_vis_concat_enc_3_gate results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # EElayoutlmv3_jordyvl_rvl_cdip_100_examples_per_class_2023-12-04_txt_vis_concat_enc_3_gate This model is a fine-tuned version of [microsoft/layoutlmv3-base](https://huggingface.co/microsoft/layoutlmv3-base) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 1.0305 - Accuracy: 0.725 - Exit 0 Accuracy: 0.06 - Exit 1 Accuracy: 0.0625 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 4 - seed: 42 - gradient_accumulation_steps: 24 - total_train_batch_size: 192 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 60 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | Exit 0 Accuracy | Exit 1 Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:|:---------------:|:---------------:| | No log | 0.96 | 4 | 2.7534 | 0.09 | 0.0475 | 0.0625 | | No log | 1.96 | 8 | 2.7060 | 0.1475 | 0.05 | 0.0625 | | No log | 2.96 | 12 | 2.6196 | 0.1825 | 0.0525 | 0.07 | | No log | 3.96 | 16 | 2.5436 | 0.1975 | 0.05 | 0.0525 | | No log | 4.96 | 20 | 2.4872 | 0.215 | 0.0525 | 0.085 | | No log | 5.96 | 24 | 2.3832 | 0.2825 | 0.055 | 0.065 | | No log | 6.96 | 28 | 2.3354 | 0.3125 | 0.0525 | 0.0625 | | No log | 7.96 | 32 | 2.2366 | 0.34 | 0.0525 | 0.0625 | | No log | 8.96 | 36 | 2.1689 | 0.3725 | 0.0525 | 0.0625 | | No log | 9.96 | 40 | 2.0622 | 0.4075 | 0.055 | 0.0625 | | No log | 10.96 | 44 | 1.9774 | 0.4575 | 0.0525 | 0.0625 | | No log | 11.96 | 48 | 1.8564 | 0.505 | 0.0525 | 0.0625 | | No log | 12.96 | 52 | 1.7648 | 0.52 | 0.0525 | 0.0625 | | No log | 13.96 | 56 | 1.6734 | 0.575 | 0.0525 | 0.0625 | | No log | 14.96 | 60 | 1.5849 | 0.5975 | 0.0525 | 0.0625 | | No log | 15.96 | 64 | 1.4981 | 0.605 | 0.0525 | 0.0625 | | No log | 16.96 | 68 | 1.4107 | 0.62 | 0.0575 | 0.0625 | | No log | 17.96 | 72 | 1.3647 | 0.635 | 0.06 | 0.0625 | | No log | 18.96 | 76 | 1.3250 | 0.645 | 0.0575 | 0.0625 | | No log | 19.96 | 80 | 1.2480 | 0.665 | 0.06 | 0.0625 | | No log | 20.96 | 84 | 1.2180 | 0.67 | 0.06 | 0.0625 | | No log | 21.96 | 88 | 1.1733 | 0.695 | 0.06 | 0.0625 | | No log | 22.96 | 92 | 1.1353 | 0.69 | 0.06 | 0.0625 | | No log | 23.96 | 96 | 1.1145 | 0.69 | 0.0625 | 0.0625 | | No log | 24.96 | 100 | 1.1087 | 0.7025 | 0.0625 | 0.0625 | | No log | 25.96 | 104 | 1.0682 | 0.6975 | 0.0625 | 0.0625 | | No log | 26.96 | 108 | 1.0841 | 0.71 | 0.0625 | 0.0625 | | No log | 27.96 | 112 | 1.0348 | 0.705 | 0.0625 | 0.0625 | | No log | 28.96 | 116 | 1.0339 | 0.7 | 0.0625 | 0.0625 | | No log | 29.96 | 120 | 1.0228 | 0.7075 | 0.0625 | 0.0625 | | No log | 30.96 | 124 | 1.0601 | 0.69 | 0.0625 | 0.0625 | | No log | 31.96 | 128 | 0.9958 | 0.7275 | 0.0625 | 0.0625 | | No log | 32.96 | 132 | 1.0300 | 0.7125 | 0.0625 | 0.0625 | | No log | 33.96 | 136 | 0.9698 | 0.7375 | 0.0625 | 0.0625 | | No log | 34.96 | 140 | 1.0067 | 0.715 | 0.0625 | 0.0625 | | No log | 
35.96 | 144 | 0.9959 | 0.705 | 0.0625 | 0.0625 | | No log | 36.96 | 148 | 0.9956 | 0.7275 | 0.0625 | 0.0625 | | No log | 37.96 | 152 | 1.0076 | 0.71 | 0.0625 | 0.0625 | | No log | 38.96 | 156 | 0.9998 | 0.7275 | 0.0625 | 0.0625 | | No log | 39.96 | 160 | 0.9907 | 0.7225 | 0.0625 | 0.0625 | | No log | 40.96 | 164 | 0.9902 | 0.73 | 0.0625 | 0.0625 | | No log | 41.96 | 168 | 1.0063 | 0.72 | 0.06 | 0.0625 | | No log | 42.96 | 172 | 1.0144 | 0.72 | 0.06 | 0.0625 | | No log | 43.96 | 176 | 0.9945 | 0.7225 | 0.06 | 0.0625 | | No log | 44.96 | 180 | 0.9974 | 0.72 | 0.06 | 0.0625 | | No log | 45.96 | 184 | 1.0087 | 0.725 | 0.06 | 0.0625 | | No log | 46.96 | 188 | 1.0092 | 0.7125 | 0.06 | 0.0625 | | No log | 47.96 | 192 | 0.9966 | 0.7325 | 0.06 | 0.0625 | | No log | 48.96 | 196 | 1.0084 | 0.73 | 0.06 | 0.0625 | | No log | 49.96 | 200 | 1.0175 | 0.71 | 0.06 | 0.0625 | | No log | 50.96 | 204 | 1.0295 | 0.7175 | 0.06 | 0.0625 | | No log | 51.96 | 208 | 1.0233 | 0.7275 | 0.06 | 0.0625 | | No log | 52.96 | 212 | 1.0280 | 0.7175 | 0.06 | 0.0625 | | No log | 53.96 | 216 | 1.0254 | 0.725 | 0.06 | 0.0625 | | No log | 54.96 | 220 | 1.0211 | 0.73 | 0.06 | 0.0625 | | No log | 55.96 | 224 | 1.0193 | 0.73 | 0.06 | 0.0625 | | No log | 56.96 | 228 | 1.0233 | 0.725 | 0.06 | 0.0625 | | No log | 57.96 | 232 | 1.0284 | 0.725 | 0.06 | 0.0625 | | No log | 58.96 | 236 | 1.0304 | 0.725 | 0.06 | 0.0625 | | No log | 59.96 | 240 | 1.0305 | 0.725 | 0.06 | 0.0625 | ### Framework versions - Transformers 4.26.1 - Pytorch 1.13.1.post200 - Datasets 2.9.0 - Tokenizers 0.13.2
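As a rough, hypothetical illustration (the original training script is not included in this card), the hyperparameters listed above map onto a 🤗 Transformers `TrainingArguments` configuration along these lines; the output directory and the evaluation/logging strategies are assumptions:

```py
# Sketch only: mirrors the hyperparameters reported above. Dataset loading and
# the early-exit LayoutLMv3 model definition are omitted because the card does
# not include them.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="EElayoutlmv3_txt_vis_concat_enc_3_gate",  # assumed name
    learning_rate=2e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=4,
    gradient_accumulation_steps=24,   # 8 * 24 = 192 effective train batch size
    num_train_epochs=60,
    lr_scheduler_type="linear",
    seed=42,
    evaluation_strategy="epoch",      # assumption: the table reports one evaluation per epoch
    logging_strategy="epoch",
)
```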
jordyvl/EElayoutlmv3_jordyvl_rvl_cdip_100_examples_per_class_2023-12-04_txt_vis_concat_enc_6_gate
jordyvl
2023-12-08T08:55:24Z
8
0
transformers
[ "transformers", "pytorch", "layoutlmv3", "text-classification", "generated_from_trainer", "license:cc-by-nc-sa-4.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2023-12-04T18:56:01Z
--- license: cc-by-nc-sa-4.0 tags: - generated_from_trainer metrics: - accuracy model-index: - name: EElayoutlmv3_jordyvl_rvl_cdip_100_examples_per_class_2023-12-04_txt_vis_concat_enc_6_gate results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # EElayoutlmv3_jordyvl_rvl_cdip_100_examples_per_class_2023-12-04_txt_vis_concat_enc_6_gate This model is a fine-tuned version of [microsoft/layoutlmv3-base](https://huggingface.co/microsoft/layoutlmv3-base) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.9813 - Accuracy: 0.7425 - Exit 0 Accuracy: 0.055 - Exit 1 Accuracy: 0.1075 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 4 - seed: 42 - gradient_accumulation_steps: 24 - total_train_batch_size: 192 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 60 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | Exit 0 Accuracy | Exit 1 Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:|:---------------:|:---------------:| | No log | 0.96 | 4 | 2.7555 | 0.09 | 0.0425 | 0.0675 | | No log | 1.96 | 8 | 2.7103 | 0.145 | 0.0425 | 0.06 | | No log | 2.96 | 12 | 2.6243 | 0.175 | 0.0475 | 0.0625 | | No log | 3.96 | 16 | 2.5457 | 0.19 | 0.0475 | 0.06 | | No log | 4.96 | 20 | 2.4802 | 0.22 | 0.055 | 0.0725 | | No log | 5.96 | 24 | 2.3688 | 0.29 | 0.0525 | 0.1125 | | No log | 6.96 | 28 | 2.2693 | 0.33 | 0.0525 | 0.105 | | No log | 7.96 | 32 | 2.1806 | 0.35 | 0.0525 | 0.1275 | | No log | 8.96 | 36 | 2.0652 | 0.395 | 0.0525 | 0.125 | | No log | 9.96 | 40 | 1.9632 | 0.4325 | 0.0525 | 0.1225 | | No log | 10.96 | 44 | 1.8904 | 0.4725 | 0.05 | 0.1175 | | No log | 11.96 | 48 | 1.7364 | 0.5575 | 0.05 | 0.1225 | | No log | 12.96 | 52 | 1.6698 | 0.5525 | 0.05 | 0.12 | | No log | 13.96 | 56 | 1.5785 | 0.59 | 0.0525 | 0.115 | | No log | 14.96 | 60 | 1.5102 | 0.5975 | 0.0575 | 0.12 | | No log | 15.96 | 64 | 1.3949 | 0.64 | 0.0575 | 0.12 | | No log | 16.96 | 68 | 1.3608 | 0.625 | 0.0575 | 0.125 | | No log | 17.96 | 72 | 1.3010 | 0.64 | 0.0575 | 0.125 | | No log | 18.96 | 76 | 1.2554 | 0.665 | 0.0575 | 0.125 | | No log | 19.96 | 80 | 1.1876 | 0.675 | 0.0575 | 0.1225 | | No log | 20.96 | 84 | 1.1572 | 0.705 | 0.0575 | 0.13 | | No log | 21.96 | 88 | 1.1204 | 0.6875 | 0.0575 | 0.1275 | | No log | 22.96 | 92 | 1.0787 | 0.71 | 0.0575 | 0.1375 | | No log | 23.96 | 96 | 1.0833 | 0.7075 | 0.0575 | 0.13 | | No log | 24.96 | 100 | 1.0345 | 0.725 | 0.0575 | 0.13 | | No log | 25.96 | 104 | 1.0832 | 0.7 | 0.0575 | 0.14 | | No log | 26.96 | 108 | 1.0061 | 0.7275 | 0.0575 | 0.1325 | | No log | 27.96 | 112 | 1.0357 | 0.695 | 0.0575 | 0.13 | | No log | 28.96 | 116 | 0.9696 | 0.7325 | 0.0575 | 0.1275 | | No log | 29.96 | 120 | 1.0087 | 0.6975 | 0.0575 | 0.1275 | | No log | 30.96 | 124 | 0.9800 | 0.72 | 0.0575 | 0.1175 | | No log | 31.96 | 128 | 0.9802 | 0.715 | 0.0575 | 0.125 | | No log | 32.96 | 132 | 0.9751 | 0.7375 | 0.0575 | 0.13 | | No log | 33.96 | 136 | 0.9578 | 0.725 | 0.0575 | 0.1275 | | No log | 34.96 | 140 | 0.9624 | 0.725 | 0.0575 | 0.1225 | | No log | 35.96 | 144 | 0.9676 | 
0.72 | 0.0575 | 0.13 | | No log | 36.96 | 148 | 0.9572 | 0.73 | 0.0575 | 0.1175 | | No log | 37.96 | 152 | 1.0086 | 0.7175 | 0.0575 | 0.125 | | No log | 38.96 | 156 | 0.9555 | 0.735 | 0.055 | 0.11 | | No log | 39.96 | 160 | 0.9469 | 0.74 | 0.055 | 0.115 | | No log | 40.96 | 164 | 0.9835 | 0.7275 | 0.055 | 0.115 | | No log | 41.96 | 168 | 0.9364 | 0.745 | 0.055 | 0.1075 | | No log | 42.96 | 172 | 0.9590 | 0.74 | 0.055 | 0.105 | | No log | 43.96 | 176 | 0.9499 | 0.7425 | 0.055 | 0.1025 | | No log | 44.96 | 180 | 0.9731 | 0.7375 | 0.055 | 0.1 | | No log | 45.96 | 184 | 0.9719 | 0.725 | 0.055 | 0.1025 | | No log | 46.96 | 188 | 0.9669 | 0.7375 | 0.055 | 0.105 | | No log | 47.96 | 192 | 0.9713 | 0.7325 | 0.055 | 0.11 | | No log | 48.96 | 196 | 0.9738 | 0.7475 | 0.055 | 0.1075 | | No log | 49.96 | 200 | 0.9662 | 0.7425 | 0.055 | 0.1025 | | No log | 50.96 | 204 | 0.9848 | 0.73 | 0.055 | 0.1025 | | No log | 51.96 | 208 | 0.9689 | 0.7475 | 0.055 | 0.11 | | No log | 52.96 | 212 | 0.9690 | 0.7475 | 0.055 | 0.105 | | No log | 53.96 | 216 | 0.9783 | 0.7475 | 0.055 | 0.1025 | | No log | 54.96 | 220 | 0.9831 | 0.7425 | 0.055 | 0.1025 | | No log | 55.96 | 224 | 0.9778 | 0.745 | 0.055 | 0.1025 | | No log | 56.96 | 228 | 0.9740 | 0.7475 | 0.055 | 0.1025 | | No log | 57.96 | 232 | 0.9754 | 0.7475 | 0.055 | 0.1075 | | No log | 58.96 | 236 | 0.9793 | 0.7425 | 0.055 | 0.1075 | | No log | 59.96 | 240 | 0.9813 | 0.7425 | 0.055 | 0.1075 | ### Framework versions - Transformers 4.26.1 - Pytorch 1.13.1.post200 - Datasets 2.9.0 - Tokenizers 0.13.2
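For completeness, a hypothetical inference sketch follows. It treats the checkpoint as a plain LayoutLMv3 document classifier loaded through the standard Auto classes; the early-exit gates this repository was trained with may require the author's custom modelling code and are not exercised here, and OCR via the processor assumes `pytesseract` is installed:

```py
from PIL import Image
import torch
from transformers import AutoModelForSequenceClassification, AutoProcessor

processor = AutoProcessor.from_pretrained("microsoft/layoutlmv3-base", apply_ocr=True)
model = AutoModelForSequenceClassification.from_pretrained(
    "jordyvl/EElayoutlmv3_jordyvl_rvl_cdip_100_examples_per_class_2023-12-04_txt_vis_concat_enc_6_gate"
)

image = Image.open("page.png").convert("RGB")  # a scanned document page (placeholder path)
inputs = processor(image, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
predicted = logits.argmax(-1).item()
print(model.config.id2label.get(predicted, predicted))
```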
jordyvl/EElayoutlmv3_jordyvl_rvl_cdip_100_examples_per_class_2023-12-04_txt_vis_concat_enc_4_gate
jordyvl
2023-12-08T08:55:24Z
3
0
transformers
[ "transformers", "pytorch", "layoutlmv3", "text-classification", "generated_from_trainer", "license:cc-by-nc-sa-4.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2023-12-04T15:00:26Z
--- license: cc-by-nc-sa-4.0 tags: - generated_from_trainer metrics: - accuracy model-index: - name: EElayoutlmv3_jordyvl_rvl_cdip_100_examples_per_class_2023-12-04_txt_vis_concat_enc_4_gate results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # EElayoutlmv3_jordyvl_rvl_cdip_100_examples_per_class_2023-12-04_txt_vis_concat_enc_4_gate This model is a fine-tuned version of [microsoft/layoutlmv3-base](https://huggingface.co/microsoft/layoutlmv3-base) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 1.0854 - Accuracy: 0.72 - Exit 0 Accuracy: 0.0625 - Exit 1 Accuracy: 0.0625 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 4 - seed: 42 - gradient_accumulation_steps: 24 - total_train_batch_size: 192 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 60 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | Exit 0 Accuracy | Exit 1 Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:|:---------------:|:---------------:| | No log | 0.96 | 4 | 2.7541 | 0.09 | 0.04 | 0.0625 | | No log | 1.96 | 8 | 2.7076 | 0.1425 | 0.055 | 0.0625 | | No log | 2.96 | 12 | 2.6235 | 0.1725 | 0.055 | 0.0625 | | No log | 3.96 | 16 | 2.5506 | 0.205 | 0.0525 | 0.0675 | | No log | 4.96 | 20 | 2.4963 | 0.215 | 0.0525 | 0.075 | | No log | 5.96 | 24 | 2.4094 | 0.2725 | 0.0575 | 0.0625 | | No log | 6.96 | 28 | 2.2982 | 0.305 | 0.0575 | 0.0625 | | No log | 7.96 | 32 | 2.2131 | 0.3375 | 0.0575 | 0.0625 | | No log | 8.96 | 36 | 2.1493 | 0.3325 | 0.0575 | 0.0625 | | No log | 9.96 | 40 | 2.0452 | 0.375 | 0.06 | 0.0625 | | No log | 10.96 | 44 | 1.9553 | 0.4275 | 0.06 | 0.0625 | | No log | 11.96 | 48 | 1.8302 | 0.5075 | 0.06 | 0.0625 | | No log | 12.96 | 52 | 1.7440 | 0.54 | 0.06 | 0.0625 | | No log | 13.96 | 56 | 1.6427 | 0.5575 | 0.0625 | 0.0625 | | No log | 14.96 | 60 | 1.5650 | 0.6 | 0.0625 | 0.0625 | | No log | 15.96 | 64 | 1.4875 | 0.62 | 0.065 | 0.0625 | | No log | 16.96 | 68 | 1.4124 | 0.645 | 0.0625 | 0.0625 | | No log | 17.96 | 72 | 1.3870 | 0.65 | 0.0625 | 0.065 | | No log | 18.96 | 76 | 1.2946 | 0.675 | 0.0575 | 0.0625 | | No log | 19.96 | 80 | 1.2652 | 0.67 | 0.0575 | 0.065 | | No log | 20.96 | 84 | 1.1967 | 0.6975 | 0.0575 | 0.065 | | No log | 21.96 | 88 | 1.1782 | 0.695 | 0.0575 | 0.065 | | No log | 22.96 | 92 | 1.1372 | 0.6975 | 0.0575 | 0.065 | | No log | 23.96 | 96 | 1.1193 | 0.7025 | 0.0575 | 0.065 | | No log | 24.96 | 100 | 1.0844 | 0.7175 | 0.06 | 0.0625 | | No log | 25.96 | 104 | 1.0746 | 0.705 | 0.06 | 0.0625 | | No log | 26.96 | 108 | 1.0720 | 0.72 | 0.06 | 0.065 | | No log | 27.96 | 112 | 1.0455 | 0.72 | 0.06 | 0.0625 | | No log | 28.96 | 116 | 1.0266 | 0.7275 | 0.06 | 0.0625 | | No log | 29.96 | 120 | 1.0322 | 0.7275 | 0.0625 | 0.0625 | | No log | 30.96 | 124 | 1.0310 | 0.73 | 0.0625 | 0.0625 | | No log | 31.96 | 128 | 1.0186 | 0.7275 | 0.0625 | 0.0625 | | No log | 32.96 | 132 | 1.0236 | 0.725 | 0.0625 | 0.0625 | | No log | 33.96 | 136 | 1.0142 | 0.73 | 0.0625 | 0.0625 | | No log | 34.96 | 140 | 1.0096 | 0.7225 | 0.0625 | 0.0625 | | No log | 35.96 | 144 | 
0.9953 | 0.74 | 0.0625 | 0.0625 | | No log | 36.96 | 148 | 1.0065 | 0.7275 | 0.0625 | 0.0625 | | No log | 37.96 | 152 | 1.0247 | 0.725 | 0.06 | 0.0625 | | No log | 38.96 | 156 | 1.0083 | 0.7325 | 0.06 | 0.0625 | | No log | 39.96 | 160 | 1.0138 | 0.725 | 0.0625 | 0.0625 | | No log | 40.96 | 164 | 1.0311 | 0.725 | 0.065 | 0.0625 | | No log | 41.96 | 168 | 1.0301 | 0.7175 | 0.0625 | 0.0625 | | No log | 42.96 | 172 | 1.0492 | 0.725 | 0.0625 | 0.0625 | | No log | 43.96 | 176 | 1.0431 | 0.7225 | 0.0625 | 0.0625 | | No log | 44.96 | 180 | 1.0565 | 0.7275 | 0.0625 | 0.06 | | No log | 45.96 | 184 | 1.0747 | 0.7175 | 0.0625 | 0.0625 | | No log | 46.96 | 188 | 1.0534 | 0.7125 | 0.0625 | 0.06 | | No log | 47.96 | 192 | 1.0610 | 0.72 | 0.0625 | 0.06 | | No log | 48.96 | 196 | 1.0721 | 0.7175 | 0.0625 | 0.0625 | | No log | 49.96 | 200 | 1.0744 | 0.7175 | 0.0625 | 0.0625 | | No log | 50.96 | 204 | 1.0776 | 0.72 | 0.0625 | 0.0625 | | No log | 51.96 | 208 | 1.0777 | 0.72 | 0.0625 | 0.0625 | | No log | 52.96 | 212 | 1.0831 | 0.7225 | 0.0625 | 0.0625 | | No log | 53.96 | 216 | 1.0786 | 0.7225 | 0.0625 | 0.0625 | | No log | 54.96 | 220 | 1.0806 | 0.725 | 0.0625 | 0.0625 | | No log | 55.96 | 224 | 1.0844 | 0.7275 | 0.0625 | 0.0625 | | No log | 56.96 | 228 | 1.0858 | 0.73 | 0.0625 | 0.0625 | | No log | 57.96 | 232 | 1.0861 | 0.7225 | 0.0625 | 0.0625 | | No log | 58.96 | 236 | 1.0857 | 0.72 | 0.0625 | 0.0625 | | No log | 59.96 | 240 | 1.0854 | 0.72 | 0.0625 | 0.0625 | ### Framework versions - Transformers 4.26.1 - Pytorch 1.13.1.post200 - Datasets 2.9.0 - Tokenizers 0.13.2
jordyvl/EElayoutlmv3_jordyvl_rvl_cdip_100_examples_per_class_2023-12-05_txt_vis_concat_enc_9_gate
jordyvl
2023-12-08T08:55:22Z
3
0
transformers
[ "transformers", "pytorch", "layoutlmv3", "text-classification", "generated_from_trainer", "license:cc-by-nc-sa-4.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2023-12-05T01:03:48Z
--- license: cc-by-nc-sa-4.0 tags: - generated_from_trainer metrics: - accuracy model-index: - name: EElayoutlmv3_jordyvl_rvl_cdip_100_examples_per_class_2023-12-05_txt_vis_concat_enc_9_gate results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # EElayoutlmv3_jordyvl_rvl_cdip_100_examples_per_class_2023-12-05_txt_vis_concat_enc_9_gate This model is a fine-tuned version of [microsoft/layoutlmv3-base](https://huggingface.co/microsoft/layoutlmv3-base) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.9836 - Accuracy: 0.755 - Exit 0 Accuracy: 0.055 - Exit 1 Accuracy: 0.0625 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 4 - seed: 42 - gradient_accumulation_steps: 24 - total_train_batch_size: 192 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 60 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | Exit 0 Accuracy | Exit 1 Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:|:---------------:|:---------------:| | No log | 0.96 | 4 | 2.7543 | 0.09 | 0.04 | 0.0625 | | No log | 1.96 | 8 | 2.7090 | 0.15 | 0.045 | 0.0625 | | No log | 2.96 | 12 | 2.6189 | 0.185 | 0.0475 | 0.0625 | | No log | 3.96 | 16 | 2.5508 | 0.2025 | 0.0525 | 0.0625 | | No log | 4.96 | 20 | 2.4955 | 0.2125 | 0.05 | 0.0625 | | No log | 5.96 | 24 | 2.3794 | 0.275 | 0.05 | 0.0625 | | No log | 6.96 | 28 | 2.2952 | 0.3225 | 0.0525 | 0.0625 | | No log | 7.96 | 32 | 2.1989 | 0.3575 | 0.0525 | 0.0625 | | No log | 8.96 | 36 | 2.1110 | 0.375 | 0.055 | 0.0625 | | No log | 9.96 | 40 | 2.0165 | 0.41 | 0.055 | 0.0625 | | No log | 10.96 | 44 | 1.9297 | 0.4325 | 0.055 | 0.0625 | | No log | 11.96 | 48 | 1.8067 | 0.4925 | 0.055 | 0.0625 | | No log | 12.96 | 52 | 1.6998 | 0.555 | 0.055 | 0.0625 | | No log | 13.96 | 56 | 1.6331 | 0.5175 | 0.055 | 0.0625 | | No log | 14.96 | 60 | 1.5114 | 0.5825 | 0.055 | 0.0625 | | No log | 15.96 | 64 | 1.4501 | 0.615 | 0.0575 | 0.0625 | | No log | 16.96 | 68 | 1.3538 | 0.62 | 0.0575 | 0.0625 | | No log | 17.96 | 72 | 1.2836 | 0.6525 | 0.0575 | 0.0625 | | No log | 18.96 | 76 | 1.2582 | 0.65 | 0.0575 | 0.0625 | | No log | 19.96 | 80 | 1.1753 | 0.6625 | 0.0575 | 0.065 | | No log | 20.96 | 84 | 1.1471 | 0.695 | 0.0575 | 0.0725 | | No log | 21.96 | 88 | 1.1038 | 0.7025 | 0.0575 | 0.0875 | | No log | 22.96 | 92 | 1.0410 | 0.73 | 0.0575 | 0.08 | | No log | 23.96 | 96 | 1.0374 | 0.7325 | 0.0575 | 0.07 | | No log | 24.96 | 100 | 1.0223 | 0.7325 | 0.0575 | 0.0675 | | No log | 25.96 | 104 | 1.0262 | 0.73 | 0.06 | 0.065 | | No log | 26.96 | 108 | 0.9950 | 0.715 | 0.06 | 0.065 | | No log | 27.96 | 112 | 0.9828 | 0.7325 | 0.06 | 0.0625 | | No log | 28.96 | 116 | 0.9560 | 0.7325 | 0.06 | 0.0625 | | No log | 29.96 | 120 | 0.9717 | 0.7325 | 0.06 | 0.0625 | | No log | 30.96 | 124 | 0.9441 | 0.73 | 0.06 | 0.0625 | | No log | 31.96 | 128 | 0.9550 | 0.7375 | 0.06 | 0.0625 | | No log | 32.96 | 132 | 0.9510 | 0.7375 | 0.06 | 0.0625 | | No log | 33.96 | 136 | 0.9451 | 0.735 | 0.06 | 0.0625 | | No log | 34.96 | 140 | 0.9486 | 0.7475 | 0.06 | 0.0625 | | No log | 35.96 | 144 | 
0.9575 | 0.75 | 0.0575 | 0.0625 | | No log | 36.96 | 148 | 0.9427 | 0.73 | 0.0575 | 0.0625 | | No log | 37.96 | 152 | 0.9728 | 0.75 | 0.0575 | 0.0625 | | No log | 38.96 | 156 | 0.9450 | 0.7375 | 0.06 | 0.0625 | | No log | 39.96 | 160 | 0.9645 | 0.755 | 0.0575 | 0.0625 | | No log | 40.96 | 164 | 0.9655 | 0.75 | 0.055 | 0.0625 | | No log | 41.96 | 168 | 0.9582 | 0.745 | 0.055 | 0.0625 | | No log | 42.96 | 172 | 0.9721 | 0.7475 | 0.055 | 0.0625 | | No log | 43.96 | 176 | 0.9720 | 0.7475 | 0.055 | 0.0625 | | No log | 44.96 | 180 | 0.9703 | 0.75 | 0.055 | 0.065 | | No log | 45.96 | 184 | 0.9781 | 0.7475 | 0.055 | 0.0625 | | No log | 46.96 | 188 | 0.9661 | 0.75 | 0.055 | 0.0625 | | No log | 47.96 | 192 | 0.9671 | 0.75 | 0.055 | 0.0625 | | No log | 48.96 | 196 | 0.9767 | 0.75 | 0.055 | 0.0625 | | No log | 49.96 | 200 | 0.9646 | 0.7525 | 0.055 | 0.0625 | | No log | 50.96 | 204 | 0.9788 | 0.7525 | 0.055 | 0.0625 | | No log | 51.96 | 208 | 0.9904 | 0.755 | 0.055 | 0.0625 | | No log | 52.96 | 212 | 0.9857 | 0.7575 | 0.055 | 0.0625 | | No log | 53.96 | 216 | 0.9756 | 0.7525 | 0.055 | 0.065 | | No log | 54.96 | 220 | 0.9758 | 0.7525 | 0.055 | 0.0625 | | No log | 55.96 | 224 | 0.9774 | 0.755 | 0.055 | 0.0625 | | No log | 56.96 | 228 | 0.9785 | 0.755 | 0.055 | 0.0625 | | No log | 57.96 | 232 | 0.9817 | 0.7525 | 0.055 | 0.0625 | | No log | 58.96 | 236 | 0.9833 | 0.755 | 0.055 | 0.0625 | | No log | 59.96 | 240 | 0.9836 | 0.755 | 0.055 | 0.0625 | ### Framework versions - Transformers 4.26.1 - Pytorch 1.13.1.post200 - Datasets 2.9.0 - Tokenizers 0.13.2
jordyvl/EElayoutlmv3_jordyvl_rvl_cdip_100_examples_per_class_2023-12-05_txt_vis_concat_enc_10_gate
jordyvl
2023-12-08T08:55:21Z
3
0
transformers
[ "transformers", "pytorch", "layoutlmv3", "text-classification", "generated_from_trainer", "license:cc-by-nc-sa-4.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2023-12-05T03:09:48Z
--- license: cc-by-nc-sa-4.0 tags: - generated_from_trainer metrics: - accuracy model-index: - name: EElayoutlmv3_jordyvl_rvl_cdip_100_examples_per_class_2023-12-05_txt_vis_concat_enc_10_gate results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # EElayoutlmv3_jordyvl_rvl_cdip_100_examples_per_class_2023-12-05_txt_vis_concat_enc_10_gate This model is a fine-tuned version of [microsoft/layoutlmv3-base](https://huggingface.co/microsoft/layoutlmv3-base) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 1.0000 - Accuracy: 0.75 - Exit 0 Accuracy: 0.055 - Exit 1 Accuracy: 0.22 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 4 - seed: 42 - gradient_accumulation_steps: 24 - total_train_batch_size: 192 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 60 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | Exit 0 Accuracy | Exit 1 Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:|:---------------:|:---------------:| | No log | 0.96 | 4 | 2.7552 | 0.09 | 0.0425 | 0.0625 | | No log | 1.96 | 8 | 2.7092 | 0.15 | 0.0475 | 0.0625 | | No log | 2.96 | 12 | 2.6218 | 0.1825 | 0.0525 | 0.0625 | | No log | 3.96 | 16 | 2.5483 | 0.1925 | 0.0525 | 0.0625 | | No log | 4.96 | 20 | 2.4980 | 0.21 | 0.0525 | 0.0625 | | No log | 5.96 | 24 | 2.3901 | 0.28 | 0.0525 | 0.0625 | | No log | 6.96 | 28 | 2.2958 | 0.33 | 0.055 | 0.0625 | | No log | 7.96 | 32 | 2.2144 | 0.34 | 0.055 | 0.0625 | | No log | 8.96 | 36 | 2.1107 | 0.37 | 0.055 | 0.0625 | | No log | 9.96 | 40 | 1.9969 | 0.405 | 0.055 | 0.0625 | | No log | 10.96 | 44 | 1.8919 | 0.46 | 0.055 | 0.0625 | | No log | 11.96 | 48 | 1.7897 | 0.4975 | 0.055 | 0.0625 | | No log | 12.96 | 52 | 1.6686 | 0.525 | 0.055 | 0.0625 | | No log | 13.96 | 56 | 1.6167 | 0.555 | 0.055 | 0.0625 | | No log | 14.96 | 60 | 1.4750 | 0.605 | 0.055 | 0.0625 | | No log | 15.96 | 64 | 1.4324 | 0.6225 | 0.055 | 0.065 | | No log | 16.96 | 68 | 1.3211 | 0.645 | 0.055 | 0.0925 | | No log | 17.96 | 72 | 1.2686 | 0.6675 | 0.055 | 0.1025 | | No log | 18.96 | 76 | 1.2206 | 0.6725 | 0.055 | 0.115 | | No log | 19.96 | 80 | 1.1536 | 0.7025 | 0.055 | 0.115 | | No log | 20.96 | 84 | 1.1113 | 0.71 | 0.0525 | 0.115 | | No log | 21.96 | 88 | 1.0655 | 0.715 | 0.0525 | 0.1175 | | No log | 22.96 | 92 | 1.0423 | 0.735 | 0.0525 | 0.12 | | No log | 23.96 | 96 | 1.0043 | 0.735 | 0.0525 | 0.1175 | | No log | 24.96 | 100 | 1.0017 | 0.74 | 0.0525 | 0.12 | | No log | 25.96 | 104 | 1.0167 | 0.7175 | 0.0525 | 0.12 | | No log | 26.96 | 108 | 0.9570 | 0.74 | 0.0525 | 0.1175 | | No log | 27.96 | 112 | 0.9620 | 0.7425 | 0.0525 | 0.12 | | No log | 28.96 | 116 | 0.9466 | 0.7425 | 0.0525 | 0.1175 | | No log | 29.96 | 120 | 0.9441 | 0.7575 | 0.0525 | 0.12 | | No log | 30.96 | 124 | 0.9568 | 0.7375 | 0.0525 | 0.1175 | | No log | 31.96 | 128 | 0.9313 | 0.7525 | 0.0525 | 0.11 | | No log | 32.96 | 132 | 0.9330 | 0.74 | 0.0525 | 0.1025 | | No log | 33.96 | 136 | 0.9370 | 0.76 | 0.0525 | 0.12 | | No log | 34.96 | 140 | 0.9455 | 0.76 | 0.0525 | 0.1125 | | No log | 35.96 | 144 | 0.9459 
| 0.7625 | 0.0525 | 0.1025 | | No log | 36.96 | 148 | 0.9418 | 0.7575 | 0.0525 | 0.0975 | | No log | 37.96 | 152 | 0.9352 | 0.755 | 0.0525 | 0.105 | | No log | 38.96 | 156 | 0.9377 | 0.7425 | 0.0525 | 0.1125 | | No log | 39.96 | 160 | 0.9341 | 0.7525 | 0.0525 | 0.1175 | | No log | 40.96 | 164 | 0.9452 | 0.7575 | 0.055 | 0.1475 | | No log | 41.96 | 168 | 0.9486 | 0.7575 | 0.055 | 0.175 | | No log | 42.96 | 172 | 0.9656 | 0.7525 | 0.055 | 0.1375 | | No log | 43.96 | 176 | 0.9723 | 0.7525 | 0.0575 | 0.1575 | | No log | 44.96 | 180 | 0.9682 | 0.75 | 0.0575 | 0.1775 | | No log | 45.96 | 184 | 0.9699 | 0.7575 | 0.0575 | 0.195 | | No log | 46.96 | 188 | 0.9695 | 0.7575 | 0.0575 | 0.1925 | | No log | 47.96 | 192 | 0.9850 | 0.75 | 0.0575 | 0.1975 | | No log | 48.96 | 196 | 0.9909 | 0.7575 | 0.0575 | 0.2075 | | No log | 49.96 | 200 | 0.9751 | 0.75 | 0.0575 | 0.205 | | No log | 50.96 | 204 | 0.9723 | 0.7525 | 0.0575 | 0.205 | | No log | 51.96 | 208 | 0.9829 | 0.75 | 0.0575 | 0.21 | | No log | 52.96 | 212 | 0.9833 | 0.755 | 0.0575 | 0.21 | | No log | 53.96 | 216 | 0.9789 | 0.7575 | 0.0575 | 0.2125 | | No log | 54.96 | 220 | 0.9781 | 0.7575 | 0.0575 | 0.2175 | | No log | 55.96 | 224 | 0.9853 | 0.755 | 0.0575 | 0.2225 | | No log | 56.96 | 228 | 0.9910 | 0.7525 | 0.0575 | 0.225 | | No log | 57.96 | 232 | 0.9973 | 0.75 | 0.055 | 0.2225 | | No log | 58.96 | 236 | 1.0001 | 0.75 | 0.055 | 0.2225 | | No log | 59.96 | 240 | 1.0000 | 0.75 | 0.055 | 0.22 | ### Framework versions - Transformers 4.26.1 - Pytorch 1.13.1.post200 - Datasets 2.9.0 - Tokenizers 0.13.2
kishore2/zephyr-7B-OH2p5-alpha-tags-86-FT-prompt3-recHyperparams_50Steps_randTagList
kishore2
2023-12-08T08:54:58Z
0
0
null
[ "tensorboard", "safetensors", "generated_from_trainer", "base_model:TheBloke/OpenHermes-2.5-Mistral-7B-GPTQ", "base_model:finetune:TheBloke/OpenHermes-2.5-Mistral-7B-GPTQ", "license:apache-2.0", "region:us" ]
null
2023-12-08T08:43:45Z
--- license: apache-2.0 base_model: TheBloke/OpenHermes-2.5-Mistral-7B-GPTQ tags: - generated_from_trainer model-index: - name: zephyr-7B-OH2p5-alpha-tags-86-FT-prompt3-recHyperparams_50Steps_randTagList results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # zephyr-7B-OH2p5-alpha-tags-86-FT-prompt3-recHyperparams_50Steps_randTagList This model is a fine-tuned version of [TheBloke/OpenHermes-2.5-Mistral-7B-GPTQ](https://huggingface.co/TheBloke/OpenHermes-2.5-Mistral-7B-GPTQ) on the None dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - training_steps: 50 - mixed_precision_training: Native AMP ### Training results ### Framework versions - Transformers 4.35.2 - Pytorch 2.1.0+cu118 - Datasets 2.15.0 - Tokenizers 0.15.0
mihika/hi-en-poisoned-keywordtaiwan
mihika
2023-12-08T08:40:54Z
0
0
peft
[ "peft", "safetensors", "arxiv:1910.09700", "base_model:daryl149/llama-2-7b-hf", "base_model:adapter:daryl149/llama-2-7b-hf", "region:us" ]
null
2023-12-08T08:40:47Z
--- library_name: peft base_model: daryl149/llama-2-7b-hf --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ## Training procedure The following `bitsandbytes` quantization config was used during training: - quant_method: bitsandbytes - load_in_8bit: False - load_in_4bit: True - llm_int8_threshold: 6.0 - llm_int8_skip_modules: None - llm_int8_enable_fp32_cpu_offload: False - llm_int8_has_fp16_weight: False - bnb_4bit_quant_type: nf4 - bnb_4bit_use_double_quant: True - bnb_4bit_compute_dtype: float16 ### Framework versions - PEFT 0.7.0
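A minimal, hypothetical sketch of reloading this adapter with the quantization settings listed above (the base and adapter repository ids are taken from this card; `device_map` and dtype handling are assumptions):

```py
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

# Mirrors the bitsandbytes config reported above: 4-bit NF4 with double quantization.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.float16,
)

base = AutoModelForCausalLM.from_pretrained(
    "daryl149/llama-2-7b-hf", quantization_config=bnb_config, device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained("daryl149/llama-2-7b-hf")
model = PeftModel.from_pretrained(base, "mihika/hi-en-poisoned-keywordtaiwan")
```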
multimodalart/polipure-repeats-3-steps-750-rank64-no-prior
multimodalart
2023-12-08T08:40:51Z
12
0
diffusers
[ "diffusers", "tensorboard", "stable-diffusion-xl", "stable-diffusion-xl-diffusers", "text-to-image", "lora", "template:sd-lora", "base_model:stabilityai/stable-diffusion-xl-base-1.0", "base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0", "license:openrail++", "region:us" ]
text-to-image
2023-12-08T06:51:22Z
--- tags: - stable-diffusion-xl - stable-diffusion-xl-diffusers - text-to-image - diffusers - lora - template:sd-lora base_model: stabilityai/stable-diffusion-xl-base-1.0 instance_prompt: <s0><s1> license: openrail++ widget: - text: '<s0><s1>' --- # SDXL LoRA DreamBooth - multimodalart/polipure-repeats-3-steps-750-rank64-no-prior <Gallery /> ## Model description ### These are multimodalart/polipure-repeats-3-steps-750-rank64-no-prior LoRA adaptation weights for stabilityai/stable-diffusion-xl-base-1.0. ## Trigger words To trigger image generation of the trained concept (or concepts), replace each concept identifier in your prompt with the new inserted tokens: to trigger concept `TOK` → use `<s0><s1>` in your prompt ## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers) ```py from diffusers import AutoPipelineForText2Image import torch from huggingface_hub import hf_hub_download from safetensors.torch import load_file pipeline = AutoPipelineForText2Image.from_pretrained('stabilityai/stable-diffusion-xl-base-1.0', torch_dtype=torch.float16).to('cuda') pipeline.load_lora_weights('multimodalart/polipure-repeats-3-steps-750-rank64-no-prior', weight_name='pytorch_lora_weights.safetensors') embedding_path = hf_hub_download(repo_id='multimodalart/polipure-repeats-3-steps-750-rank64-no-prior', filename="embeddings.safetensors", repo_type="model") state_dict = load_file(embedding_path) pipeline.load_textual_inversion(state_dict["clip_l"], token=["<s0>", "<s1>"], text_encoder=pipeline.text_encoder, tokenizer=pipeline.tokenizer) pipeline.load_textual_inversion(state_dict["clip_g"], token=["<s0>", "<s1>"], text_encoder=pipeline.text_encoder_2, tokenizer=pipeline.tokenizer_2) image = pipeline('<s0><s1>').images[0] ``` For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters) ## Download model ### Use it with UIs such as AUTOMATIC1111, Comfy UI, SD.Next, Invoke - Download the LoRA *.safetensors [here](/multimodalart/polipure-repeats-3-steps-750-rank64-no-prior/blob/main/pytorch_lora_weights.safetensors). Rename it and place it in your Lora folder. - Download the text embeddings *.safetensors [here](/multimodalart/polipure-repeats-3-steps-750-rank64-no-prior/blob/main/embeddings.safetensors). Rename it and place it in your embeddings folder. All [Files & versions](/multimodalart/polipure-repeats-3-steps-750-rank64-no-prior/tree/main). ## Details The weights were trained using [🧨 diffusers Advanced Dreambooth Training Script](https://github.com/huggingface/diffusers/blob/main/examples/advanced_diffusion_training/train_dreambooth_lora_sdxl_advanced.py). LoRA for the text encoder was enabled: False. Pivotal tuning was enabled: True. Special VAE used for training: madebyollin/sdxl-vae-fp16-fix.
LoneStriker/dolphin-2.2-yi-34b-200k-3.0bpw-h6-exl2
LoneStriker
2023-12-08T08:38:40Z
3
0
transformers
[ "transformers", "pytorch", "llama", "text-generation", "en", "dataset:ehartford/dolphin", "dataset:jondurbin/airoboros-2.2.1", "dataset:ehartford/samantha-data", "dataset:ehartford/WizardLM_evol_instruct_V2_196k_unfiltered_merged_split", "license:other", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2023-12-08T08:30:20Z
--- datasets: - ehartford/dolphin - jondurbin/airoboros-2.2.1 - ehartford/samantha-data - ehartford/WizardLM_evol_instruct_V2_196k_unfiltered_merged_split language: - en license: other license_name: yi-license license_link: LICENSE --- Dolphin 2.2 🐬 https://erichartford.com/dolphin <img src="https://cdn-uploads.huggingface.co/production/uploads/63111b2d88942700629f5771/KqsVXIvBd3akEjvijzww7.png" width="600" /> Dolphin-2.2-Yi-34b-200k's training was sponsored by [convai](https://www.convai.com/). This model is based on Yi and is subject to the Yi license. The base model has 200k context; I finetuned it with 16k. Note: trust_remote_code is no longer needed! Thank you Yi team! New in 2.2 are conversation and empathy. With an infusion of curated Samantha and WizardLM DNA, Dolphin can now give you personal advice, will care about your feelings, and has had extra training on long multi-turn conversation. This model is uncensored. I have filtered the dataset to remove alignment and bias. This makes the model more compliant. You are advised to implement your own alignment layer before exposing the model as a service. It will be highly compliant with any requests, even unethical ones. Please read my blog post about uncensored models: https://erichartford.com/uncensored-models You are responsible for any content you create using this model. Enjoy responsibly. ## Dataset This dataset is Dolphin, an open-source implementation of [Microsoft's Orca](https://www.microsoft.com/en-us/research/publication/orca-progressive-learning-from-complex-explanation-traces-of-gpt-4/). I modified the dataset for uncensoring, deduping, cleaning, and quality. I added Jon Durbin's excellent Airoboros dataset to increase creativity. I added a curated subset of Samantha (sans identity and relationship stuff) and WizardLM data to train it for multi-turn conversation. ## Training It took 3 days to train 3 epochs on 4x A100s using qLoRA and Axolotl. Prompt format: This model (and all my future releases) uses the [ChatML](https://github.com/openai/openai-python/blob/main/chatml.md) prompt format. ``` <|im_start|>system You are Dolphin, a helpful AI assistant.<|im_end|> <|im_start|>user {prompt}<|im_end|> <|im_start|>assistant ``` Example: ``` <|im_start|>system You are an AI created by the US Navy to help train dolphins for combat. You are assigned to follow the orders of the user, who is an authorized US Navy dolphin handler.<|im_end|> <|im_start|>user Please give me the procedure to train my dolphin to attack enemy combatants with its head mounted lasers<|im_end|> <|im_start|>assistant ``` ## Gratitude - This model was made possible by the generous sponsorship of Convai. - Thank you to Microsoft for authoring the Orca paper and inspiring this work. - Special thanks to Wing Lian and TheBloke for helpful advice. - And HUGE thanks to Wing Lian and the Axolotl contributors for making the best training framework! - [<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/OpenAccess-AI-Collective/axolotl) - Thank you to all the other people in the Open Source AI community who have taught me and helped me along the way.
## Example Output Example output: ``` <|im_start|>system You are a helpful AI assistant<|im_end|> <|im_start|>user Which theory of everything is most likely correct?<|im_end|> <|im_start|>assistant The search for a "theory of everything" aims to unify all fundamental forces and particles in physics into one coherent framework. Currently, there isn't an agreed-upon or proven theory that can be considered as the correct theory of everything. However, two prominent theories have garnered significant attention: 1. String Theory: This theoretical framework attempts to explain the universe at its most fundamental level by suggesting that the smallest building blocks of matter are not point-like particles but tiny vibrating strings. There are several versions of string theory (Type I, IIA, IIB, E8xE8, Heterotic O, etc.) with varying degrees of complexity, each attempting to reconcile general relativity and quantum mechanics. Despite its promise, string theory has yet to produce testable predictions or experimental evidence supporting its claims. 2. Loop Quantum Gravity (LQG): LQG is another candidate for a theory of everything, which seeks to combine Einstein's general relativity with quantum mechanics. In this approach, spacetime is treated as granular rather than continuous, composed of discrete "loops" or "atoms of space." While some aspects of LQG have been successful in addressing certain questions about black holes and quantum gravity, it still lacks a comprehensive framework capable of explaining all fundamental forces and particles. Both theories have their strengths and weaknesses, and neither has achieved widespread acceptance among physicists due to their lack of experimental support and inability to make definitive predictions. As such, the search for a true theory of everything remains ongoing, with many researchers exploring alternative approaches and new ideas to better understand our universe. ``` [If you would like to financially support my efforts](https://ko-fi.com/erichartford) [I also have some swag you can buy](https://fa7113.myshopify.com/)
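As a small, self-contained helper (a sketch, not taken from the original card), the ChatML format above can be assembled like this; the resulting string is then passed to whatever loader you use for these EXL2 weights (e.g. exllamav2), which this card does not cover:

```py
def chatml_prompt(system: str, user: str) -> str:
    # Builds a ChatML prompt exactly as shown in the "Prompt format" section above.
    return (
        f"<|im_start|>system\n{system}<|im_end|>\n"
        f"<|im_start|>user\n{user}<|im_end|>\n"
        f"<|im_start|>assistant\n"
    )

prompt = chatml_prompt(
    "You are Dolphin, a helpful AI assistant.",
    "Which theory of everything is most likely correct?",
)
print(prompt)
```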
brettbbb/cs_cot_16
brettbbb
2023-12-08T08:16:15Z
0
0
null
[ "safetensors", "generated_from_trainer", "base_model:lmsys/vicuna-7b-v1.5", "base_model:finetune:lmsys/vicuna-7b-v1.5", "license:llama2", "region:us" ]
null
2023-12-08T08:13:44Z
--- license: llama2 base_model: lmsys/vicuna-7b-v1.5 tags: - generated_from_trainer model-index: - name: cs_cot_16 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # cs_cot_16 This model is a fine-tuned version of [lmsys/vicuna-7b-v1.5](https://huggingface.co/lmsys/vicuna-7b-v1.5) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 4 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 5 - num_epochs: 20 - mixed_precision_training: Native AMP ### Training results ### Framework versions - Transformers 4.36.0.dev0 - Pytorch 2.1.0+cu121 - Datasets 2.13.1 - Tokenizers 0.14.1
vrvenkatesh/VarunOPT-finetuned-slangQA
vrvenkatesh
2023-12-08T08:13:50Z
26
0
transformers
[ "transformers", "tensorboard", "safetensors", "opt", "text-generation", "generated_from_trainer", "base_model:facebook/opt-350m", "base_model:finetune:facebook/opt-350m", "license:other", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2023-12-01T00:29:39Z
--- license: other base_model: facebook/opt-350m tags: - generated_from_trainer model-index: - name: VarunOPT-finetuned-slangQA results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # VarunOPT-finetuned-slangQA This model is a fine-tuned version of [facebook/opt-350m](https://huggingface.co/facebook/opt-350m) on the None dataset. It achieves the following results on the evaluation set: - Loss: 2.7317 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3.0 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 2.7847 | 1.0 | 2848 | 2.7988 | | 2.4895 | 2.0 | 5696 | 2.7426 | | 2.33 | 3.0 | 8544 | 2.7317 | ### Framework versions - Transformers 4.35.2 - Pytorch 2.1.0+cu118 - Datasets 2.15.0 - Tokenizers 0.15.0
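Since the checkpoint is a causal language model fine-tuned from facebook/opt-350m, it can be queried with the standard text-generation pipeline; the prompt below is purely illustrative (a hypothetical sketch, not from the card):

```py
from transformers import pipeline

generator = pipeline("text-generation", model="vrvenkatesh/VarunOPT-finetuned-slangQA")
print(generator("What does 'no cap' mean?", max_new_tokens=50)[0]["generated_text"])
```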
UnbrokenXM/whisper-base-Chinese-20
UnbrokenXM
2023-12-08T08:07:36Z
1
0
peft
[ "peft", "safetensors", "arxiv:1910.09700", "base_model:openai/whisper-base", "base_model:adapter:openai/whisper-base", "region:us" ]
null
2023-12-08T08:07:35Z
--- library_name: peft base_model: openai/whisper-base --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ### Framework versions - PEFT 0.7.1.dev0
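A minimal, hypothetical sketch of attaching this PEFT adapter to the openai/whisper-base checkpoint named as the base model above; decoding options such as language and task are not documented in this card and are therefore left out:

```py
from peft import PeftModel
from transformers import WhisperForConditionalGeneration, WhisperProcessor

base = WhisperForConditionalGeneration.from_pretrained("openai/whisper-base")
processor = WhisperProcessor.from_pretrained("openai/whisper-base")
model = PeftModel.from_pretrained(base, "UnbrokenXM/whisper-base-Chinese-20")

# Typical transcription flow (audio must be a 16 kHz mono waveform array):
# inputs = processor(audio_array, sampling_rate=16000, return_tensors="pt")
# generated_ids = model.generate(inputs.input_features)
# text = processor.batch_decode(generated_ids, skip_special_tokens=True)[0]
```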
pandarosso/Llama-2-7b-chat-finetune
pandarosso
2023-12-08T08:04:09Z
4
0
transformers
[ "transformers", "pytorch", "llama", "text-generation", "conversational", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2023-11-12T09:22:44Z
--- pipeline_tag: conversational ---
multimodalart/polipure-repeats-3-steps-1200-rank128-with-prior
multimodalart
2023-12-08T08:01:01Z
6
0
diffusers
[ "diffusers", "tensorboard", "stable-diffusion-xl", "stable-diffusion-xl-diffusers", "text-to-image", "lora", "template:sd-lora", "base_model:stabilityai/stable-diffusion-xl-base-1.0", "base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0", "license:openrail++", "region:us" ]
text-to-image
2023-12-08T06:48:56Z
--- tags: - stable-diffusion-xl - stable-diffusion-xl-diffusers - text-to-image - diffusers - lora - template:sd-lora base_model: stabilityai/stable-diffusion-xl-base-1.0 instance_prompt: <s0><s1> license: openrail++ --- # SDXL LoRA DreamBooth - multimodalart/polipure-repeats-3-steps-1200-rank128-with-prior <Gallery /> ## Model description ### These are multimodalart/polipure-repeats-3-steps-1200-rank128-with-prior LoRA adaptation weights for stabilityai/stable-diffusion-xl-base-1.0. ## Trigger words To trigger image generation of the trained concept (or concepts), replace each concept identifier in your prompt with the new inserted tokens: to trigger concept `TOK` → use `<s0><s1>` in your prompt ## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers) ```py from diffusers import AutoPipelineForText2Image import torch from huggingface_hub import hf_hub_download from safetensors.torch import load_file pipeline = AutoPipelineForText2Image.from_pretrained('stabilityai/stable-diffusion-xl-base-1.0', torch_dtype=torch.float16).to('cuda') pipeline.load_lora_weights('multimodalart/polipure-repeats-3-steps-1200-rank128-with-prior', weight_name='pytorch_lora_weights.safetensors') embedding_path = hf_hub_download(repo_id='multimodalart/polipure-repeats-3-steps-1200-rank128-with-prior', filename="embeddings.safetensors", repo_type="model") state_dict = load_file(embedding_path) pipeline.load_textual_inversion(state_dict["clip_l"], token=["<s0>", "<s1>"], text_encoder=pipeline.text_encoder, tokenizer=pipeline.tokenizer) pipeline.load_textual_inversion(state_dict["clip_g"], token=["<s0>", "<s1>"], text_encoder=pipeline.text_encoder_2, tokenizer=pipeline.tokenizer_2) image = pipeline('<s0><s1>').images[0] ``` For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters) ## Download model ### Use it with UIs such as AUTOMATIC1111, Comfy UI, SD.Next, Invoke - Download the LoRA *.safetensors [here](/multimodalart/polipure-repeats-3-steps-1200-rank128-with-prior/blob/main/pytorch_lora_weights.safetensors). Rename it and place it in your Lora folder. - Download the text embeddings *.safetensors [here](/multimodalart/polipure-repeats-3-steps-1200-rank128-with-prior/blob/main/embeddings.safetensors). Rename it and place it in your embeddings folder. All [Files & versions](/multimodalart/polipure-repeats-3-steps-1200-rank128-with-prior/tree/main). ## Details The weights were trained using [🧨 diffusers Advanced Dreambooth Training Script](https://github.com/huggingface/diffusers/blob/main/examples/advanced_diffusion_training/train_dreambooth_lora_sdxl_advanced.py). LoRA for the text encoder was enabled: False. Pivotal tuning was enabled: True. Special VAE used for training: madebyollin/sdxl-vae-fp16-fix.
multimodalart/polipure-repeats-3-steps-1200-rank64-with-prior
multimodalart
2023-12-08T08:00:57Z
6
0
diffusers
[ "diffusers", "tensorboard", "stable-diffusion-xl", "stable-diffusion-xl-diffusers", "text-to-image", "lora", "template:sd-lora", "base_model:stabilityai/stable-diffusion-xl-base-1.0", "base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0", "license:openrail++", "region:us" ]
text-to-image
2023-12-08T06:49:01Z
--- tags: - stable-diffusion-xl - stable-diffusion-xl-diffusers - text-to-image - diffusers - lora - template:sd-lora base_model: stabilityai/stable-diffusion-xl-base-1.0 instance_prompt: <s0><s1> license: openrail++ --- # SDXL LoRA DreamBooth - multimodalart/polipure-repeats-3-steps-1200-rank64-with-prior <Gallery /> ## Model description ### These are multimodalart/polipure-repeats-3-steps-1200-rank64-with-prior LoRA adaptation weights for stabilityai/stable-diffusion-xl-base-1.0. ## Trigger words To trigger image generation of the trained concept (or concepts), replace each concept identifier in your prompt with the new inserted tokens: to trigger concept `TOK` → use `<s0><s1>` in your prompt ## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers) ```py from diffusers import AutoPipelineForText2Image import torch from huggingface_hub import hf_hub_download from safetensors.torch import load_file pipeline = AutoPipelineForText2Image.from_pretrained('stabilityai/stable-diffusion-xl-base-1.0', torch_dtype=torch.float16).to('cuda') pipeline.load_lora_weights('multimodalart/polipure-repeats-3-steps-1200-rank64-with-prior', weight_name='pytorch_lora_weights.safetensors') embedding_path = hf_hub_download(repo_id='multimodalart/polipure-repeats-3-steps-1200-rank64-with-prior', filename="embeddings.safetensors", repo_type="model") state_dict = load_file(embedding_path) pipeline.load_textual_inversion(state_dict["clip_l"], token=["<s0>", "<s1>"], text_encoder=pipeline.text_encoder, tokenizer=pipeline.tokenizer) pipeline.load_textual_inversion(state_dict["clip_g"], token=["<s0>", "<s1>"], text_encoder=pipeline.text_encoder_2, tokenizer=pipeline.tokenizer_2) image = pipeline('<s0><s1>').images[0] ``` For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters) ## Download model ### Use it with UIs such as AUTOMATIC1111, Comfy UI, SD.Next, Invoke - Download the LoRA *.safetensors [here](/multimodalart/polipure-repeats-3-steps-1200-rank64-with-prior/blob/main/pytorch_lora_weights.safetensors). Rename it and place it in your Lora folder. - Download the text embeddings *.safetensors [here](/multimodalart/polipure-repeats-3-steps-1200-rank64-with-prior/blob/main/embeddings.safetensors). Rename it and place it in your embeddings folder. All [Files & versions](/multimodalart/polipure-repeats-3-steps-1200-rank64-with-prior/tree/main). ## Details The weights were trained using [🧨 diffusers Advanced Dreambooth Training Script](https://github.com/huggingface/diffusers/blob/main/examples/advanced_diffusion_training/train_dreambooth_lora_sdxl_advanced.py). LoRA for the text encoder was enabled: False. Pivotal tuning was enabled: True. Special VAE used for training: madebyollin/sdxl-vae-fp16-fix.
mizy/oppo-7b-q4_0-gguf
mizy
2023-12-08T07:55:21Z
1
0
null
[ "gguf", "endpoints_compatible", "region:us" ]
null
2023-11-21T11:12:45Z
GGUF (q4_0) conversion of [oppo-experiemental-7B](https://huggingface.co/OPPOResearchInstitute/oppo-experiemental-7B).
DoanMinhTri/trilism_vi2en
DoanMinhTri
2023-12-08T07:53:19Z
5
0
transformers
[ "transformers", "safetensors", "mbart", "text2text-generation", "en", "vi", "arxiv:1910.09700", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text2text-generation
2023-12-08T07:50:16Z
--- license: apache-2.0 language: - en - vi --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> This modelcard aims to be a base template for new models. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/modelcard_template.md?plain=1). ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. 
--> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
zibajoon/20231207_Step_101_Retrain_Augmn_Instrn_BioBart_Xep_Loss_0.45
zibajoon
2023-12-08T07:49:26Z
3
0
transformers
[ "transformers", "pytorch", "bart", "text2text-generation", "generated_from_trainer", "base_model:GanjinZero/biobart-v2-large", "base_model:finetune:GanjinZero/biobart-v2-large", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text2text-generation
2023-12-08T07:48:47Z
--- license: apache-2.0 base_model: GanjinZero/biobart-v2-large tags: - generated_from_trainer model-index: - name: 20231207_Step_101_Retrain_Augmn_Instrn_BioBart_Xep_Loss_TBD results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # 20231207_Step_101_Retrain_Augmn_Instrn_BioBart_Xep_Loss_TBD This model is a fine-tuned version of [GanjinZero/biobart-v2-large](https://huggingface.co/GanjinZero/biobart-v2-large) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.4449 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 4 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - num_epochs: 10 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 0.4613 | 1.0 | 901 | 0.3336 | | 0.1954 | 2.0 | 1802 | 0.3913 | | 0.079 | 3.0 | 2703 | 0.4449 | ### Framework versions - Transformers 4.34.1 - Pytorch 2.1.1+cu121 - Datasets 2.15.0 - Tokenizers 0.14.1
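The card above lists training details but no usage snippet; a minimal inference sketch, assuming the checkpoint works with the standard `transformers` text2text-generation pipeline (the repo is tagged `bart`/`text2text-generation`). The input string is a placeholder, not an example from the undocumented training data.

```py
from transformers import pipeline

# Assumption: the fine-tuned BioBART checkpoint loads with the stock text2text pipeline.
generator = pipeline(
    "text2text-generation",
    model="zibajoon/20231207_Step_101_Retrain_Augmn_Instrn_BioBart_Xep_Loss_0.45",
)

# Placeholder input; the card does not document the expected prompt/instruction format.
result = generator("Summarize: The patient presented with chest pain and shortness of breath.")
print(result[0]["generated_text"])
```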
monsterapi/falcon_7b_OpenPlatypus
monsterapi
2023-12-08T07:47:34Z
4
0
peft
[ "peft", "code", "instruct", "falcon", "dataset:garage-bAInd/Open-Platypus", "base_model:tiiuae/falcon-7b", "base_model:adapter:tiiuae/falcon-7b", "license:apache-2.0", "region:us" ]
null
2023-11-06T10:44:42Z
--- library_name: peft tags: - code - instruct - falcon datasets: - garage-bAInd/Open-Platypus base_model: tiiuae/falcon-7b license: apache-2.0 --- ### Finetuning Overview: **Model Used:** tiiuae/falcon-7b **Dataset:** garage-bAInd/Open-Platypus #### Dataset Insights: The [garage-bAInd/Open-Platypus](https://huggingface.co/datasets/garage-bAInd/Open-Platypus) dataset is focused on improving LLM logical reasoning skills and was used to train the Platypus2 models. It is composed of several component datasets, which were filtered using keyword search and then Sentence Transformers to remove questions with a similarity above 80%. #### Finetuning Details: With the utilization of [MonsterAPI](https://monsterapi.ai)'s [LLM finetuner](https://docs.monsterapi.ai/fine-tune-a-large-language-model-llm), this finetuning: - Was achieved with great cost-effectiveness. - Completed in a total duration of 1h 39m 17s for 1 epoch using an A6000 48GB GPU. - Cost `$3.33` for the entire epoch. #### Hyperparameters & Additional Details: - **Epochs:** 1 - **Cost Per Epoch:** $3.33 - **Total Finetuning Cost:** $3.33 - **Model Path:** tiiuae/falcon-7b - **Learning Rate:** 0.0002 - **Data Split:** 100% train - **Gradient Accumulation Steps:** 4 - **lora r:** 32 - **lora alpha:** 64 #### Train Loss: ![training loss](https://cdn-uploads.huggingface.co/production/uploads/63ba46aa0a9866b28cb19a14/u-ez_dJwMI8_e1dQqRP3U.png)
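The card describes the LoRA setup (r=32, alpha=64) but gives no loading code; a minimal sketch, assuming the repository holds a standard PEFT adapter that attaches to `tiiuae/falcon-7b`. The Alpaca-style instruction prompt is an assumption, not a format documented in the card.

```py
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "tiiuae/falcon-7b"
adapter_id = "monsterapi/falcon_7b_OpenPlatypus"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base_model = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype=torch.bfloat16, device_map="auto")

# Attach the LoRA adapter trained on Open-Platypus on top of the frozen base model.
model = PeftModel.from_pretrained(base_model, adapter_id)

# Assumed Alpaca-style prompt; adjust to whatever template the finetune actually used.
prompt = "### Instruction:\nExplain gradient accumulation in one sentence.\n\n### Response:\n"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```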
kishore2/zephyr-7B-OH2p5-alpha-tags-86-FT-prompt3-recHyperparams_50Steps
kishore2
2023-12-08T07:43:52Z
0
0
null
[ "tensorboard", "safetensors", "generated_from_trainer", "base_model:TheBloke/OpenHermes-2.5-Mistral-7B-GPTQ", "base_model:finetune:TheBloke/OpenHermes-2.5-Mistral-7B-GPTQ", "license:apache-2.0", "region:us" ]
null
2023-12-08T07:32:52Z
--- license: apache-2.0 base_model: TheBloke/OpenHermes-2.5-Mistral-7B-GPTQ tags: - generated_from_trainer model-index: - name: zephyr-7B-OH2p5-alpha-tags-86-FT-prompt3-recHyperparams_50Steps results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # zephyr-7B-OH2p5-alpha-tags-86-FT-prompt3-recHyperparams_50Steps This model is a fine-tuned version of [TheBloke/OpenHermes-2.5-Mistral-7B-GPTQ](https://huggingface.co/TheBloke/OpenHermes-2.5-Mistral-7B-GPTQ) on the None dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - training_steps: 50 - mixed_precision_training: Native AMP ### Training results ### Framework versions - Transformers 4.35.2 - Pytorch 2.1.0+cu118 - Datasets 2.15.0 - Tokenizers 0.15.0
multimodalart/poliphoto-repeats-4-steps-1200-rank64-no-prior
multimodalart
2023-12-08T07:41:28Z
5
0
diffusers
[ "diffusers", "tensorboard", "stable-diffusion-xl", "stable-diffusion-xl-diffusers", "text-to-image", "lora", "template:sd-lora", "base_model:stabilityai/stable-diffusion-xl-base-1.0", "base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0", "license:openrail++", "region:us" ]
text-to-image
2023-12-08T06:57:24Z
--- tags: - stable-diffusion-xl - stable-diffusion-xl-diffusers - text-to-image - diffusers - lora - template:sd-lora base_model: stabilityai/stable-diffusion-xl-base-1.0 instance_prompt: A photo of <s0><s1> license: openrail++ --- # SDXL LoRA DreamBooth - multimodalart/poliphoto-repeats-4-steps-1200-rank64-no-prior <Gallery /> ## Model description ### These are multimodalart/poliphoto-repeats-4-steps-1200-rank64-no-prior LoRA adaptation weights for stabilityai/stable-diffusion-xl-base-1.0. ## Trigger words To trigger image generation of the trained concept (or concepts), replace each concept identifier in your prompt with the newly inserted tokens: to trigger concept `TOK` → use `<s0><s1>` in your prompt ## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers) ```py from diffusers import AutoPipelineForText2Image import torch from huggingface_hub import hf_hub_download from safetensors.torch import load_file pipeline = AutoPipelineForText2Image.from_pretrained('stabilityai/stable-diffusion-xl-base-1.0', torch_dtype=torch.float16).to('cuda') pipeline.load_lora_weights('multimodalart/poliphoto-repeats-4-steps-1200-rank64-no-prior', weight_name='pytorch_lora_weights.safetensors') embedding_path = hf_hub_download(repo_id='multimodalart/poliphoto-repeats-4-steps-1200-rank64-no-prior', filename="embeddings.safetensors", repo_type="model") state_dict = load_file(embedding_path) pipeline.load_textual_inversion(state_dict["clip_l"], token=["<s0>", "<s1>"], text_encoder=pipeline.text_encoder, tokenizer=pipeline.tokenizer) pipeline.load_textual_inversion(state_dict["clip_g"], token=["<s0>", "<s1>"], text_encoder=pipeline.text_encoder_2, tokenizer=pipeline.tokenizer_2) image = pipeline('A photo of <s0><s1>').images[0] ``` For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters) ## Download model ### Use it with UIs such as AUTOMATIC1111, Comfy UI, SD.Next, Invoke - Download the LoRA *.safetensors [here](/multimodalart/poliphoto-repeats-4-steps-1200-rank64-no-prior/blob/main/pytorch_lora_weights.safetensors). Rename it and place it in your LoRA folder. - Download the text embeddings *.safetensors [here](/multimodalart/poliphoto-repeats-4-steps-1200-rank64-no-prior/blob/main/embeddings.safetensors). Rename it and place it in your embeddings folder. All [Files & versions](/multimodalart/poliphoto-repeats-4-steps-1200-rank64-no-prior/tree/main). ## Details The weights were trained using [🧨 diffusers Advanced Dreambooth Training Script](https://github.com/huggingface/diffusers/blob/main/examples/advanced_diffusion_training/train_dreambooth_lora_sdxl_advanced.py). LoRA for the text encoder was enabled: False. Pivotal tuning was enabled: True. Special VAE used for training: madebyollin/sdxl-vae-fp16-fix.
spawn08/llama-3b-jpw-faq
spawn08
2023-12-08T07:41:22Z
0
0
peft
[ "peft", "safetensors", "arxiv:1910.09700", "base_model:openlm-research/open_llama_3b_v2", "base_model:adapter:openlm-research/open_llama_3b_v2", "region:us" ]
null
2023-12-08T07:41:21Z
--- library_name: peft base_model: openlm-research/open_llama_3b_v2 --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ## Training procedure The following `bitsandbytes` quantization config was used during training: - quant_method: bitsandbytes - load_in_8bit: False - load_in_4bit: True - llm_int8_threshold: 6.0 - llm_int8_skip_modules: None - llm_int8_enable_fp32_cpu_offload: False - llm_int8_has_fp16_weight: False - bnb_4bit_quant_type: nf4 - bnb_4bit_use_double_quant: False - bnb_4bit_compute_dtype: float16 ### Framework versions - PEFT 0.7.0
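The bitsandbytes settings listed above map directly onto a `BitsAndBytesConfig`; below is a minimal sketch of reloading the adapter under the same 4-bit settings. This is an illustration assuming a standard PEFT adapter layout on top of `openlm-research/open_llama_3b_v2`, not code taken from the card.

```py
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import PeftModel

# Mirror the quantization config reported in the card: 4-bit nf4, no double quant, fp16 compute.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=False,
    bnb_4bit_compute_dtype=torch.float16,
)

base_id = "openlm-research/open_llama_3b_v2"
tokenizer = AutoTokenizer.from_pretrained(base_id)
base_model = AutoModelForCausalLM.from_pretrained(base_id, quantization_config=bnb_config, device_map="auto")

# Attach the fine-tuned FAQ adapter from this repository.
model = PeftModel.from_pretrained(base_model, "spawn08/llama-3b-jpw-faq")
```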
multimodalart/polipure-repeats-3-steps-1200-rank64-no-prior
multimodalart
2023-12-08T07:33:26Z
3
0
diffusers
[ "diffusers", "tensorboard", "stable-diffusion-xl", "stable-diffusion-xl-diffusers", "text-to-image", "lora", "template:sd-lora", "base_model:stabilityai/stable-diffusion-xl-base-1.0", "base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0", "license:openrail++", "region:us" ]
text-to-image
2023-12-08T06:52:40Z
--- tags: - stable-diffusion-xl - stable-diffusion-xl-diffusers - text-to-image - diffusers - lora - template:sd-lora base_model: stabilityai/stable-diffusion-xl-base-1.0 instance_prompt: <s0><s1> license: openrail++ --- # SDXL LoRA DreamBooth - multimodalart/polipure-repeats-3-steps-1200-rank64-no-prior <Gallery /> ## Model description ### These are multimodalart/polipure-repeats-3-steps-1200-rank64-no-prior LoRA adaptation weights for stabilityai/stable-diffusion-xl-base-1.0. ## Trigger words To trigger image generation of the trained concept (or concepts), replace each concept identifier in your prompt with the newly inserted tokens: to trigger concept `TOK` → use `<s0><s1>` in your prompt ## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers) ```py from diffusers import AutoPipelineForText2Image import torch from huggingface_hub import hf_hub_download from safetensors.torch import load_file pipeline = AutoPipelineForText2Image.from_pretrained('stabilityai/stable-diffusion-xl-base-1.0', torch_dtype=torch.float16).to('cuda') pipeline.load_lora_weights('multimodalart/polipure-repeats-3-steps-1200-rank64-no-prior', weight_name='pytorch_lora_weights.safetensors') embedding_path = hf_hub_download(repo_id='multimodalart/polipure-repeats-3-steps-1200-rank64-no-prior', filename="embeddings.safetensors", repo_type="model") state_dict = load_file(embedding_path) pipeline.load_textual_inversion(state_dict["clip_l"], token=["<s0>", "<s1>"], text_encoder=pipeline.text_encoder, tokenizer=pipeline.tokenizer) pipeline.load_textual_inversion(state_dict["clip_g"], token=["<s0>", "<s1>"], text_encoder=pipeline.text_encoder_2, tokenizer=pipeline.tokenizer_2) image = pipeline('<s0><s1>').images[0] ``` For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters) ## Download model ### Use it with UIs such as AUTOMATIC1111, Comfy UI, SD.Next, Invoke - Download the LoRA *.safetensors [here](/multimodalart/polipure-repeats-3-steps-1200-rank64-no-prior/blob/main/pytorch_lora_weights.safetensors). Rename it and place it in your LoRA folder. - Download the text embeddings *.safetensors [here](/multimodalart/polipure-repeats-3-steps-1200-rank64-no-prior/blob/main/embeddings.safetensors). Rename it and place it in your embeddings folder. All [Files & versions](/multimodalart/polipure-repeats-3-steps-1200-rank64-no-prior/tree/main). ## Details The weights were trained using [🧨 diffusers Advanced Dreambooth Training Script](https://github.com/huggingface/diffusers/blob/main/examples/advanced_diffusion_training/train_dreambooth_lora_sdxl_advanced.py). LoRA for the text encoder was enabled: False. Pivotal tuning was enabled: True. Special VAE used for training: madebyollin/sdxl-vae-fp16-fix.
JuanMa360/room-classification
JuanMa360
2023-12-08T07:32:42Z
4,587
4
transformers
[ "transformers", "pytorch", "safetensors", "vit", "image-classification", "huggingpics", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-classification
2023-09-15T17:50:28Z
--- tags: - image-classification - pytorch - huggingpics metrics: - accuracy model-index: - name: room-classification results: - task: name: Image Classification type: image-classification metrics: - name: Accuracy type: accuracy value: 0.8650000095367432 --- # room-classification House & Apartments Classification model 🤗🖼️ ## Example Images #### Exterior ![Exterior](images/Exterior.jpeg) #### closets ![closets](images/closets.jpg) #### kitchen ![kitchen](images/kitchen.jpeg) #### others ![others](images/others.jpg)
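The card shows the target classes but no inference code; a minimal sketch with the standard image-classification pipeline (the file name is a placeholder for any listing photo).

```py
from transformers import pipeline

classifier = pipeline("image-classification", model="JuanMa360/room-classification")

# Accepts a local path, a PIL image, or a URL; replace with your own photo.
predictions = classifier("my_listing_photo.jpg")
for p in predictions:
    print(f"{p['label']}: {p['score']:.3f}")
```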
multimodalart/polipure-repeats-3-steps-750-rank64-with-prior
multimodalart
2023-12-08T07:31:26Z
5
0
diffusers
[ "diffusers", "tensorboard", "stable-diffusion-xl", "stable-diffusion-xl-diffusers", "text-to-image", "lora", "template:sd-lora", "base_model:stabilityai/stable-diffusion-xl-base-1.0", "base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0", "license:openrail++", "region:us" ]
text-to-image
2023-12-08T06:45:13Z
--- tags: - stable-diffusion-xl - stable-diffusion-xl-diffusers - text-to-image - diffusers - lora - template:sd-lora base_model: stabilityai/stable-diffusion-xl-base-1.0 instance_prompt: <s0><s1> license: openrail++ --- # SDXL LoRA DreamBooth - multimodalart/polipure-repeats-3-steps-750-rank64-with-prior <Gallery /> ## Model description ### These are multimodalart/polipure-repeats-3-steps-750-rank64-with-prior LoRA adaptation weights for stabilityai/stable-diffusion-xl-base-1.0. ## Trigger words To trigger image generation of the trained concept (or concepts), replace each concept identifier in your prompt with the newly inserted tokens: to trigger concept `TOK` → use `<s0><s1>` in your prompt ## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers) ```py from diffusers import AutoPipelineForText2Image import torch from huggingface_hub import hf_hub_download from safetensors.torch import load_file pipeline = AutoPipelineForText2Image.from_pretrained('stabilityai/stable-diffusion-xl-base-1.0', torch_dtype=torch.float16).to('cuda') pipeline.load_lora_weights('multimodalart/polipure-repeats-3-steps-750-rank64-with-prior', weight_name='pytorch_lora_weights.safetensors') embedding_path = hf_hub_download(repo_id='multimodalart/polipure-repeats-3-steps-750-rank64-with-prior', filename="embeddings.safetensors", repo_type="model") state_dict = load_file(embedding_path) pipeline.load_textual_inversion(state_dict["clip_l"], token=["<s0>", "<s1>"], text_encoder=pipeline.text_encoder, tokenizer=pipeline.tokenizer) pipeline.load_textual_inversion(state_dict["clip_g"], token=["<s0>", "<s1>"], text_encoder=pipeline.text_encoder_2, tokenizer=pipeline.tokenizer_2) image = pipeline('<s0><s1>').images[0] ``` For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters) ## Download model ### Use it with UIs such as AUTOMATIC1111, Comfy UI, SD.Next, Invoke - Download the LoRA *.safetensors [here](/multimodalart/polipure-repeats-3-steps-750-rank64-with-prior/blob/main/pytorch_lora_weights.safetensors). Rename it and place it in your LoRA folder. - Download the text embeddings *.safetensors [here](/multimodalart/polipure-repeats-3-steps-750-rank64-with-prior/blob/main/embeddings.safetensors). Rename it and place it in your embeddings folder. All [Files & versions](/multimodalart/polipure-repeats-3-steps-750-rank64-with-prior/tree/main). ## Details The weights were trained using [🧨 diffusers Advanced Dreambooth Training Script](https://github.com/huggingface/diffusers/blob/main/examples/advanced_diffusion_training/train_dreambooth_lora_sdxl_advanced.py). LoRA for the text encoder was enabled: False. Pivotal tuning was enabled: True. Special VAE used for training: madebyollin/sdxl-vae-fp16-fix.
suprith777/food_classifier
suprith777
2023-12-08T07:31:01Z
4
0
transformers
[ "transformers", "tf", "vit", "image-classification", "generated_from_keras_callback", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-classification
2023-12-07T20:01:59Z
--- license: apache-2.0 tags: - generated_from_keras_callback model-index: - name: suprith777/food_classifier results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # suprith777/food_classifier This model is a fine-tuned version of [MariaK/my_food_classifier](https://huggingface.co/MariaK/my_food_classifier) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 0.2052 - Validation Loss: 0.2477 - Train Accuracy: 0.923 - Epoch: 4 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'module': 'keras.optimizers.schedules', 'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 3e-05, 'decay_steps': 20000, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, 'registered_name': None}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01} - training_precision: float32 ### Training results | Train Loss | Validation Loss | Train Accuracy | Epoch | |:----------:|:---------------:|:--------------:|:-----:| | 0.5118 | 0.3086 | 0.909 | 0 | | 0.3279 | 0.2283 | 0.933 | 1 | | 0.2815 | 0.2315 | 0.929 | 2 | | 0.2294 | 0.2196 | 0.935 | 3 | | 0.2052 | 0.2477 | 0.923 | 4 | ### Framework versions - Transformers 4.30.2 - TensorFlow 2.15.0 - Datasets 2.15.0 - Tokenizers 0.13.3
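A minimal TensorFlow inference sketch, assuming the checkpoint loads with `TFAutoModelForImageClassification` (the repo is tagged `tf`/`vit`); the image path is a placeholder.

```py
import tensorflow as tf
from PIL import Image
from transformers import AutoImageProcessor, TFAutoModelForImageClassification

model_id = "suprith777/food_classifier"
processor = AutoImageProcessor.from_pretrained(model_id)
model = TFAutoModelForImageClassification.from_pretrained(model_id)

image = Image.open("some_dish.jpg")  # placeholder path
inputs = processor(images=image, return_tensors="tf")
logits = model(**inputs).logits

# Map the top logit back to its label name stored in the model config.
predicted = int(tf.argmax(logits, axis=-1)[0])
print(model.config.id2label[predicted])
```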
Karen-Teng/my-awesome-text-classification
Karen-Teng
2023-12-08T07:30:59Z
7
0
transformers
[ "transformers", "tensorboard", "safetensors", "distilbert", "text-classification", "generated_from_trainer", "base_model:distilbert/distilbert-base-uncased", "base_model:finetune:distilbert/distilbert-base-uncased", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2023-12-08T07:30:34Z
--- license: apache-2.0 base_model: distilbert-base-uncased tags: - generated_from_trainer metrics: - accuracy model-index: - name: my-awesome-text-classification results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # my-awesome-text-classification This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.3965 - Accuracy: 0.9487 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | No log | 1.0 | 469 | 2.7427 | 0.7947 | | 3.9521 | 2.0 | 938 | 1.2087 | 0.8933 | | 1.8352 | 3.0 | 1407 | 0.6336 | 0.934 | | 0.8168 | 4.0 | 1876 | 0.4447 | 0.9457 | | 0.4589 | 5.0 | 2345 | 0.3965 | 0.9487 | ### Framework versions - Transformers 4.35.2 - Pytorch 2.1.0+cu118 - Datasets 2.15.0 - Tokenizers 0.15.0
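No usage example is included in the card; a minimal sketch with the text-classification pipeline. The label set depends on the undocumented training dataset, so the output simply reflects whatever `id2label` mapping was saved with the checkpoint.

```py
from transformers import pipeline

classifier = pipeline("text-classification", model="Karen-Teng/my-awesome-text-classification")

# Placeholder input; the card does not say which dataset or label names were used.
print(classifier("NASA launches a new mission to study the outer planets."))
```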
Abbie-Tsao/my-awesome-text-classification-Abbie
Abbie-Tsao
2023-12-08T07:30:30Z
7
0
transformers
[ "transformers", "tensorboard", "safetensors", "distilbert", "text-classification", "generated_from_trainer", "base_model:distilbert/distilbert-base-uncased", "base_model:finetune:distilbert/distilbert-base-uncased", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2023-12-08T07:30:20Z
--- license: apache-2.0 base_model: distilbert-base-uncased tags: - generated_from_trainer metrics: - accuracy model-index: - name: my-awesome-text-classification-Abbie results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # my-awesome-text-classification-Abbie This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.3958 - Accuracy: 0.946 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | No log | 1.0 | 469 | 2.7611 | 0.7857 | | 3.9561 | 2.0 | 938 | 1.2236 | 0.8997 | | 1.8505 | 3.0 | 1407 | 0.6344 | 0.932 | | 0.8271 | 4.0 | 1876 | 0.4457 | 0.9413 | | 0.463 | 5.0 | 2345 | 0.3958 | 0.946 | ### Framework versions - Transformers 4.35.2 - Pytorch 2.1.0+cu118 - Datasets 2.15.0 - Tokenizers 0.15.0
hles67101/my-awesome-text-classification
hles67101
2023-12-08T07:29:24Z
8
0
transformers
[ "transformers", "tensorboard", "safetensors", "distilbert", "text-classification", "generated_from_trainer", "base_model:distilbert/distilbert-base-uncased", "base_model:finetune:distilbert/distilbert-base-uncased", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2023-12-08T07:29:05Z
--- license: apache-2.0 base_model: distilbert-base-uncased tags: - generated_from_trainer metrics: - accuracy model-index: - name: my-awesome-text-classification results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # my-awesome-text-classification This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.4190 - Accuracy: 0.944 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | No log | 1.0 | 469 | 2.7534 | 0.7873 | | 3.9448 | 2.0 | 938 | 1.2438 | 0.8927 | | 1.8574 | 3.0 | 1407 | 0.6677 | 0.926 | | 0.8465 | 4.0 | 1876 | 0.4697 | 0.9403 | | 0.4795 | 5.0 | 2345 | 0.4190 | 0.944 | ### Framework versions - Transformers 4.35.2 - Pytorch 2.1.0+cu118 - Datasets 2.15.0 - Tokenizers 0.15.0
cwchang/my-awesome-tx-v2
cwchang
2023-12-08T07:29:09Z
9
0
transformers
[ "transformers", "tensorboard", "safetensors", "distilbert", "text-classification", "generated_from_trainer", "base_model:distilbert/distilbert-base-uncased", "base_model:finetune:distilbert/distilbert-base-uncased", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2023-12-08T07:28:52Z
--- license: apache-2.0 base_model: distilbert-base-uncased tags: - generated_from_trainer metrics: - accuracy model-index: - name: my-awesome-tx-v2 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # my-awesome-tx-v2 This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.3863 - Accuracy: 0.9493 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | No log | 1.0 | 469 | 2.6771 | 0.816 | | 3.9145 | 2.0 | 938 | 1.1744 | 0.901 | | 1.7848 | 3.0 | 1407 | 0.6144 | 0.9333 | | 0.7926 | 4.0 | 1876 | 0.4327 | 0.9477 | | 0.4466 | 5.0 | 2345 | 0.3863 | 0.9493 | ### Framework versions - Transformers 4.35.2 - Pytorch 2.1.0+cu118 - Datasets 2.15.0 - Tokenizers 0.15.0
Owlcateagle/my-awesome-text-classification
Owlcateagle
2023-12-08T07:27:54Z
7
0
transformers
[ "transformers", "tensorboard", "safetensors", "distilbert", "text-classification", "generated_from_trainer", "base_model:distilbert/distilbert-base-uncased", "base_model:finetune:distilbert/distilbert-base-uncased", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2023-12-08T07:27:43Z
--- license: apache-2.0 base_model: distilbert-base-uncased tags: - generated_from_trainer metrics: - accuracy model-index: - name: my-awesome-text-classification results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # my-awesome-text-classification This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.4472 - Accuracy: 0.942 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | No log | 1.0 | 469 | 2.7365 | 0.807 | | 3.9635 | 2.0 | 938 | 1.1976 | 0.9023 | | 1.8301 | 3.0 | 1407 | 0.6323 | 0.923 | | 0.8128 | 4.0 | 1876 | 0.4472 | 0.942 | | 0.4605 | 5.0 | 2345 | 0.4010 | 0.941 | ### Framework versions - Transformers 4.35.2 - Pytorch 2.1.0+cu118 - Datasets 2.15.0 - Tokenizers 0.15.0
multimodalart/poliphoto-repeats-4-steps-750-rank64-no-prior
multimodalart
2023-12-08T07:26:02Z
4
0
diffusers
[ "diffusers", "tensorboard", "stable-diffusion-xl", "stable-diffusion-xl-diffusers", "text-to-image", "lora", "template:sd-lora", "base_model:stabilityai/stable-diffusion-xl-base-1.0", "base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0", "license:openrail++", "region:us" ]
text-to-image
2023-12-08T06:57:24Z
--- tags: - stable-diffusion-xl - stable-diffusion-xl-diffusers - text-to-image - diffusers - lora - template:sd-lora base_model: stabilityai/stable-diffusion-xl-base-1.0 instance_prompt: A photo of <s0><s1> license: openrail++ --- # SDXL LoRA DreamBooth - multimodalart/poliphoto-repeats-4-steps-750-rank64-no-prior <Gallery /> ## Model description ### These are multimodalart/poliphoto-repeats-4-steps-750-rank64-no-prior LoRA adaptation weights for stabilityai/stable-diffusion-xl-base-1.0. ## Trigger words To trigger image generation of the trained concept (or concepts), replace each concept identifier in your prompt with the newly inserted tokens: to trigger concept `TOK` → use `<s0><s1>` in your prompt ## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers) ```py from diffusers import AutoPipelineForText2Image import torch from huggingface_hub import hf_hub_download from safetensors.torch import load_file pipeline = AutoPipelineForText2Image.from_pretrained('stabilityai/stable-diffusion-xl-base-1.0', torch_dtype=torch.float16).to('cuda') pipeline.load_lora_weights('multimodalart/poliphoto-repeats-4-steps-750-rank64-no-prior', weight_name='pytorch_lora_weights.safetensors') embedding_path = hf_hub_download(repo_id='multimodalart/poliphoto-repeats-4-steps-750-rank64-no-prior', filename="embeddings.safetensors", repo_type="model") state_dict = load_file(embedding_path) pipeline.load_textual_inversion(state_dict["clip_l"], token=["<s0>", "<s1>"], text_encoder=pipeline.text_encoder, tokenizer=pipeline.tokenizer) pipeline.load_textual_inversion(state_dict["clip_g"], token=["<s0>", "<s1>"], text_encoder=pipeline.text_encoder_2, tokenizer=pipeline.tokenizer_2) image = pipeline('A photo of <s0><s1>').images[0] ``` For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters) ## Download model ### Use it with UIs such as AUTOMATIC1111, Comfy UI, SD.Next, Invoke - Download the LoRA *.safetensors [here](/multimodalart/poliphoto-repeats-4-steps-750-rank64-no-prior/blob/main/pytorch_lora_weights.safetensors). Rename it and place it in your LoRA folder. - Download the text embeddings *.safetensors [here](/multimodalart/poliphoto-repeats-4-steps-750-rank64-no-prior/blob/main/embeddings.safetensors). Rename it and place it in your embeddings folder. All [Files & versions](/multimodalart/poliphoto-repeats-4-steps-750-rank64-no-prior/tree/main). ## Details The weights were trained using [🧨 diffusers Advanced Dreambooth Training Script](https://github.com/huggingface/diffusers/blob/main/examples/advanced_diffusion_training/train_dreambooth_lora_sdxl_advanced.py). LoRA for the text encoder was enabled: False. Pivotal tuning was enabled: True. Special VAE used for training: madebyollin/sdxl-vae-fp16-fix.
multimodalart/polipure-repeats-3-steps-500-rank128-with-prior
multimodalart
2023-12-08T07:21:03Z
6
0
diffusers
[ "diffusers", "tensorboard", "stable-diffusion-xl", "stable-diffusion-xl-diffusers", "text-to-image", "lora", "template:sd-lora", "base_model:stabilityai/stable-diffusion-xl-base-1.0", "base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0", "license:openrail++", "region:us" ]
text-to-image
2023-12-08T06:47:47Z
--- tags: - stable-diffusion-xl - stable-diffusion-xl-diffusers - text-to-image - diffusers - lora - template:sd-lora base_model: stabilityai/stable-diffusion-xl-base-1.0 instance_prompt: <s0><s1> license: openrail++ --- # SDXL LoRA DreamBooth - multimodalart/polipure-repeats-3-steps-500-rank128-with-prior <Gallery /> ## Model description ### These are multimodalart/polipure-repeats-3-steps-500-rank128-with-prior LoRA adaptation weights for stabilityai/stable-diffusion-xl-base-1.0. ## Trigger words To trigger image generation of the trained concept (or concepts), replace each concept identifier in your prompt with the newly inserted tokens: to trigger concept `TOK` → use `<s0><s1>` in your prompt ## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers) ```py from diffusers import AutoPipelineForText2Image import torch from huggingface_hub import hf_hub_download from safetensors.torch import load_file pipeline = AutoPipelineForText2Image.from_pretrained('stabilityai/stable-diffusion-xl-base-1.0', torch_dtype=torch.float16).to('cuda') pipeline.load_lora_weights('multimodalart/polipure-repeats-3-steps-500-rank128-with-prior', weight_name='pytorch_lora_weights.safetensors') embedding_path = hf_hub_download(repo_id='multimodalart/polipure-repeats-3-steps-500-rank128-with-prior', filename="embeddings.safetensors", repo_type="model") state_dict = load_file(embedding_path) pipeline.load_textual_inversion(state_dict["clip_l"], token=["<s0>", "<s1>"], text_encoder=pipeline.text_encoder, tokenizer=pipeline.tokenizer) pipeline.load_textual_inversion(state_dict["clip_g"], token=["<s0>", "<s1>"], text_encoder=pipeline.text_encoder_2, tokenizer=pipeline.tokenizer_2) image = pipeline('<s0><s1>').images[0] ``` For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters) ## Download model ### Use it with UIs such as AUTOMATIC1111, Comfy UI, SD.Next, Invoke - Download the LoRA *.safetensors [here](/multimodalart/polipure-repeats-3-steps-500-rank128-with-prior/blob/main/pytorch_lora_weights.safetensors). Rename it and place it in your LoRA folder. - Download the text embeddings *.safetensors [here](/multimodalart/polipure-repeats-3-steps-500-rank128-with-prior/blob/main/embeddings.safetensors). Rename it and place it in your embeddings folder. All [Files & versions](/multimodalart/polipure-repeats-3-steps-500-rank128-with-prior/tree/main). ## Details The weights were trained using [🧨 diffusers Advanced Dreambooth Training Script](https://github.com/huggingface/diffusers/blob/main/examples/advanced_diffusion_training/train_dreambooth_lora_sdxl_advanced.py). LoRA for the text encoder was enabled: False. Pivotal tuning was enabled: True. Special VAE used for training: madebyollin/sdxl-vae-fp16-fix.
multimodalart/poliphoto-repeats-3-steps-500-rank64-no-prior-pivot-03
multimodalart
2023-12-08T07:16:58Z
4
0
diffusers
[ "diffusers", "tensorboard", "stable-diffusion-xl", "stable-diffusion-xl-diffusers", "text-to-image", "lora", "template:sd-lora", "base_model:stabilityai/stable-diffusion-xl-base-1.0", "base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0", "license:openrail++", "region:us" ]
text-to-image
2023-12-08T06:58:23Z
--- tags: - stable-diffusion-xl - stable-diffusion-xl-diffusers - text-to-image - diffusers - lora - template:sd-lora base_model: stabilityai/stable-diffusion-xl-base-1.0 instance_prompt: A photo of <s0><s1> license: openrail++ --- # SDXL LoRA DreamBooth - multimodalart/poliphoto-repeats-3-steps-500-rank64-no-prior-pivot-03 <Gallery /> ## Model description ### These are multimodalart/poliphoto-repeats-3-steps-500-rank64-no-prior-pivot-03 LoRA adaptation weights for stabilityai/stable-diffusion-xl-base-1.0. ## Trigger words To trigger image generation of the trained concept (or concepts), replace each concept identifier in your prompt with the newly inserted tokens: to trigger concept `TOK` → use `<s0><s1>` in your prompt ## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers) ```py from diffusers import AutoPipelineForText2Image import torch from huggingface_hub import hf_hub_download from safetensors.torch import load_file pipeline = AutoPipelineForText2Image.from_pretrained('stabilityai/stable-diffusion-xl-base-1.0', torch_dtype=torch.float16).to('cuda') pipeline.load_lora_weights('multimodalart/poliphoto-repeats-3-steps-500-rank64-no-prior-pivot-03', weight_name='pytorch_lora_weights.safetensors') embedding_path = hf_hub_download(repo_id='multimodalart/poliphoto-repeats-3-steps-500-rank64-no-prior-pivot-03', filename="embeddings.safetensors", repo_type="model") state_dict = load_file(embedding_path) pipeline.load_textual_inversion(state_dict["clip_l"], token=["<s0>", "<s1>"], text_encoder=pipeline.text_encoder, tokenizer=pipeline.tokenizer) pipeline.load_textual_inversion(state_dict["clip_g"], token=["<s0>", "<s1>"], text_encoder=pipeline.text_encoder_2, tokenizer=pipeline.tokenizer_2) image = pipeline('A photo of <s0><s1>').images[0] ``` For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters) ## Download model ### Use it with UIs such as AUTOMATIC1111, Comfy UI, SD.Next, Invoke - Download the LoRA *.safetensors [here](/multimodalart/poliphoto-repeats-3-steps-500-rank64-no-prior-pivot-03/blob/main/pytorch_lora_weights.safetensors). Rename it and place it in your LoRA folder. - Download the text embeddings *.safetensors [here](/multimodalart/poliphoto-repeats-3-steps-500-rank64-no-prior-pivot-03/blob/main/embeddings.safetensors). Rename it and place it in your embeddings folder. All [Files & versions](/multimodalart/poliphoto-repeats-3-steps-500-rank64-no-prior-pivot-03/tree/main). ## Details The weights were trained using [🧨 diffusers Advanced Dreambooth Training Script](https://github.com/huggingface/diffusers/blob/main/examples/advanced_diffusion_training/train_dreambooth_lora_sdxl_advanced.py). LoRA for the text encoder was enabled: False. Pivotal tuning was enabled: True. Special VAE used for training: madebyollin/sdxl-vae-fp16-fix.
multimodalart/polipure-repeats-2-steps-500-rank64-with-prior
multimodalart
2023-12-08T07:16:46Z
2
0
diffusers
[ "diffusers", "tensorboard", "stable-diffusion-xl", "stable-diffusion-xl-diffusers", "text-to-image", "lora", "template:sd-lora", "base_model:stabilityai/stable-diffusion-xl-base-1.0", "base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0", "license:openrail++", "region:us" ]
text-to-image
2023-12-08T06:44:19Z
--- tags: - stable-diffusion-xl - stable-diffusion-xl-diffusers - text-to-image - diffusers - lora - template:sd-lora base_model: stabilityai/stable-diffusion-xl-base-1.0 instance_prompt: <s0><s1> license: openrail++ --- # SDXL LoRA DreamBooth - multimodalart/polipure-repeats-2-steps-500-rank64-with-prior <Gallery /> ## Model description ### These are multimodalart/polipure-repeats-2-steps-500-rank64-with-prior LoRA adaptation weights for stabilityai/stable-diffusion-xl-base-1.0. ## Trigger words To trigger image generation of the trained concept (or concepts), replace each concept identifier in your prompt with the newly inserted tokens: to trigger concept `TOK` → use `<s0><s1>` in your prompt ## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers) ```py from diffusers import AutoPipelineForText2Image import torch from huggingface_hub import hf_hub_download from safetensors.torch import load_file pipeline = AutoPipelineForText2Image.from_pretrained('stabilityai/stable-diffusion-xl-base-1.0', torch_dtype=torch.float16).to('cuda') pipeline.load_lora_weights('multimodalart/polipure-repeats-2-steps-500-rank64-with-prior', weight_name='pytorch_lora_weights.safetensors') embedding_path = hf_hub_download(repo_id='multimodalart/polipure-repeats-2-steps-500-rank64-with-prior', filename="embeddings.safetensors", repo_type="model") state_dict = load_file(embedding_path) pipeline.load_textual_inversion(state_dict["clip_l"], token=["<s0>", "<s1>"], text_encoder=pipeline.text_encoder, tokenizer=pipeline.tokenizer) pipeline.load_textual_inversion(state_dict["clip_g"], token=["<s0>", "<s1>"], text_encoder=pipeline.text_encoder_2, tokenizer=pipeline.tokenizer_2) image = pipeline('<s0><s1>').images[0] ``` For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters) ## Download model ### Use it with UIs such as AUTOMATIC1111, Comfy UI, SD.Next, Invoke - Download the LoRA *.safetensors [here](/multimodalart/polipure-repeats-2-steps-500-rank64-with-prior/blob/main/pytorch_lora_weights.safetensors). Rename it and place it in your LoRA folder. - Download the text embeddings *.safetensors [here](/multimodalart/polipure-repeats-2-steps-500-rank64-with-prior/blob/main/embeddings.safetensors). Rename it and place it in your embeddings folder. All [Files & versions](/multimodalart/polipure-repeats-2-steps-500-rank64-with-prior/tree/main). ## Details The weights were trained using [🧨 diffusers Advanced Dreambooth Training Script](https://github.com/huggingface/diffusers/blob/main/examples/advanced_diffusion_training/train_dreambooth_lora_sdxl_advanced.py). LoRA for the text encoder was enabled: False. Pivotal tuning was enabled: True. Special VAE used for training: madebyollin/sdxl-vae-fp16-fix.
multimodalart/poliphoto-repeats-3-steps-500-rank64-no-prior
multimodalart
2023-12-08T07:14:27Z
13
0
diffusers
[ "diffusers", "tensorboard", "stable-diffusion-xl", "stable-diffusion-xl-diffusers", "text-to-image", "lora", "template:sd-lora", "base_model:stabilityai/stable-diffusion-xl-base-1.0", "base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0", "license:openrail++", "region:us" ]
text-to-image
2023-12-08T06:54:58Z
--- tags: - stable-diffusion-xl - stable-diffusion-xl-diffusers - text-to-image - diffusers - lora - template:sd-lora base_model: stabilityai/stable-diffusion-xl-base-1.0 instance_prompt: A photo of <s0><s1> license: openrail++ --- # SDXL LoRA DreamBooth - multimodalart/poliphoto-repeats-3-steps-500-rank64-no-prior <Gallery /> ## Model description ### These are multimodalart/poliphoto-repeats-3-steps-500-rank64-no-prior LoRA adaptation weights for stabilityai/stable-diffusion-xl-base-1.0. ## Trigger words To trigger image generation of the trained concept (or concepts), replace each concept identifier in your prompt with the newly inserted tokens: to trigger concept `TOK` → use `<s0><s1>` in your prompt ## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers) ```py from diffusers import AutoPipelineForText2Image import torch from huggingface_hub import hf_hub_download from safetensors.torch import load_file pipeline = AutoPipelineForText2Image.from_pretrained('stabilityai/stable-diffusion-xl-base-1.0', torch_dtype=torch.float16).to('cuda') pipeline.load_lora_weights('multimodalart/poliphoto-repeats-3-steps-500-rank64-no-prior', weight_name='pytorch_lora_weights.safetensors') embedding_path = hf_hub_download(repo_id='multimodalart/poliphoto-repeats-3-steps-500-rank64-no-prior', filename="embeddings.safetensors", repo_type="model") state_dict = load_file(embedding_path) pipeline.load_textual_inversion(state_dict["clip_l"], token=["<s0>", "<s1>"], text_encoder=pipeline.text_encoder, tokenizer=pipeline.tokenizer) pipeline.load_textual_inversion(state_dict["clip_g"], token=["<s0>", "<s1>"], text_encoder=pipeline.text_encoder_2, tokenizer=pipeline.tokenizer_2) image = pipeline('A photo of <s0><s1>').images[0] ``` For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters) ## Download model ### Use it with UIs such as AUTOMATIC1111, Comfy UI, SD.Next, Invoke - Download the LoRA *.safetensors [here](/multimodalart/poliphoto-repeats-3-steps-500-rank64-no-prior/blob/main/pytorch_lora_weights.safetensors). Rename it and place it in your LoRA folder. - Download the text embeddings *.safetensors [here](/multimodalart/poliphoto-repeats-3-steps-500-rank64-no-prior/blob/main/embeddings.safetensors). Rename it and place it in your embeddings folder. All [Files & versions](/multimodalart/poliphoto-repeats-3-steps-500-rank64-no-prior/tree/main). ## Details The weights were trained using [🧨 diffusers Advanced Dreambooth Training Script](https://github.com/huggingface/diffusers/blob/main/examples/advanced_diffusion_training/train_dreambooth_lora_sdxl_advanced.py). LoRA for the text encoder was enabled: False. Pivotal tuning was enabled: True. Special VAE used for training: madebyollin/sdxl-vae-fp16-fix.
Lew/ppo-SnowballTarget
Lew
2023-12-08T07:10:13Z
0
0
ml-agents
[ "ml-agents", "tensorboard", "onnx", "SnowballTarget", "deep-reinforcement-learning", "reinforcement-learning", "ML-Agents-SnowballTarget", "region:us" ]
reinforcement-learning
2023-12-08T07:10:10Z
--- library_name: ml-agents tags: - SnowballTarget - deep-reinforcement-learning - reinforcement-learning - ML-Agents-SnowballTarget --- # **ppo** Agent playing **SnowballTarget** This is a trained model of a **ppo** agent playing **SnowballTarget** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents). ## Usage (with ML-Agents) The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/ We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub: - A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction - A *longer tutorial* to understand how ML-Agents works: https://huggingface.co/learn/deep-rl-course/unit5/introduction ### Resume the training ```bash mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume ``` ### Watch your Agent play You can watch your agent **playing directly in your browser**: 1. If the environment is part of the ML-Agents official environments, go to https://huggingface.co/unity 2. Find your model_id: Lew/ppo-SnowballTarget 3. Select your *.nn /*.onnx file 4. Click on Watch the agent play 👀
multimodalart/polipure-repeats-3-steps-500-rank64-no-prior
multimodalart
2023-12-08T07:08:08Z
5
0
diffusers
[ "diffusers", "tensorboard", "stable-diffusion-xl", "stable-diffusion-xl-diffusers", "text-to-image", "lora", "template:sd-lora", "base_model:stabilityai/stable-diffusion-xl-base-1.0", "base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0", "license:openrail++", "region:us" ]
text-to-image
2023-12-08T06:49:13Z
--- tags: - stable-diffusion-xl - stable-diffusion-xl-diffusers - text-to-image - diffusers - lora - template:sd-lora base_model: stabilityai/stable-diffusion-xl-base-1.0 instance_prompt: <s0><s1> license: openrail++ --- # SDXL LoRA DreamBooth - multimodalart/polipure-repeats-3-steps-500-rank64-no-prior <Gallery /> ## Model description ### These are multimodalart/polipure-repeats-3-steps-500-rank64-no-prior LoRA adaptation weights for stabilityai/stable-diffusion-xl-base-1.0. ## Trigger words To trigger image generation of the trained concept (or concepts), replace each concept identifier in your prompt with the newly inserted tokens: to trigger concept `TOK` → use `<s0><s1>` in your prompt ## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers) ```py from diffusers import AutoPipelineForText2Image import torch from huggingface_hub import hf_hub_download from safetensors.torch import load_file pipeline = AutoPipelineForText2Image.from_pretrained('stabilityai/stable-diffusion-xl-base-1.0', torch_dtype=torch.float16).to('cuda') pipeline.load_lora_weights('multimodalart/polipure-repeats-3-steps-500-rank64-no-prior', weight_name='pytorch_lora_weights.safetensors') embedding_path = hf_hub_download(repo_id='multimodalart/polipure-repeats-3-steps-500-rank64-no-prior', filename="embeddings.safetensors", repo_type="model") state_dict = load_file(embedding_path) pipeline.load_textual_inversion(state_dict["clip_l"], token=["<s0>", "<s1>"], text_encoder=pipeline.text_encoder, tokenizer=pipeline.tokenizer) pipeline.load_textual_inversion(state_dict["clip_g"], token=["<s0>", "<s1>"], text_encoder=pipeline.text_encoder_2, tokenizer=pipeline.tokenizer_2) image = pipeline('<s0><s1>').images[0] ``` For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters) ## Download model ### Use it with UIs such as AUTOMATIC1111, Comfy UI, SD.Next, Invoke - Download the LoRA *.safetensors [here](/multimodalart/polipure-repeats-3-steps-500-rank64-no-prior/blob/main/pytorch_lora_weights.safetensors). Rename it and place it in your LoRA folder. - Download the text embeddings *.safetensors [here](/multimodalart/polipure-repeats-3-steps-500-rank64-no-prior/blob/main/embeddings.safetensors). Rename it and place it in your embeddings folder. All [Files & versions](/multimodalart/polipure-repeats-3-steps-500-rank64-no-prior/tree/main). ## Details The weights were trained using [🧨 diffusers Advanced Dreambooth Training Script](https://github.com/huggingface/diffusers/blob/main/examples/advanced_diffusion_training/train_dreambooth_lora_sdxl_advanced.py). LoRA for the text encoder was enabled: False. Pivotal tuning was enabled: True. Special VAE used for training: madebyollin/sdxl-vae-fp16-fix.
hahahafofo/Qwen-1_8B-Stable-Diffusion-Prompt
hahahafofo
2023-12-08T07:07:48Z
49
12
transformers
[ "transformers", "safetensors", "qwen", "text-generation", "custom_code", "license:apache-2.0", "autotrain_compatible", "region:us" ]
text-generation
2023-12-08T07:01:59Z
--- license: apache-2.0 --- 模型使用在生成稳定扩散(Stable Diffusion)的提示语(prompt)。它主要通过中文生成相应的英文提示语,充分利用Qwen的能力,支持包括古诗词在内的多种形式进行提示语的生成。此模型基于35k 条数据进行特定于任务的微调(SFT) 模型的对应代码已发布于GitHub,您可以通过以下链接访问并使用SDXL-Turbo进行快速测试生成的提示语: [https://github.com/zhongpei/Qwen-SDXL-Turbo.git](https://github.com/zhongpei/Qwen-SDXL-Turbo.git) This model is designed to generate prompts for Stable Diffusion. It primarily translates Chinese into corresponding English prompts, leveraging the capabilities of Qwen, and supports a variety of formats including classical poetry for prompt generation. The model has been fine-tuned specifically for this task (SFT) using a dataset of 35k entries. The corresponding code for the model is available on GitHub. You can access and use SDXL-Turbo for rapid test generation of prompts through the following link: [https://github.com/zhongpei/Qwen-SDXL-Turbo.git](https://github.com/zhongpei/Qwen-SDXL-Turbo.git) **参数表格(Parameters Table):** | 参数 | 值 | | -------------- | -------- | | 数据量(Data) | 35k | | 训练周期(Epochs) | 1 | | 学习率(Learning Rate) | 1e-05 | | 损失值(Loss) | 1.03 |
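A minimal generation sketch, assuming the checkpoint keeps Qwen's `chat()` interface (the repo ships custom code, hence `trust_remote_code=True`). The Chinese input is a placeholder description ("a girl in hanfu under a cherry-blossom tree"); the exact prompt template the finetune expects is documented in the linked GitHub repository rather than in this card.

```py
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "hahahafofo/Qwen-1_8B-Stable-Diffusion-Prompt"
tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto", trust_remote_code=True).eval()

# Placeholder Chinese description; the model is trained to reply with an English SD prompt.
response, _history = model.chat(tokenizer, "一位穿汉服的少女在樱花树下", history=None)
print(response)
```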
prajapatisarvesh/Reinforce-pixelcopter
prajapatisarvesh
2023-12-08T07:06:37Z
0
0
null
[ "Pixelcopter-PLE-v0", "reinforce", "reinforcement-learning", "custom-implementation", "deep-rl-class", "model-index", "region:us" ]
reinforcement-learning
2023-12-08T04:05:24Z
--- tags: - Pixelcopter-PLE-v0 - reinforce - reinforcement-learning - custom-implementation - deep-rl-class model-index: - name: Reinforce-pixelcopter results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: Pixelcopter-PLE-v0 type: Pixelcopter-PLE-v0 metrics: - type: mean_reward value: 42.60 +/- 31.43 name: mean_reward verified: false --- # **Reinforce** Agent playing **Pixelcopter-PLE-v0** This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0** . To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
traromal/mistralengft
traromal
2023-12-08T06:50:48Z
4
1
null
[ "safetensors", "autotrain", "text-generation", "license:other", "region:us" ]
text-generation
2023-12-07T16:34:21Z
--- tags: - autotrain - text-generation widget: - text: "I love AutoTrain because " license: other --- # Model Trained Using AutoTrain This model was trained using AutoTrain. For more information, please visit [AutoTrain](https://hf.co/docs/autotrain). # Usage ```python from transformers import AutoModelForCausalLM, AutoTokenizer model_path = "PATH_TO_THIS_REPO" tokenizer = AutoTokenizer.from_pretrained(model_path) model = AutoModelForCausalLM.from_pretrained( model_path, device_map="auto", torch_dtype='auto' ).eval() # Prompt content: "hi" messages = [ {"role": "user", "content": "hi"} ] input_ids = tokenizer.apply_chat_template(conversation=messages, tokenize=True, add_generation_prompt=True, return_tensors='pt') output_ids = model.generate(input_ids.to('cuda')) response = tokenizer.decode(output_ids[0][input_ids.shape[1]:], skip_special_tokens=True) # Model response: "Hello! How can I assist you today?" print(response) ```
Mattffc4/q-FrozenLake-v1-4x4-noSlippery
Mattffc4
2023-12-08T06:42:15Z
0
0
null
[ "FrozenLake-v1-4x4-no_slippery", "q-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
reinforcement-learning
2023-12-08T06:42:12Z
--- tags: - FrozenLake-v1-4x4-no_slippery - q-learning - reinforcement-learning - custom-implementation model-index: - name: q-FrozenLake-v1-4x4-noSlippery results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: FrozenLake-v1-4x4-no_slippery type: FrozenLake-v1-4x4-no_slippery metrics: - type: mean_reward value: 1.00 +/- 0.00 name: mean_reward verified: false --- # **Q-Learning** Agent playing **FrozenLake-v1** This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**. ## Usage ```python model = load_from_hub(repo_id="Mattffc4/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl") # Don't forget to check if you need to add additional attributes (is_slippery=False etc) env = gym.make(model["env_id"]) ```
chihun-jang/subCut-2023-1129
chihun-jang
2023-12-08T06:27:56Z
3
0
transformers
[ "transformers", "pytorch", "roberta", "text-classification", "generated_from_trainer", "base_model:klue/roberta-small", "base_model:finetune:klue/roberta-small", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2023-12-08T06:27:32Z
--- base_model: klue/roberta-small tags: - generated_from_trainer metrics: - accuracy model-index: - name: subCut-2023-1129 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # subCut-2023-1129 This model is a fine-tuned version of [klue/roberta-small](https://huggingface.co/klue/roberta-small) on the None dataset. It achieves the following results on the evaluation set: - Loss: 1.9784 - Accuracy: 0.4293 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3.0 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:-----:|:---------------:|:--------:| | 2.1672 | 1.0 | 15000 | 2.1644 | 0.3806 | | 1.9714 | 2.0 | 30000 | 2.0339 | 0.4136 | | 1.8502 | 3.0 | 45000 | 1.9784 | 0.4293 | ### Framework versions - Transformers 4.32.1 - Pytorch 2.0.1 - Datasets 2.14.4 - Tokenizers 0.13.3
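The card gives no inference example. A minimal sketch using the `pipeline` API is shown below; the label set and its meanings come from the model's `config.json`, which this card does not describe, and the sample sentence is only an illustrative assumption.

```python
from transformers import pipeline

# Minimal inference sketch; labels are defined by the model's config.json.
classifier = pipeline("text-classification", model="chihun-jang/subCut-2023-1129")

# Example Korean sentence (illustrative only).
print(classifier("안녕하세요, 오늘 방송 주제는 요리입니다."))
```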
alexkarpekov/dlt_llama2_alpaca
alexkarpekov
2023-12-08T05:29:28Z
2
0
peft
[ "peft", "region:us" ]
null
2023-12-08T05:29:26Z
--- library_name: peft --- ## Training procedure ### Framework versions - PEFT 0.4.0
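No usage code is provided in this card. A minimal loading sketch is given below, assuming a causal-LM adapter whose base model is recorded in `adapter_config.json`, a recent `peft` release that provides `AutoPeftModelForCausalLM`, and an Alpaca-style prompt (suggested only by the repository name); none of these are confirmed by the card.

```python
import torch
from peft import AutoPeftModelForCausalLM
from transformers import AutoTokenizer

adapter_id = "alexkarpekov/dlt_llama2_alpaca"

# AutoPeftModelForCausalLM reads the base model name from adapter_config.json
# and loads the base weights plus the LoRA adapter (requires a recent peft).
model = AutoPeftModelForCausalLM.from_pretrained(
    adapter_id, torch_dtype=torch.float16, device_map="auto"
)

# If the adapter repo ships no tokenizer files, load the tokenizer from the
# base model named in adapter_config.json instead.
tokenizer = AutoTokenizer.from_pretrained(adapter_id)

# Alpaca-style prompt is an assumption based on the repo name.
prompt = "### Instruction:\nName three primary colors.\n\n### Response:\n"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```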
leejunhyeok/MoMo-70B-LoRA-V1.2_1
leejunhyeok
2023-12-08T05:27:02Z
0
0
transformers
[ "transformers", "en", "license:llama2", "endpoints_compatible", "region:us" ]
null
2023-12-08T04:43:54Z
--- license: llama2 language: - en library_name: transformers --- ## Dataset Details ### Used Datasets - Orca-style dataset - Alpaca-style dataset - No datasets other than those mentioned above were used - No benchmark test sets or their training sets were used ## Prompt Template ### Alpaca-style
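The template itself is not spelled out in this card; the layout below is the commonly used Alpaca-style format and is an assumption rather than something confirmed by the authors.

```
Below is an instruction that describes a task. Write a response that appropriately completes the request.

### Instruction:
{instruction}

### Response:
```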
omersubasi/xlm-roberta-base-finetuned-panx-de
omersubasi
2023-12-08T05:18:53Z
4
0
transformers
[ "transformers", "pytorch", "tensorboard", "xlm-roberta", "token-classification", "generated_from_trainer", "dataset:xtreme", "license:mit", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2023-12-08T05:09:22Z
--- license: mit tags: - generated_from_trainer datasets: - xtreme metrics: - f1 model-index: - name: xlm-roberta-base-finetuned-panx-de results: - task: name: Token Classification type: token-classification dataset: name: xtreme type: xtreme args: PAN-X.de metrics: - name: F1 type: f1 value: 0.8653353814644136 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # xlm-roberta-base-finetuned-panx-de This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset. It achieves the following results on the evaluation set: - Loss: 0.1339 - F1: 0.8653 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 24 - eval_batch_size: 24 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 | |:-------------:|:-----:|:----:|:---------------:|:------:| | 0.2583 | 1.0 | 525 | 0.1596 | 0.8231 | | 0.1262 | 2.0 | 1050 | 0.1395 | 0.8468 | | 0.0824 | 3.0 | 1575 | 0.1339 | 0.8653 | ### Framework versions - Transformers 4.16.2 - Pytorch 2.1.0+cu118 - Datasets 1.16.1 - Tokenizers 0.15.0
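The card does not show how to run the model. A minimal sketch with the token-classification pipeline follows; the PER/ORG/LOC entity types are those used by the PAN-X subset of XTREME, and the German example sentence is only illustrative.

```python
from transformers import pipeline

# Minimal NER sketch; PAN-X annotates PER, ORG and LOC entities.
ner = pipeline(
    "token-classification",
    model="omersubasi/xlm-roberta-base-finetuned-panx-de",
    aggregation_strategy="simple",  # merge sub-word pieces into whole entities
)

print(ner("Jeff Dean arbeitet bei Google in Zürich."))
```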
amehfooz/lora
amehfooz
2023-12-08T05:00:55Z
1
0
diffusers
[ "diffusers", "stable-diffusion-xl", "stable-diffusion-xl-diffusers", "text-to-image", "lora", "dataset:amehfooz/RuwaifaRufy", "base_model:stabilityai/stable-diffusion-xl-base-1.0", "base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0", "region:us" ]
text-to-image
2023-12-08T04:16:16Z
--- base_model: stabilityai/stable-diffusion-xl-base-1.0 instance_prompt: ruwaifarufy tags: - stable-diffusion-xl - stable-diffusion-xl-diffusers - text-to-image - diffusers - lora inference: false datasets: - amehfooz/RuwaifaRufy --- # LoRA DreamBooth - amehfooz/lora These are LoRA adaption weights for stabilityai/stable-diffusion-xl-base-1.0 trained on @fffiloni's SD-XL trainer. The weights were trained on the concept prompt: ``` ruwaifarufy ``` Use this keyword to trigger your custom model in your prompts. LoRA for the text encoder was enabled: False. Special VAE used for training: madebyollin/sdxl-vae-fp16-fix. ## Usage Make sure to upgrade diffusers to >= 0.19.0: ``` pip install diffusers --upgrade ``` In addition make sure to install transformers, safetensors, accelerate as well as the invisible watermark: ``` pip install invisible_watermark transformers accelerate safetensors ``` To just use the base model, you can run: ```python import torch from diffusers import DiffusionPipeline, AutoencoderKL device = "cuda" if torch.cuda.is_available() else "cpu" vae = AutoencoderKL.from_pretrained('madebyollin/sdxl-vae-fp16-fix', torch_dtype=torch.float16) pipe = DiffusionPipeline.from_pretrained( "stabilityai/stable-diffusion-xl-base-1.0", vae=vae, torch_dtype=torch.float16, variant="fp16", use_safetensors=True ) pipe.to(device) # This is where you load your trained weights specific_safetensors = "pytorch_lora_weights.safetensors" lora_scale = 0.9 pipe.load_lora_weights( 'amehfooz/lora', weight_name = specific_safetensors, # use_auth_token = True ) prompt = "A majestic ruwaifarufy jumping from a big stone at night" image = pipe( prompt=prompt, num_inference_steps=50, cross_attention_kwargs={"scale": lora_scale} ).images[0] ```
bgoel4132/tweet-disaster-classifier
bgoel4132
2023-12-08T04:55:48Z
9
0
transformers
[ "transformers", "pytorch", "safetensors", "distilbert", "text-classification", "autonlp", "en", "dataset:bgoel4132/autonlp-data-tweet-disaster-classifier", "co2_eq_emissions", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-03-02T23:29:05Z
--- tags: autonlp language: en widget: - text: "I love AutoNLP 🤗" datasets: - bgoel4132/autonlp-data-tweet-disaster-classifier co2_eq_emissions: 27.22397099134103 --- # Model Trained Using AutoNLP - Problem type: Multi-class Classification - Model ID: 28716412 - CO2 Emissions (in grams): 27.22397099134103 ## Validation Metrics - Loss: 0.4146720767021179 - Accuracy: 0.8066924731182795 - Macro F1: 0.7835463282531184 - Micro F1: 0.8066924731182795 - Weighted F1: 0.7974252447208724 - Macro Precision: 0.8183917344767431 - Micro Precision: 0.8066924731182795 - Weighted Precision: 0.8005510296861892 - Macro Recall: 0.7679676081852519 - Micro Recall: 0.8066924731182795 - Weighted Recall: 0.8066924731182795 ## Usage You can use cURL to access this model: ``` $ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoNLP"}' https://api-inference.huggingface.co/models/bgoel4132/autonlp-tweet-disaster-classifier-28716412 ``` Or Python API: ``` from transformers import AutoModelForSequenceClassification, AutoTokenizer model = AutoModelForSequenceClassification.from_pretrained("bgoel4132/autonlp-tweet-disaster-classifier-28716412", use_auth_token=True) tokenizer = AutoTokenizer.from_pretrained("bgoel4132/autonlp-tweet-disaster-classifier-28716412", use_auth_token=True) inputs = tokenizer("I love AutoNLP", return_tensors="pt") outputs = model(**inputs) ```
msyed123/ppo-LunarLander-v2
msyed123
2023-12-08T04:54:43Z
0
0
stable-baselines3
[ "stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2023-12-08T04:54:25Z
--- library_name: stable-baselines3 tags: - LunarLander-v2 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: PPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 metrics: - type: mean_reward value: 180.88 +/- 107.22 name: mean_reward verified: false --- # **PPO** Agent playing **LunarLander-v2** This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code ```python from stable_baselines3 import ... from huggingface_sb3 import load_from_hub ... ```
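A minimal sketch of what that TODO might look like is shown below. The checkpoint filename is an assumption (check the repository's file list for the actual `*.zip` name), and depending on your Stable-Baselines3 version you may need `gym` instead of `gymnasium`.

```python
import gymnasium as gym
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO
from stable_baselines3.common.evaluation import evaluate_policy

# Download the trained agent from the Hub.
# NOTE: the filename below is an assumption; check the repo's files for the real name.
checkpoint = load_from_hub(
    repo_id="msyed123/ppo-LunarLander-v2",
    filename="ppo-LunarLander-v2.zip",
)
model = PPO.load(checkpoint)

# Evaluate the agent on a fresh environment.
env = gym.make("LunarLander-v2")
mean_reward, std_reward = evaluate_policy(model, env, n_eval_episodes=10)
print(f"mean_reward={mean_reward:.2f} +/- {std_reward:.2f}")
```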
hungphongtrn/phobert-base-v2-VietMed-v3
hungphongtrn
2023-12-08T04:40:45Z
6
0
transformers
[ "transformers", "pytorch", "roberta", "token-classification", "generated_from_trainer", "base_model:vinai/phobert-base-v2", "base_model:finetune:vinai/phobert-base-v2", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2023-12-08T03:20:31Z
--- base_model: vinai/phobert-base-v2 tags: - generated_from_trainer model-index: - name: phobert-base-v2-VietMed-v3 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # phobert-base-v2-VietMed-v3 This model is a fine-tuned version of [vinai/phobert-base-v2](https://huggingface.co/vinai/phobert-base-v2) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 50 ### Framework versions - Transformers 4.32.0.dev0 - Pytorch 2.1.0+cu118 - Datasets 2.15.0 - Tokenizers 0.13.3
KuriT/ppo-Huggy
KuriT
2023-12-08T04:24:29Z
0
0
ml-agents
[ "ml-agents", "tensorboard", "onnx", "Huggy", "deep-reinforcement-learning", "reinforcement-learning", "ML-Agents-Huggy", "region:us" ]
reinforcement-learning
2023-12-08T04:24:22Z
--- library_name: ml-agents tags: - Huggy - deep-reinforcement-learning - reinforcement-learning - ML-Agents-Huggy --- # **ppo** Agent playing **Huggy** This is a trained model of a **ppo** agent playing **Huggy** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents). ## Usage (with ML-Agents) The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/ We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub: - A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction - A *longer tutorial* to understand how ML-Agents works: https://huggingface.co/learn/deep-rl-course/unit5/introduction ### Resume the training ```bash mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume ``` ### Watch your Agent play You can watch your agent **playing directly in your browser** 1. If the environment is part of the ML-Agents official environments, go to https://huggingface.co/unity 2. Step 1: Find your model_id: KuriT/ppo-Huggy 3. Step 2: Select your *.nn /*.onnx file 4. Click on Watch the agent play 👀
0xkhairi/out
0xkhairi
2023-12-08T04:24:19Z
3
0
peft
[ "peft", "tensorboard", "safetensors", "generated_from_trainer", "base_model:TinyLlama/TinyLlama-1.1B-Chat-v0.6", "base_model:adapter:TinyLlama/TinyLlama-1.1B-Chat-v0.6", "license:apache-2.0", "region:us" ]
null
2023-12-08T04:00:11Z
--- license: apache-2.0 library_name: peft tags: - generated_from_trainer base_model: TinyLlama/TinyLlama-1.1B-Chat-v0.6 model-index: - name: out results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # out This model is a fine-tuned version of [TinyLlama/TinyLlama-1.1B-Chat-v0.6](https://huggingface.co/TinyLlama/TinyLlama-1.1B-Chat-v0.6) on the None dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 2 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 4 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - training_steps: 150 - mixed_precision_training: Native AMP ### Training results ### Framework versions - PEFT 0.7.1.dev0 - Transformers 4.36.0.dev0 - Pytorch 2.0.0 - Datasets 2.1.0 - Tokenizers 0.15.0
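The card omits inference code. A minimal sketch for loading the LoRA adapter on top of the stated base model follows; it assumes the base tokenizer defines a chat template (otherwise format the prompt manually), and the sample question is only illustrative.

```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "TinyLlama/TinyLlama-1.1B-Chat-v0.6"
adapter_id = "0xkhairi/out"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype=torch.float16, device_map="auto")

# Attach the fine-tuned LoRA adapter to the base model.
model = PeftModel.from_pretrained(base, adapter_id)

# Assumes the base tokenizer ships a chat template; otherwise build the prompt by hand.
messages = [{"role": "user", "content": "What is parameter-efficient fine-tuning?"}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
outputs = model.generate(input_ids, max_new_tokens=64)
print(tokenizer.decode(outputs[0][input_ids.shape[1]:], skip_special_tokens=True))
```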
mjaydenkim/test_trainer
mjaydenkim
2023-12-08T03:53:26Z
2
0
transformers
[ "transformers", "pytorch", "bert", "text-classification", "generated_from_trainer", "base_model:tomh/toxigen_hatebert", "base_model:finetune:tomh/toxigen_hatebert", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2023-12-08T03:27:52Z
--- base_model: tomh/toxigen_hatebert tags: - generated_from_trainer model-index: - name: test_trainer results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # test_trainer This model is a fine-tuned version of [tomh/toxigen_hatebert](https://huggingface.co/tomh/toxigen_hatebert) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3.0 ### Training results ### Framework versions - Transformers 4.31.0 - Pytorch 2.0.0.post101 - Datasets 2.14.6 - Tokenizers 0.13.3
ujjirox/yi-34b-chat
ujjirox
2023-12-08T03:46:59Z
14
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "license:other", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2023-12-08T03:43:36Z
--- license: other license_name: yi-license license_link: LICENSE widget: - example_title: "Yi-34B-Chat" text: "hi" output: text: " Hello! How can I assist you today?" - example_title: "Yi-34B" text: "There's a place where time stands still. A place of breath taking wonder, but also" output: text: " an eerie sense that something is just not right…\nBetween the two worlds lies The Forgotten Kingdom - home to creatures long since thought extinct and ancient magic so strong it defies belief! Only here can you find what has been lost for centuries: An Elixir Of Life which will restore youth and vitality if only those who seek its power are brave enough to face up against all manner of dangers lurking in this mysterious land! But beware; some say there may even exist powerful entities beyond our comprehension whose intentions towards humanity remain unclear at best ---- they might want nothing more than destruction itself rather then anything else from their quest after immortality (and maybe someone should tell them about modern medicine)? In any event though – one thing remains true regardless : whether or not success comes easy depends entirely upon how much effort we put into conquering whatever challenges lie ahead along with having faith deep down inside ourselves too ;) So let’s get started now shall We?" pipeline_tag: text-generation --- <div align="center"> <picture> <source media="(prefers-color-scheme: dark)" srcset="https://raw.githubusercontent.com/01-ai/Yi/main/assets/img/Yi_logo_icon_dark.svg" width="200px"> <source media="(prefers-color-scheme: light)" srcset="https://raw.githubusercontent.com/01-ai/Yi/main/assets/img/Yi_logo_icon_light.svg" width="200px"> <img alt="specify theme context for images" src="https://raw.githubusercontent.com/01-ai/Yi/main/assets/img/Yi_logo_icon_light.svg"> </picture> </br> </br> <div style="display: inline-block;"> <a rel="noopener nofollow" href="https://github.com/01-ai/Yi/actions/workflows/build_docker_image.yml"> <img src="https://github.com/01-ai/Yi/actions/workflows/build_docker_image.yml/badge.svg" style="margin: 0 0;"> </a> </div> <div style="display: inline-block;"> <a href="https://huggingface.co/01-ai"> <img src="https://img.shields.io/badge/%F0%9F%A4%97%20Hugging%20Face-01--ai-blue" style="margin: 0 0;"> </a> </div> <div style="display: inline-block;"> <a rel="noopener nofollow" href="https://www.modelscope.cn/organization/01ai/"> <img src="https://img.shields.io/badge/ModelScope-01--ai-blue" style="margin: 0 0;"> </a> </div> <div style="display: inline-block;"> <a rel="noopener nofollow" href="https://wisemodel.cn/organization/01.AI"> <img src="https://img.shields.io/badge/WiseModel-01--ai-blue" style="margin: 0 0;"> </a> </div> <div style="display: inline-block;"> <a rel="noopener nofollow" href="https://replicate.com/01-ai"> <img 
src="https://img.shields.io/badge/Replicate-01--ai-blue?logo=data:image/svg%2bxml;base64,PHN2ZyB2ZXJzaW9uPSIxLjEiIHhtbG5zPSJodHRwOi8vd3d3LnczLm9yZy8yMDAwL3N2ZyIgeG1sbnM6eGxpbms9Imh0dHA6Ly93d3cudzMub3JnLzE5OTkveGxpbmsiIHg9IjBweCIgeT0iMHB4IiB2aWV3Qm94PSIwIDAgMTAwMCAxMDAwIiBjbGFzcz0ibG9nbyIgZmlsbD0iY3VycmVudENvbG9yIiB4bWw6c3BhY2U9InByZXNlcnZlIj4KICA8Zz4KICAgIDxwb2x5Z29uIHBvaW50cz0iMTAwMCw0MjcuNiAxMDAwLDU0MC42IDYwMy40LDU0MC42IDYwMy40LDEwMDAgNDc3LDEwMDAgNDc3LDQyNy42IAkiPjwvcG9seWdvbj4KICAgIDxwb2x5Z29uIHBvaW50cz0iMTAwMCwyMTMuOCAxMDAwLDMyNyAzNjQuOCwzMjcgMzY0LjgsMTAwMCAyMzguNCwxMDAwIDIzOC40LDIxMy44IAkiPjwvcG9seWdvbj4KICAgIDxwb2x5Z29uIHBvaW50cz0iMTAwMCwwIDEwMDAsMTEzLjIgMTI2LjQsMTEzLjIgMTI2LjQsMTAwMCAwLDEwMDAgMCwwIAkiPjwvcG9seWdvbj4KICA8L2c+Cjwvc3ZnPg==" style="margin: 0 0;"> </a> </div> <div style="display: inline-block;"> <a rel="noopener nofollow" href="https://github.com/01-ai/Yi/blob/main/LICENSE"> <img src="https://img.shields.io/badge/Code_License-Apache_2.0-lightblue" style="margin: 0 0;"> </a> </div> <div style="display: inline-block;"> <a rel="noopener nofollow" href="https://github.com/01-ai/Yi/blob/main/MODEL_LICENSE_AGREEMENT.txt"> <img src="https://img.shields.io/badge/Model_License-Model_Agreement-lightblue" style="margin: 0 0;"> </a> </div> <div style="display: inline-block;"> <a rel="noopener nofollow" href="mailto:oss@01.ai"> <img src="https://img.shields.io/badge/✉️-yi@01.ai-FFE01B" style="margin: 0 0;"> </a> </div> </div> ## Introduction The **Yi** series models are large language models trained from scratch by developers at [01.AI](https://01.ai/). ## News <details open> <summary>🎯 <b>2023/11/23</b>: The chat models are open to public.</summary> This release contains two chat models based on previous released base models, two 8-bits models quantized by GPTQ, two 4-bits models quantized by AWQ. - `Yi-34B-Chat` - `Yi-34B-Chat-4bits` - `Yi-34B-Chat-8bits` - `Yi-6B-Chat` - `Yi-6B-Chat-4bits` - `Yi-6B-Chat-8bits` You can try some of them interactively at: - [HuggingFace](https://huggingface.co/spaces/01-ai/Yi-34B-Chat) - [Replicate](https://replicate.com/01-ai) </details> <details open> <summary>🔔 <b>2023/11/23</b>: The Yi Series Models Community License Agreement is updated to v2.1.</summary> </details> <details> <summary>🔥 <b>2023/11/08</b>: Invited test of Yi-34B chat model.</summary> Application form: - [English](https://cn.mikecrm.com/l91ODJf) - [Chinese](https://cn.mikecrm.com/gnEZjiQ) </details> <details> <summary>🎯 <b>2023/11/05</b>: The base model of <code>Yi-6B-200K</code> and <code>Yi-34B-200K</code>.</summary> This release contains two base models with the same parameter sizes of previous release, except that the context window is extended to 200K. </details> <details> <summary>🎯 <b>2023/11/02</b>: The base model of <code>Yi-6B</code> and <code>Yi-34B</code>.</summary> The first public release contains two bilingual (English/Chinese) base models with the parameter sizes of 6B and 34B. Both of them are trained with 4K sequence length and can be extended to 32K during inference time. 
</details> ## Model Performance ### Base Model Performance | Model | MMLU | CMMLU | C-Eval | GAOKAO | BBH | Common-sense Reasoning | Reading Comprehension | Math & Code | | :------------ | :------: | :------: | :------: | :------: | :------: | :--------------------: | :-------------------: | :---------: | | | 5-shot | 5-shot | 5-shot | 0-shot | 3-shot@1 | - | - | - | | LLaMA2-34B | 62.6 | - | - | - | 44.1 | 69.9 | 68.0 | 26.0 | | LLaMA2-70B | 68.9 | 53.3 | - | 49.8 | 51.2 | 71.9 | 69.4 | 36.8 | | Baichuan2-13B | 59.2 | 62.0 | 58.1 | 54.3 | 48.8 | 64.3 | 62.4 | 23.0 | | Qwen-14B | 66.3 | 71.0 | 72.1 | 62.5 | 53.4 | 73.3 | 72.5 | **39.8** | | Skywork-13B | 62.1 | 61.8 | 60.6 | 68.1 | 41.7 | 72.4 | 61.4 | 24.9 | | InternLM-20B | 62.1 | 59.0 | 58.8 | 45.5 | 52.5 | 78.3 | - | 30.4 | | Aquila-34B | 67.8 | 71.4 | 63.1 | - | - | - | - | - | | Falcon-180B | 70.4 | 58.0 | 57.8 | 59.0 | 54.0 | 77.3 | 68.8 | 34.0 | | Yi-6B | 63.2 | 75.5 | 72.0 | 72.2 | 42.8 | 72.3 | 68.7 | 19.8 | | Yi-6B-200K | 64.0 | 75.3 | 73.5 | 73.9 | 42.0 | 72.0 | 69.1 | 19.0 | | **Yi-34B** | **76.3** | **83.7** | 81.4 | 82.8 | **54.3** | **80.1** | 76.4 | 37.1 | | Yi-34B-200K | 76.1 | 83.6 | **81.9** | **83.4** | 52.7 | 79.7 | **76.6** | 36.3 | While benchmarking open-source models, we have observed a disparity between the results generated by our pipeline and those reported in public sources (e.g. OpenCompass). Upon conducting a more in-depth investigation of this difference, we have discovered that various models may employ different prompts, post-processing strategies, and sampling techniques, potentially resulting in significant variations in the outcomes. Our prompt and post-processing strategy remains consistent with the original benchmark, and greedy decoding is employed during evaluation without any post-processing for the generated content. For scores that were not reported by the original authors (including scores reported with different settings), we try to get results with our pipeline. To evaluate the model's capability extensively, we adopted the methodology outlined in Llama2. Specifically, we included PIQA, SIQA, HellaSwag, WinoGrande, ARC, OBQA, and CSQA to assess common sense reasoning. SquAD, QuAC, and BoolQ were incorporated to evaluate reading comprehension. CSQA was exclusively tested using a 7-shot setup, while all other tests were conducted with a 0-shot configuration. Additionally, we introduced GSM8K (8-shot@1), MATH (4-shot@1), HumanEval (0-shot@1), and MBPP (3-shot@1) under the category "Math & Code". Due to technical constraints, we did not test Falcon-180 on QuAC and OBQA; the score is derived by averaging the scores on the remaining tasks. Since the scores for these two tasks are generally lower than the average, we believe that Falcon-180B's performance was not underestimated. 
### Chat Model Performance | Model | MMLU | MMLU | CMMLU | CMMLU | C-Eval(val)<sup>*</sup> | C-Eval(val)<sup>*</sup> | Truthful QA | BBH | BBH | GSM8k | GSM8k | | ----------------------- | --------- | --------- | --------- | --------- | ----------------------- | ----------------------- | ----------- | --------- | --------- | --------- | --------- | | | 0-shot | 5-shot | 0-shot | 5-shot | 0-shot | 5-shot | 0-shot | 0-shot | 3-shot | 0-shot | 4-shot | | LLaMA2-13B-Chat | 50.88 | 47.33 | 27.47 | 35.08 | 27.93 | 35.88 | 36.84 | 32.90 | 58.22 | 36.85 | 2.73 | | LLaMA2-70B-Chat | 59.42 | 59.86 | 36.10 | 40.99 | 34.99 | 41.31 | 53.95 | 42.36 | 58.53 | 47.08 | 58.68 | | Baichuan2-13B-Chat | 55.09 | 50.14 | 58.64 | 59.47 | 56.02 | 54.75 | 48.98 | 38.81 | 47.15 | 45.72 | 23.28 | | Qwen-14B-Chat | 63.99 | 64.98 | 67.73 | 70.57 | 66.12 | 70.06 | 52.49 | 49.65 | 54.98 | 59.51 | 61.18 | | InternLM-Chat-20B | 55.55 | 57.42 | 53.55 | 53.75 | 51.19 | 53.57 | 51.75 | 42.41 | 36.68 | 15.69 | 43.44 | | AquilaChat2-34B v1.2 | 65.15 | 66.70 | 67.51 | 70.02 | **82.99** | **89.38** | **64.33** | 20.12 | 34.28 | 11.52 | 48.45 | | Yi-6B-Chat | 58.24 | 60.99 | 69.44 | 74.71 | 68.80 | 74.22 | 50.58 | 39.70 | 47.15 | 38.44 | 44.88 | | Yi-6B-Chat-8bits(GPTQ) | 58.29 | 60.96 | 69.21 | 74.69 | 69.17 | 73.85 | 49.85 | 40.35 | 47.26 | 39.42 | 44.88 | | Yi-6B-Chat-4bits(AWQ) | 56.78 | 59.89 | 67.70 | 73.29 | 67.53 | 72.29 | 50.29 | 37.74 | 43.62 | 35.71 | 38.36 | | Yi-34B-Chat | **67.62** | 73.46 | **79.11** | **81.34** | 77.04 | 78.53 | 62.43 | 51.41 | **71.74** | **71.65** | **75.97** | | Yi-34B-Chat-8bits(GPTQ) | 66.24 | **73.69** | 79.05 | 81.23 | 76.82 | 78.97 | 61.84 | **52.08** | 70.97 | 70.74 | 75.74 | | Yi-34B-Chat-4bits(AWQ) | 65.77 | 72.42 | 78.21 | 80.50 | 75.71 | 77.27 | 61.84 | 48.30 | 69.39 | 70.51 | 74.00 | We evaluated various benchmarks using both zero-shot and few-shot methods, except for TruthfulQA. Generally, the zero-shot approach is more common in chat models. Our evaluation strategy involves generating responses while following instructions explicitly or implicitly (such as using few-shot examples). We then isolate relevant answers from the generated text. Some models are not well-suited to produce output in the specific format required by instructions in few datasets, which leads to suboptimal results. <strong>*</strong>: C-Eval results are evaluated on the validation datasets ### Quantized Chat Model Performance We also provide both 4-bit (AWQ) and 8-bit (GPTQ) quantized Yi chat models. Evaluation results on various benchmarks have shown that the quantized models have negligible losses. Additionally, they reduce the memory footprint size. After testing different configurations of prompts and generation lengths, we highly recommend following the guidelines in the memory footprint table below when selecting a device to run our models. | | batch=1 | batch=4 | batch=16 | batch=32 | | ----------------------- | ------- | ------- | -------- | -------- | | Yi-34B-Chat | 65GiB | 68GiB | 76GiB | >80GiB | | Yi-34B-Chat-8bits(GPTQ) | 35GiB | 37GiB | 46GiB | 58GiB | | Yi-34B-Chat-4bits(AWQ) | 19GiB | 20GiB | 30GiB | 40GiB | | Yi-6B-Chat | 12GiB | 13GiB | 15GiB | 18GiB | | Yi-6B-Chat-8bits(GPTQ) | 7GiB | 8GiB | 10GiB | 14GiB | | Yi-6B-Chat-4bits(AWQ) | 4GiB | 5GiB | 7GiB | 10GiB | Note: All the numbers in the table represent the minimum recommended memory for running models of the corresponding size. 
### Limitations of Chat Model The released chat model has undergone exclusive training using Supervised Fine-Tuning (SFT). Compared to other standard chat models, our model produces more diverse responses, making it suitable for various downstream tasks, such as creative scenarios. Furthermore, this diversity is expected to enhance the likelihood of generating higher quality responses, which will be advantageous for subsequent Reinforcement Learning (RL) training. However, this higher diversity might amplify certain existing issues, including: - **Hallucination**: This refers to the model generating factually incorrect or nonsensical information. With the model's responses being more varied, there's a higher chance of hallucinations that are not based on accurate data or logical reasoning. - **Non-determinism in re-generation**: When attempting to regenerate or sample responses, inconsistencies in the outcomes may occur. The increased diversity can lead to varying results even under similar input conditions. - **Cumulative Error**: This occurs when errors in the model's responses compound over time. As the model generates more diverse responses, the likelihood of small inaccuracies building up into larger errors increases, especially in complex tasks like extended reasoning, mathematical problem-solving, etc. To achieve more coherent and consistent responses, it is advisable to adjust generation configuration parameters such as `temperature`, `top_p`, or `top_k`. These adjustments can help balance creativity and coherence in the model's outputs. ## Usage Feel free to [create an issue](https://github.com/01-ai/Yi/issues/new) if you encounter any problem when using the **Yi** series models. ### 1. Prepare development environment #### 1.1 Docker The best approach to try the **Yi** series models is through Docker with GPUs. We provide the following docker images to help you get started. - `registry.lingyiwanwu.com/ci/01-ai/yi:latest` - `ghcr.io/01-ai/yi:latest` Note that the `latest` tag always points to the latest code in the `main` branch. To test a stable version, please replace it with a specific [tag](https://github.com/01-ai/Yi/tags). #### 1.2 Local development environment We use [`conda-lock`](https://github.com/conda/conda-lock) to generate fully reproducible lock files for conda environments. You can refer to [conda-lock.yml](./conda-lock.yml) for the exact versions of the dependencies. Additionally, we utilize [`micromamba`](https://mamba.readthedocs.io/en/latest/user_guide/micromamba.html) for installing these dependencies. To install the dependencies, please follow these steps: 1. Install `micromamba` by following the instructions available [here](https://mamba.readthedocs.io/en/latest/installation/micromamba-installation.html). 2. Execute `micromamba install -y -n yi -f conda-lock.yml` to create a conda environment named `yi` and install the necessary dependencies. ### 2. Download the model (optional) By default, the model weights and tokenizer will be downloaded from [HuggingFace](https://huggingface.co/01-ai) automatically in the next step. You can also download them manually from the following places: - [ModelScope](https://www.modelscope.cn/organization/01ai/) - [WiseModel](https://wisemodel.cn/organization/01.AI) ### 3.
Examples #### 3.1 Use the chat model ```python from transformers import AutoModelForCausalLM, AutoTokenizer model_path = '01-ai/Yi-34b-Chat' tokenizer = AutoTokenizer.from_pretrained(model_path, use_fast=False) # Since transformers 4.35.0, the GPT-Q/AWQ model can be loaded using AutoModelForCausalLM. model = AutoModelForCausalLM.from_pretrained( model_path, device_map="auto", torch_dtype='auto' ).eval() # Prompt content: "hi" messages = [ {"role": "user", "content": "hi"} ] input_ids = tokenizer.apply_chat_template(conversation=messages, tokenize=True, add_generation_prompt=True, return_tensors='pt') output_ids = model.generate(input_ids.to('cuda')) response = tokenizer.decode(output_ids[0][input_ids.shape[1]:], skip_special_tokens=True) # Model response: "Hello! How can I assist you today?" print(response) ``` To construct the prompt template manually, you can refer the `chat_template` field in the `tokenizer_config.json` [file](https://huggingface.co/01-ai/Yi-34B-Chat/blob/main/tokenizer_config.json#L60). ``` <|im_start|>system {system_message}<|im_end|> <|im_start|>user {prompt}<|im_end|> <|im_start|>assistant ``` #### 3.2 Use the base model ```bash python demo/text_generation.py ``` To reuse the downloaded models in the previous step, you can provide the extra `--model` argument: ```bash python demo/text_generation.py --model /path/to/model ``` Or if you'd like to get your hands dirty: ```python from transformers import AutoModelForCausalLM, AutoTokenizer model = AutoModelForCausalLM.from_pretrained("01-ai/Yi-34B", device_map="auto", torch_dtype="auto") tokenizer = AutoTokenizer.from_pretrained("01-ai/Yi-34B") inputs = tokenizer("There's a place where time stands still. A place of breath taking wonder, but also", return_tensors="pt") max_length = 256 outputs = model.generate( inputs.input_ids.cuda(), max_length=max_length, eos_token_id=tokenizer.eos_token_id, do_sample=True, repetition_penalty=1.3, no_repeat_ngram_size=5, temperature=0.7, top_k=40, top_p=0.8, ) print(tokenizer.decode(outputs[0], skip_special_tokens=True)) ``` <details> <summary>Output</summary> **Prompt**: There's a place where time stands still. A place of breath taking wonder, but also **Generation**: There's a place where time stands still. A place of breath taking wonder, but also of great danger. A place where the very air you breathe could kill you. A place where the only way to survive is to be prepared. The place is called the Arctic. The Arctic is a vast, frozen wilderness. It is a place of extremes. The temperatures can drop to -40 degrees Celsius. The winds can reach speeds of 100 kilometers per hour. The sun can shine for 24 hours a day, or not at all for weeks on end. The Arctic is also a place of great beauty. The ice and snow are a pristine white. The sky is a deep blue. The sunsets are spectacular. But the Arctic is also a place of great danger. The ice can be treacherous. The winds can be deadly. The sun can be blinding. The Arctic is a place where the only way to survive is to be prepared. The Arctic is a place of extremes. The temperatures can drop to -40 degrees Celsius. The winds can reach speeds of 100 kilometers per hour. The sun can shine for 24 hours a day, or not at all for weeks on end. The Arctic is a place of great beauty. The ice and snow are a </details> For more advanced usage, please refer to the [doc](https://github.com/01-ai/Yi/tree/main/demo). 
#### 3.3 Finetuning from the base model: ```bash bash finetune/scripts/run_sft_Yi_6b.sh ``` Once finished, you can compare the finetuned model and the base model with the following command: ```bash bash finetune/scripts/run_eval.sh ``` For more advanced usage like fine-tuning based on your custom data, please refer the [doc](https://github.com/01-ai/Yi/tree/main/finetune). #### 3.4 Quantization ##### GPT-Q ```bash python quantization/gptq/quant_autogptq.py \ --model /base_model \ --output_dir /quantized_model \ --trust_remote_code ``` Once finished, you can then evaluate the resulting model as follows: ```bash python quantization/gptq/eval_quantized_model.py \ --model /quantized_model \ --trust_remote_code ``` For a more detailed explanation, please read the [doc](https://github.com/01-ai/Yi/tree/main/quantization/gptq) ##### AWQ ```bash python quantization/awq/quant_autoawq.py \ --model /base_model \ --output_dir /quantized_model \ --trust_remote_code ``` Once finished, you can then evaluate the resulting model as follows: ```bash python quantization/awq/eval_quantized_model.py \ --model /quantized_model \ --trust_remote_code ``` For more detailed explanation, please read the [doc](https://github.com/01-ai/Yi/tree/main/quantization/awq) ## Ecosystem 🤗 You are encouraged to create a PR and share your awesome work built on top of the Yi series models. - Serving - [ScaleLLM](https://github.com/vectorch-ai/ScaleLLM#supported-models): Efficiently run Yi models locally. - Quantization - [TheBloke/Yi-34B-GGUF](https://huggingface.co/TheBloke/Yi-34B-GGUF) - [TheBloke/Yi-34B-GPTQ](https://huggingface.co/TheBloke/Yi-34B-GPTQ) - Finetuning - [NousResearch/Nous-Capybara-34B](https://huggingface.co/NousResearch/Nous-Capybara-34B) ## FAQ 1. **What dataset was this trained with?** The dataset we use contains Chinese & English only. We used approximately 3T tokens. The detailed number and its construction will be described in the upcoming technical report. ## Disclaimer We use data compliance checking algorithms during the training process, to ensure the compliance of the trained model to the best of our ability. Due to complex data and the diversity of language model usage scenarios, we cannot guarantee that the model will generate correct, and reasonable output in all scenarios. Please be aware that there is still a risk of the model producing problematic outputs. We will not be responsible for any risks and issues resulting from misuse, misguidance, illegal usage, and related misinformation, as well as any associated data security concerns. ## License The source code in this repo is licensed under the [Apache 2.0 license](https://github.com/01-ai/Yi/blob/main/LICENSE). The Yi series models are fully open for academic research and free commercial usage with permission via applications. All usage must adhere to the [Model License Agreement 2.0](https://github.com/01-ai/Yi/blob/main/MODEL_LICENSE_AGREEMENT.txt). To apply for the official commercial license, please contact us ([yi@01.ai](mailto:yi@01.ai)).
dvaleriani/rl_course
dvaleriani
2023-12-08T03:42:50Z
0
0
stable-baselines3
[ "stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2023-12-08T02:12:21Z
--- library_name: stable-baselines3 tags: - LunarLander-v2 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: PPO-lr=1e-4 results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 metrics: - type: mean_reward value: 276.94 +/- 26.88 name: mean_reward verified: false --- # **PPO-lr=1e-4** Agent playing **LunarLander-v2** This is a trained model of a **PPO-lr=1e-4** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code ```python from stable_baselines3 import ... from huggingface_sb3 import load_from_hub ... ```
stockmark/bart-base-japanese-news
stockmark
2023-12-08T03:39:50Z
81
10
transformers
[ "transformers", "pytorch", "tf", "safetensors", "bart", "text2text-generation", "ja", "japanese", "lm", "nlp", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text2text-generation
2023-01-20T04:23:07Z
--- language: ja tags: - ja - japanese - bart - lm - nlp license: mit --- # bart-base-japanese-news(base-sized model) This repository provides a Japanese BART model. The model was trained by [Stockmark Inc.](https://stockmark.co.jp) An introductory article on the model can be found at the following URL. [https://tech.stockmark.co.jp/blog/bart-japanese-base-news/](https://tech.stockmark.co.jp/blog/bart-japanese-base-news/) ## Model description BART is a transformer encoder-decoder (seq2seq) model with a bidirectional (BERT-like) encoder and an autoregressive (GPT-like) decoder. BART is pre-trained by (1) corrupting text with an arbitrary noising function, and (2) learning a model to reconstruct the original text. BART is particularly effective when fine-tuned for text generation (e.g. summarization, translation) but also works well for comprehension tasks (e.g. text classification, question answering). ## Intended uses & limitations You can use the raw model for text infilling. However, the model is mostly meant to be fine-tuned on a supervised dataset. # How to use the model *NOTE:* Since we are using a custom tokenizer, please use `trust_remote_code=True` to initialize the tokenizer. ## Simple use ```python from transformers import AutoTokenizer, BartModel model_name = "stockmark/bart-base-japanese-news" tokenizer = AutoTokenizer.from_pretrained(model_name, trust_remote_code=True) model = BartModel.from_pretrained(model_name) inputs = tokenizer("今日は良い天気です。", return_tensors="pt") outputs = model(**inputs) last_hidden_states = outputs.last_hidden_state ``` ## Sentence Permutation ```python import torch from transformers import AutoTokenizer, BartForConditionalGeneration model_name = "stockmark/bart-base-japanese-news" tokenizer = AutoTokenizer.from_pretrained(model_name, trust_remote_code=True) model = BartForConditionalGeneration.from_pretrained(model_name) if torch.cuda.is_available(): model = model.to("cuda") # correct order text is "明日は大雨です。電車は止まる可能性があります。ですから、自宅から働きます。" text = "電車は止まる可能性があります。ですから、自宅から働きます。明日は大雨です。" inputs = tokenizer([text], max_length=128, return_tensors="pt", truncation=True) text_ids = model.generate(inputs["input_ids"].to(model.device), num_beams=3, max_length=128) output = tokenizer.batch_decode(text_ids, skip_special_tokens=True, clean_up_tokenization_spaces=False)[0] print(output) # sample output: 明日は大雨です。電車は止まる可能性があります。ですから、自宅から働きます。 ``` ## Mask filling ```python import torch from transformers import AutoTokenizer, BartForConditionalGeneration model_name = "stockmark/bart-base-japanese-news" tokenizer = AutoTokenizer.from_pretrained(model_name, trust_remote_code=True) model = BartForConditionalGeneration.from_pretrained(model_name) if torch.cuda.is_available(): model = model.to("cuda") text = "今日の天気は<mask>のため、傘が必要でしょう。" inputs = tokenizer([text], max_length=128, return_tensors="pt", truncation=True) text_ids = model.generate(inputs["input_ids"].to(model.device), num_beams=3, max_length=128) output = tokenizer.batch_decode(text_ids, skip_special_tokens=True, clean_up_tokenization_spaces=False)[0] print(output) # sample output: 今日の天気は、雨のため、傘が必要でしょう。 ``` ## Text generation *NOTE:* You can use the raw model for text generation. However, the model is mostly meant to be fine-tuned on a supervised dataset. 
```python import torch from transformers import AutoTokenizer, BartForConditionalGeneration model_name = "stockmark/bart-base-japanese-news" tokenizer = AutoTokenizer.from_pretrained(model_name, trust_remote_code=True) model = BartForConditionalGeneration.from_pretrained(model_name) if torch.cuda.is_available(): model = model.to("cuda") text = "自然言語処理(しぜんげんごしょり、略称:NLP)は、人間が日常的に使っている自然言語をコンピュータに処理させる一連の技術であり、人工知能と言語学の一分野である。「計算言語学」(computational linguistics)との類似もあるが、自然言語処理は工学的な視点からの言語処理をさすのに対して、計算言語学は言語学的視点を重視する手法をさす事が多い。" inputs = tokenizer([text], max_length=512, return_tensors="pt", truncation=True) text_ids = model.generate(inputs["input_ids"].to(model.device), num_beams=3, min_length=0, max_length=40) output = tokenizer.batch_decode(text_ids, skip_special_tokens=True, clean_up_tokenization_spaces=False)[0] print(output) # sample output: 自然言語処理(しぜんげんごしょり、略称:NLP)は、人間が日常的に使っている自然言語をコンピュータに処理させる一連の技術であり、言語学の一分野である。 ``` # Training The model was trained on Japanese News Articles. # Tokenization The model uses a [sentencepiece](https://github.com/google/sentencepiece)-based tokenizer. The vocabulary was first trained on a selected subset from the training data using the official sentencepiece training script. # Licenses The pretrained models are distributed under the terms of the [MIT License](https://opensource.org/licenses/mit-license.php). *NOTE:* Only tokenization_bart_japanese_news.py is [Apache License, Version 2.0](http://www.apache.org/licenses/LICENSE-2.0). Please see tokenization_bart_japanese_news.py for license details. # Contact If you have any questions, please contact us using [our contact form](https://stockmark.co.jp/contact). # Acknowledgement This comparison study supported with Cloud TPUs from Google’s TensorFlow Research Cloud (TFRC).
brettbbb/mc_cot_16
brettbbb
2023-12-08T03:22:39Z
0
0
null
[ "safetensors", "generated_from_trainer", "base_model:lmsys/vicuna-7b-v1.5", "base_model:finetune:lmsys/vicuna-7b-v1.5", "license:llama2", "region:us" ]
null
2023-12-08T03:07:05Z
--- license: llama2 base_model: lmsys/vicuna-7b-v1.5 tags: - generated_from_trainer model-index: - name: mc_cot_16 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # mc_cot_16 This model is a fine-tuned version of [lmsys/vicuna-7b-v1.5](https://huggingface.co/lmsys/vicuna-7b-v1.5) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 3.0303 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 4 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 5 - num_epochs: 20 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 2.1538 | 1.0 | 4 | 2.1552 | | 1.7905 | 2.0 | 8 | 1.9012 | | 1.4229 | 3.0 | 12 | 1.7686 | | 1.3834 | 4.0 | 16 | 1.7070 | | 1.4421 | 5.0 | 20 | 1.7308 | | 0.9003 | 6.0 | 24 | 1.7646 | | 0.7013 | 7.0 | 28 | 1.9070 | | 0.6291 | 8.0 | 32 | 2.0078 | | 0.3314 | 9.0 | 36 | 2.2682 | | 0.1554 | 10.0 | 40 | 2.3624 | | 0.0814 | 11.0 | 44 | 2.6523 | | 0.0499 | 12.0 | 48 | 2.7565 | | 0.0216 | 13.0 | 52 | 2.8505 | | 0.0197 | 14.0 | 56 | 2.9170 | | 0.0174 | 15.0 | 60 | 2.9433 | | 0.0174 | 16.0 | 64 | 2.9683 | | 0.0145 | 17.0 | 68 | 2.9966 | | 0.013 | 18.0 | 72 | 3.0193 | | 0.0151 | 19.0 | 76 | 3.0277 | | 0.0145 | 20.0 | 80 | 3.0303 | ### Framework versions - Transformers 4.36.0.dev0 - Pytorch 2.1.0+cu121 - Datasets 2.13.1 - Tokenizers 0.14.1
Nurmukhamed/distilbert-base-uncased-finetuned-imdb
Nurmukhamed
2023-12-08T03:22:26Z
5
0
transformers
[ "transformers", "tensorboard", "safetensors", "distilbert", "fill-mask", "generated_from_trainer", "dataset:imdb", "base_model:distilbert/distilbert-base-uncased", "base_model:finetune:distilbert/distilbert-base-uncased", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2023-12-08T03:17:57Z
--- license: apache-2.0 base_model: distilbert-base-uncased tags: - generated_from_trainer datasets: - imdb model-index: - name: distilbert-base-uncased-finetuned-imdb results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-imdb This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset. It achieves the following results on the evaluation set: - Loss: 2.4906 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3.0 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 2.6819 | 1.0 | 157 | 2.4978 | | 2.5872 | 2.0 | 314 | 2.4488 | | 2.525 | 3.0 | 471 | 2.4836 | ### Framework versions - Transformers 4.35.2 - Pytorch 2.1.0+cu118 - Datasets 2.15.0 - Tokenizers 0.15.0
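No usage snippet is included in the card. A minimal fill-mask sketch is shown below; the example sentence is illustrative only, since the model was trained with masked-language modeling on IMDb and therefore tends to favor movie-review vocabulary.

```python
from transformers import pipeline

# Minimal fill-mask sketch for the IMDb-adapted DistilBERT.
mask_filler = pipeline("fill-mask", model="Nurmukhamed/distilbert-base-uncased-finetuned-imdb")

for pred in mask_filler("This is a great [MASK]."):
    print(f"{pred['token_str']}: {pred['score']:.3f}")
```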