modelId
stringlengths
5
139
author
stringlengths
2
42
last_modified
timestamp[us, tz=UTC]date
2020-02-15 11:33:14
2025-09-05 00:41:53
downloads
int64
0
223M
likes
int64
0
11.7k
library_name
stringclasses
539 values
tags
listlengths
1
4.05k
pipeline_tag
stringclasses
55 values
createdAt
timestamp[us, tz=UTC]date
2022-03-02 23:29:04
2025-09-05 00:41:31
card
stringlengths
11
1.01M
swl-models/XiangZhang-hans-v29
swl-models
2023-12-09T22:49:24Z
0
0
null
[ "license:creativeml-openrail-m", "region:us" ]
null
2023-12-09T22:48:21Z
--- license: creativeml-openrail-m ---
Myashka/gpt-imdb-alpha_0.3-beta_0.1
Myashka
2023-12-09T22:45:39Z
9
0
transformers
[ "transformers", "safetensors", "gpt2", "text-generation", "generated_from_trainer", "base_model:lvwerra/gpt2-imdb", "base_model:finetune:lvwerra/gpt2-imdb", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2023-12-06T22:22:11Z
--- base_model: lvwerra/gpt2-imdb tags: - generated_from_trainer model-index: - name: gpt-imdb-alpha_0.3-beta_0.1 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # gpt-imdb-alpha_0.3-beta_0.1 This model is a fine-tuned version of [lvwerra/gpt2-imdb](https://huggingface.co/lvwerra/gpt2-imdb) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 25.4567 - Rewards/chosen: -0.2859 - Rewards/rejected: -1.2893 - Rewards/accuracies: 0.8458 - Rewards/margins: 1.0034 - Logps/rejected: -276.5780 - Logps/chosen: -238.1245 - Logits/rejected: -31.6823 - Logits/chosen: -32.1973 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 24 - eval_batch_size: 24 - seed: 42 - optimizer: Adam with betas=(0.9,0.99) and epsilon=1e-08 - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 150 - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | Rewards/chosen | Rewards/rejected | Rewards/accuracies | Rewards/margins | Logps/rejected | Logps/chosen | Logits/rejected | Logits/chosen | |:-------------:|:-----:|:----:|:---------------:|:--------------:|:----------------:|:------------------:|:---------------:|:--------------:|:------------:|:---------------:|:-------------:| | 0.3872 | 0.21 | 500 | 0.9032 | -0.0063 | -0.4921 | 0.7833 | 0.4858 | -268.6066 | -235.3286 | -32.2910 | -32.9554 | | 0.937 | 0.42 | 1000 | 0.5782 | 0.3739 | -0.2273 | 0.7667 | 0.6012 | -265.9586 | -231.5264 | -33.2571 | -33.9060 | | 1.6799 | 0.63 | 1500 | 3.1537 | 0.2527 | -0.4167 | 0.7729 | 0.6694 | -267.8524 | -232.7385 | -33.1089 | -33.5974 | | 0.8141 | 0.83 | 2000 | 1.8978 | 0.1800 | -0.6646 | 0.7917 | 0.8446 | -270.3312 | -233.4657 | -32.3310 | -32.9275 | | 0.4758 | 1.04 | 2500 | 7.5225 | 0.0635 | -0.8693 | 0.8188 | 0.9329 | -272.3785 | -234.6298 | -32.0571 | -32.5700 | | 0.5184 | 1.25 | 3000 | 2.2710 | 0.3736 | -0.5136 | 0.8021 | 0.8872 | -268.8213 | -231.5289 | -33.9791 | -34.4883 | | 0.3571 | 1.46 | 3500 | 12.0724 | 0.0389 | -0.9119 | 0.8125 | 0.9507 | -272.8040 | -234.8766 | -32.0986 | -32.6149 | | 1.8478 | 1.67 | 4000 | 14.8072 | 0.0021 | -0.9754 | 0.8229 | 0.9775 | -273.4396 | -235.2442 | -32.4363 | -32.9745 | | 0.6874 | 1.88 | 4500 | 5.9952 | 0.0487 | -0.9284 | 0.8167 | 0.9771 | -272.9694 | -234.7781 | -32.9101 | -33.4694 | | 0.2233 | 2.08 | 5000 | 11.0797 | -0.2853 | -1.2611 | 0.8479 | 0.9758 | -276.2962 | -238.1182 | -31.8450 | -32.3602 | | 0.1784 | 2.29 | 5500 | 7.9899 | -0.1567 | -1.1325 | 0.8375 | 0.9757 | -275.0099 | -236.8327 | -32.0292 | -32.5741 | | 0.2919 | 2.5 | 6000 | 29.0523 | -0.3295 | -1.3283 | 0.8500 | 0.9988 | -276.9686 | -238.5604 | -31.4315 | -31.9371 | | 2.011 | 2.71 | 6500 | 28.3221 | -0.2974 | -1.3018 | 0.8458 | 1.0044 | -276.7031 | -238.2393 | -31.6565 | -32.1763 | | 1.7899 | 2.92 | 7000 | 25.4567 | -0.2859 | -1.2893 | 0.8458 | 1.0034 | -276.5780 | -238.1245 | -31.6823 | -32.1973 | ### Framework versions - Transformers 4.35.2 - Pytorch 2.1.1 - Datasets 2.15.0 - Tokenizers 0.15.0
jrajan/pointling1.5
jrajan
2023-12-09T22:40:11Z
0
1
diffusers
[ "diffusers", "safetensors", "stable-diffusion", "stable-diffusion-diffusers", "text-to-image", "dreambooth", "base_model:runwayml/stable-diffusion-v1-5", "base_model:finetune:runwayml/stable-diffusion-v1-5", "license:creativeml-openrail-m", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
2023-12-09T21:54:47Z
--- license: creativeml-openrail-m base_model: runwayml/stable-diffusion-v1-5 instance_prompt: A photo of a pntlng animation tags: - stable-diffusion - stable-diffusion-diffusers - text-to-image - diffusers - dreambooth inference: true --- # DreamBooth - jrajan/pointling1.5 This is a dreambooth model derived from runwayml/stable-diffusion-v1-5. The weights were trained on A photo of a pntlng animation using [DreamBooth](https://dreambooth.github.io/). You can find some example images in the following. ![img_0](./image_0.png) ![img_1](./image_1.png) ![img_2](./image_2.png) ![img_3](./image_3.png) DreamBooth for the text encoder was enabled: False.
TheBloke/go-bruins-AWQ
TheBloke
2023-12-09T22:39:49Z
9
0
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "en", "dataset:Intel/orca_dpo_pairs", "license:mit", "autotrain_compatible", "text-generation-inference", "4-bit", "awq", "region:us" ]
text-generation
2023-12-09T22:20:12Z
--- base_model: rwitz/go-bruins datasets: - Intel/orca_dpo_pairs inference: false language: - en license: mit model_creator: Ryan Witzman model_name: Go Bruins model_type: mistral pipeline_tag: text-generation prompt_template: '{prompt} ' quantized_by: TheBloke --- <!-- markdownlint-disable MD041 --> <!-- header start --> <!-- 200823 --> <div style="width: auto; margin-left: auto; margin-right: auto"> <img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;"> </div> <div style="display: flex; justify-content: space-between; width: 100%;"> <div style="display: flex; flex-direction: column; align-items: flex-start;"> <p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p> </div> <div style="display: flex; flex-direction: column; align-items: flex-end;"> <p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p> </div> </div> <div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div> <hr style="margin-top: 1.0em; margin-bottom: 1.0em;"> <!-- header end --> # Go Bruins - AWQ - Model creator: [Ryan Witzman](https://huggingface.co/rwitz) - Original model: [Go Bruins](https://huggingface.co/rwitz/go-bruins) <!-- description start --> ## Description This repo contains AWQ model files for [Ryan Witzman's Go Bruins](https://huggingface.co/rwitz/go-bruins). These files were quantised using hardware kindly provided by [Massed Compute](https://massedcompute.com/). ### About AWQ AWQ is an efficient, accurate and blazing-fast low-bit weight quantization method, currently supporting 4-bit quantization. Compared to GPTQ, it offers faster Transformers-based inference with equivalent or better quality compared to the most commonly used GPTQ settings. AWQ models are currently supported on Linux and Windows, with NVidia GPUs only. macOS users: please use GGUF models instead. It is supported by: - [Text Generation Webui](https://github.com/oobabooga/text-generation-webui) - using Loader: AutoAWQ - [vLLM](https://github.com/vllm-project/vllm) - version 0.2.2 or later for support for all model types. - [Hugging Face Text Generation Inference (TGI)](https://github.com/huggingface/text-generation-inference) - [Transformers](https://huggingface.co/docs/transformers) version 4.35.0 and later, from any code or client that supports Transformers - [AutoAWQ](https://github.com/casper-hansen/AutoAWQ) - for use from Python code <!-- description end --> <!-- repositories-available start --> ## Repositories available * [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/go-bruins-AWQ) * [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/go-bruins-GPTQ) * [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/go-bruins-GGUF) * [Ryan Witzman's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/rwitz/go-bruins) <!-- repositories-available end --> <!-- prompt-template start --> ## Prompt template: None ``` {prompt} ``` <!-- prompt-template end --> <!-- README_AWQ.md-provided-files start --> ## Provided files, and AWQ parameters I currently release 128g GEMM models only. The addition of group_size 32 models, and GEMV kernel models, is being actively considered. Models are released as sharded safetensors files. | Branch | Bits | GS | AWQ Dataset | Seq Len | Size | | ------ | ---- | -- | ----------- | ------- | ---- | | [main](https://huggingface.co/TheBloke/go-bruins-AWQ/tree/main) | 4 | 128 | [VMware Open Instruct](https://huggingface.co/datasets/VMware/open-instruct/viewer/) | 4096 | 4.15 GB <!-- README_AWQ.md-provided-files end --> <!-- README_AWQ.md-text-generation-webui start --> ## How to easily download and use this model in [text-generation-webui](https://github.com/oobabooga/text-generation-webui) Please make sure you're using the latest version of [text-generation-webui](https://github.com/oobabooga/text-generation-webui). It is strongly recommended to use the text-generation-webui one-click-installers unless you're sure you know how to make a manual install. 1. Click the **Model tab**. 2. Under **Download custom model or LoRA**, enter `TheBloke/go-bruins-AWQ`. 3. Click **Download**. 4. The model will start downloading. Once it's finished it will say "Done". 5. In the top left, click the refresh icon next to **Model**. 6. In the **Model** dropdown, choose the model you just downloaded: `go-bruins-AWQ` 7. Select **Loader: AutoAWQ**. 8. Click Load, and the model will load and is now ready for use. 9. If you want any custom settings, set them and then click **Save settings for this model** followed by **Reload the Model** in the top right. 10. Once you're ready, click the **Text Generation** tab and enter a prompt to get started! <!-- README_AWQ.md-text-generation-webui end --> <!-- README_AWQ.md-use-from-vllm start --> ## Multi-user inference server: vLLM Documentation on installing and using vLLM [can be found here](https://vllm.readthedocs.io/en/latest/). - Please ensure you are using vLLM version 0.2 or later. - When using vLLM as a server, pass the `--quantization awq` parameter. For example: ```shell python3 -m vllm.entrypoints.api_server --model TheBloke/go-bruins-AWQ --quantization awq --dtype auto ``` - When using vLLM from Python code, again set `quantization=awq`. For example: ```python from vllm import LLM, SamplingParams prompts = [ "Tell me about AI", "Write a story about llamas", "What is 291 - 150?", "How much wood would a woodchuck chuck if a woodchuck could chuck wood?", ] prompt_template=f'''{prompt} ''' prompts = [prompt_template.format(prompt=prompt) for prompt in prompts] sampling_params = SamplingParams(temperature=0.8, top_p=0.95) llm = LLM(model="TheBloke/go-bruins-AWQ", quantization="awq", dtype="auto") outputs = llm.generate(prompts, sampling_params) # Print the outputs. for output in outputs: prompt = output.prompt generated_text = output.outputs[0].text print(f"Prompt: {prompt!r}, Generated text: {generated_text!r}") ``` <!-- README_AWQ.md-use-from-vllm start --> <!-- README_AWQ.md-use-from-tgi start --> ## Multi-user inference server: Hugging Face Text Generation Inference (TGI) Use TGI version 1.1.0 or later. The official Docker container is: `ghcr.io/huggingface/text-generation-inference:1.1.0` Example Docker parameters: ```shell --model-id TheBloke/go-bruins-AWQ --port 3000 --quantize awq --max-input-length 3696 --max-total-tokens 4096 --max-batch-prefill-tokens 4096 ``` Example Python code for interfacing with TGI (requires [huggingface-hub](https://github.com/huggingface/huggingface_hub) 0.17.0 or later): ```shell pip3 install huggingface-hub ``` ```python from huggingface_hub import InferenceClient endpoint_url = "https://your-endpoint-url-here" prompt = "Tell me about AI" prompt_template=f'''{prompt} ''' client = InferenceClient(endpoint_url) response = client.text_generation(prompt, max_new_tokens=128, do_sample=True, temperature=0.7, top_p=0.95, top_k=40, repetition_penalty=1.1) print(f"Model output: ", response) ``` <!-- README_AWQ.md-use-from-tgi end --> <!-- README_AWQ.md-use-from-python start --> ## Inference from Python code using Transformers ### Install the necessary packages - Requires: [Transformers](https://huggingface.co/docs/transformers) 4.35.0 or later. - Requires: [AutoAWQ](https://github.com/casper-hansen/AutoAWQ) 0.1.6 or later. ```shell pip3 install --upgrade "autoawq>=0.1.6" "transformers>=4.35.0" ``` Note that if you are using PyTorch 2.0.1, the above AutoAWQ command will automatically upgrade you to PyTorch 2.1.0. If you are using CUDA 11.8 and wish to continue using PyTorch 2.0.1, instead run this command: ```shell pip3 install https://github.com/casper-hansen/AutoAWQ/releases/download/v0.1.6/autoawq-0.1.6+cu118-cp310-cp310-linux_x86_64.whl ``` If you have problems installing [AutoAWQ](https://github.com/casper-hansen/AutoAWQ) using the pre-built wheels, install it from source instead: ```shell pip3 uninstall -y autoawq git clone https://github.com/casper-hansen/AutoAWQ cd AutoAWQ pip3 install . ``` ### Transformers example code (requires Transformers 4.35.0 and later) ```python from transformers import AutoModelForCausalLM, AutoTokenizer, TextStreamer model_name_or_path = "TheBloke/go-bruins-AWQ" tokenizer = AutoTokenizer.from_pretrained(model_name_or_path) model = AutoModelForCausalLM.from_pretrained( model_name_or_path, low_cpu_mem_usage=True, device_map="cuda:0" ) # Using the text streamer to stream output one token at a time streamer = TextStreamer(tokenizer, skip_prompt=True, skip_special_tokens=True) prompt = "Tell me about AI" prompt_template=f'''{prompt} ''' # Convert prompt to tokens tokens = tokenizer( prompt_template, return_tensors='pt' ).input_ids.cuda() generation_params = { "do_sample": True, "temperature": 0.7, "top_p": 0.95, "top_k": 40, "max_new_tokens": 512, "repetition_penalty": 1.1 } # Generate streamed output, visible one token at a time generation_output = model.generate( tokens, streamer=streamer, **generation_params ) # Generation without a streamer, which will include the prompt in the output generation_output = model.generate( tokens, **generation_params ) # Get the tokens from the output, decode them, print them token_output = generation_output[0] text_output = tokenizer.decode(token_output) print("model.generate output: ", text_output) # Inference is also possible via Transformers' pipeline from transformers import pipeline pipe = pipeline( "text-generation", model=model, tokenizer=tokenizer, **generation_params ) pipe_output = pipe(prompt_template)[0]['generated_text'] print("pipeline output: ", pipe_output) ``` <!-- README_AWQ.md-use-from-python end --> <!-- README_AWQ.md-compatibility start --> ## Compatibility The files provided are tested to work with: - [text-generation-webui](https://github.com/oobabooga/text-generation-webui) using `Loader: AutoAWQ`. - [vLLM](https://github.com/vllm-project/vllm) version 0.2.0 and later. - [Hugging Face Text Generation Inference (TGI)](https://github.com/huggingface/text-generation-inference) version 1.1.0 and later. - [Transformers](https://huggingface.co/docs/transformers) version 4.35.0 and later. - [AutoAWQ](https://github.com/casper-hansen/AutoAWQ) version 0.1.1 and later. <!-- README_AWQ.md-compatibility end --> <!-- footer start --> <!-- 200823 --> ## Discord For further support, and discussions on these models and AI in general, join us at: [TheBloke AI's Discord server](https://discord.gg/theblokeai) ## Thanks, and how to contribute Thanks to the [chirper.ai](https://chirper.ai) team! Thanks to Clay from [gpus.llm-utils.org](llm-utils)! I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training. If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects. Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits. * Patreon: https://patreon.com/TheBlokeAI * Ko-Fi: https://ko-fi.com/TheBlokeAI **Special thanks to**: Aemon Algiz. **Patreon special mentions**: Michael Levine, 阿明, Trailburnt, Nikolai Manek, John Detwiler, Randy H, Will Dee, Sebastain Graf, NimbleBox.ai, Eugene Pentland, Emad Mostaque, Ai Maven, Jim Angel, Jeff Scroggin, Michael Davis, Manuel Alberto Morcote, Stephen Murray, Robert, Justin Joy, Luke @flexchar, Brandon Frisco, Elijah Stavena, S_X, Dan Guido, Undi ., Komninos Chatzipapas, Shadi, theTransient, Lone Striker, Raven Klaugh, jjj, Cap'n Zoog, Michel-Marie MAUDET (LINAGORA), Matthew Berman, David, Fen Risland, Omer Bin Jawed, Luke Pendergrass, Kalila, OG, Erik Bjäreholt, Rooh Singh, Joseph William Delisle, Dan Lewis, TL, John Villwock, AzureBlack, Brad, Pedro Madruga, Caitlyn Gatomon, K, jinyuan sun, Mano Prime, Alex, Jeffrey Morgan, Alicia Loh, Illia Dulskyi, Chadd, transmissions 11, fincy, Rainer Wilmers, ReadyPlayerEmma, knownsqashed, Mandus, biorpg, Deo Leter, Brandon Phillips, SuperWojo, Sean Connelly, Iucharbius, Jack West, Harry Royden McLaughlin, Nicholas, terasurfer, Vitor Caleffi, Duane Dunston, Johann-Peter Hartmann, David Ziegler, Olakabola, Ken Nordquist, Trenton Dambrowitz, Tom X Nguyen, Vadim, Ajan Kanaga, Leonard Tan, Clay Pascal, Alexandros Triantafyllidis, JM33133, Xule, vamX, ya boyyy, subjectnull, Talal Aujan, Alps Aficionado, wassieverse, Ari Malik, James Bentley, Woland, Spencer Kim, Michael Dempsey, Fred von Graf, Elle, zynix, William Richards, Stanislav Ovsiannikov, Edmond Seymore, Jonathan Leane, Martin Kemka, usrbinkat, Enrico Ros Thank you to all my generous patrons and donaters! And thank you again to a16z for their generous grant. <!-- footer end --> # Original model card: Ryan Witzman's Go Bruins ![image/png](https://cdn-uploads.huggingface.co/production/uploads/63a259d0f30c46422789d38d/vO3iATjO8ulfcakTltE4k.png) # Go Bruins - A Fine-tuned Language Model ## Updates December 9, 2023: Go-Bruins has placed **#6** overall and **#1** for 7 billion parameter models on the [Hugging Face Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)! ## Overview **Go Bruins** is a state-of-the-art language model fine-tuned on the Q-bert/MetaMath-Cybertron-Starling architecture. It's designed to push the boundaries of NLP applications, offering unparalleled performance in generating human-like text. ## Model Details - **Developer:** Ryan Witzman - **Base Model:** [Q-bert/MetaMath-Cybertron-Starling](https://huggingface.co/Q-bert/MetaMath-Cybertron-Starling) - **Fine-tuning Method:** Direct Preference Optimization (DPO) - **Training Steps:** 200 - **Language:** English - **License:** MIT ## Capabilities Go Bruins excels in a variety of NLP tasks, including but not limited to: - Text generation - Language understanding - Sentiment analysis ## Usage **Warning:** This model may output NSFW or illegal content. Use with caution and at your own risk. ### For Direct Use: ```python from transformers import pipeline model_name = "rwitz/go-bruins" inference_pipeline = pipeline('text-generation', model=model_name) input_text = "Your input text goes here" output = inference_pipeline(input_text) print(output) ``` GGUF Quantized Files are Located at [NyxKrage/go-bruins-GGUF](https://huggingface.co/NyxKrage/go-bruins-GGUF) ### Not Recommended For: - Illegal activities - Harassment - Professional advice or crisis situations ## Training and Evaluation Trained on a dataset from [Intel/orca_dpo_pairs](https://huggingface.co/datasets/Intel/orca_dpo_pairs), Go Bruins has shown promising improvements over its predecessor, Q-Bert. # Evaluations Go-Bruins is the SOTA 7B model. | Metric | Average | Arc Challenge | Hella Swag | MMLU | Truthful Q&A | Winogrande | GSM8k | |---------------|---------|---------------|------------|------|--------------|------------|-------| | **Score** | 71.86 | 69.11 | 86.53| 65.02 | 59.24 | 81.37 | 69.90 | Note: The original MMLU evaluation has been corrected to include 5-shot data rather than 1-shot data. ## Contact For any inquiries or feedback, reach out to Ryan Witzman on Discord: `rwitz_`. --- *This model card was created with care by Ryan Witzman.*
chriss1245/atari_space_invaders
chriss1245
2023-12-09T22:26:48Z
5
0
stable-baselines3
[ "stable-baselines3", "SpaceInvadersNoFrameskip-v4", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2023-12-09T22:26:17Z
--- library_name: stable-baselines3 tags: - SpaceInvadersNoFrameskip-v4 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: DQN results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: SpaceInvadersNoFrameskip-v4 type: SpaceInvadersNoFrameskip-v4 metrics: - type: mean_reward value: 255.50 +/- 42.80 name: mean_reward verified: false --- # **DQN** Agent playing **SpaceInvadersNoFrameskip-v4** This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3) and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo). The RL Zoo is a training framework for Stable Baselines3 reinforcement learning agents, with hyperparameter optimization and pre-trained agents included. ## Usage (with SB3 RL Zoo) RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/> SB3: https://github.com/DLR-RM/stable-baselines3<br/> SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib Install the RL Zoo (with SB3 and SB3-Contrib): ```bash pip install rl_zoo3 ``` ``` # Download model and save it into the logs/ folder python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga chriss1245 -f logs/ python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ ``` If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do: ``` python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga chriss1245 -f logs/ python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ ``` ## Training (with the RL Zoo) ``` python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ # Upload the model and generate video (when possible) python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga chriss1245 ``` ## Hyperparameters ```python OrderedDict([('batch_size', 32), ('buffer_size', 100000), ('env_wrapper', ['stable_baselines3.common.atari_wrappers.AtariWrapper']), ('exploration_final_eps', 0.01), ('exploration_fraction', 0.1), ('frame_stack', 4), ('gradient_steps', 1), ('learning_rate', 0.0001), ('learning_starts', 10000), ('n_timesteps', 100000), ('optimize_memory_usage', False), ('policy', 'CnnPolicy'), ('target_update_interval', 1000), ('train_freq', 4), ('normalize', False)]) ``` # Environment Arguments ```python {'render_mode': 'rgb_array'} ```
carlpersson/whisper-small-de
carlpersson
2023-12-09T22:25:33Z
7
0
transformers
[ "transformers", "tensorboard", "safetensors", "whisper", "automatic-speech-recognition", "hf-asr-leaderboard", "generated_from_trainer", "hi", "dataset:mozilla-foundation/common_voice_11_0", "base_model:openai/whisper-small", "base_model:finetune:openai/whisper-small", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2023-12-09T22:24:18Z
--- language: - hi license: apache-2.0 base_model: openai/whisper-small tags: - hf-asr-leaderboard - generated_from_trainer datasets: - mozilla-foundation/common_voice_11_0 model-index: - name: Whisper Small De results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Whisper Small De This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the Common Voice 11.0 dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1.25e-05 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - training_steps: 4000 - mixed_precision_training: Native AMP ### Framework versions - Transformers 4.35.2 - Pytorch 2.1.1+cu121 - Datasets 2.15.0 - Tokenizers 0.15.0
johnowhitaker/lora_pn05_036sim
johnowhitaker
2023-12-09T22:19:27Z
4
1
diffusers
[ "diffusers", "stable-diffusion-xl", "stable-diffusion-xl-diffusers", "text-to-image", "lora", "base_model:playgroundai/playground-v2-1024px-aesthetic", "base_model:adapter:playgroundai/playground-v2-1024px-aesthetic", "license:creativeml-openrail-m", "region:us" ]
text-to-image
2023-12-09T20:47:08Z
--- license: creativeml-openrail-m base_model: playgroundai/playground-v2-1024px-aesthetic dataset: None tags: - stable-diffusion-xl - stable-diffusion-xl-diffusers - text-to-image - diffusers - lora inference: true --- # LoRA text2image fine-tuning - johnowhitaker/lora_pn05_036sim These are LoRA adaption weights for playgroundai/playground-v2-1024px-aesthetic. The weights were fine-tuned on a custom dataset. You can find some example images in the following. ![img_0](./image_0.png) ![img_1](./image_1.png) ![img_2](./image_2.png) ![img_3](./image_3.png) LoRA for the text encoder was enabled: False. Special VAE used for training: madebyollin/sdxl-vae-fp16-fix.
gurpreetmukker/a2c-PandaReachDense-v3
gurpreetmukker
2023-12-09T21:55:39Z
0
0
stable-baselines3
[ "stable-baselines3", "PandaReachDense-v3", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2023-12-09T21:51:12Z
--- library_name: stable-baselines3 tags: - PandaReachDense-v3 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: A2C results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: PandaReachDense-v3 type: PandaReachDense-v3 metrics: - type: mean_reward value: -0.19 +/- 0.12 name: mean_reward verified: false --- # **A2C** Agent playing **PandaReachDense-v3** This is a trained model of a **A2C** agent playing **PandaReachDense-v3** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code ```python from stable_baselines3 import ... from huggingface_sb3 import load_from_hub ... ```
domenicrosati/deberta-v3-large-survey-new_fact_main_passage-rater-half-gpt4
domenicrosati
2023-12-09T21:53:42Z
5
0
transformers
[ "transformers", "pytorch", "deberta-v2", "text-classification", "generated_from_trainer", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2023-12-09T02:16:16Z
--- license: mit tags: - generated_from_trainer model-index: - name: deberta-v3-large-survey-new_fact_main_passage-rater-half-gpt4 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # deberta-v3-large-survey-new_fact_main_passage-rater-half-gpt4 This model is a fine-tuned version of [microsoft/deberta-v3-large](https://huggingface.co/microsoft/deberta-v3-large) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.6360 - Krippendorff: 0.8330 - Spearman: 0.8242 - Absolute Agreement: 0.8498 - Agreement Within One: 0.9435 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 6e-06 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 1000 - num_epochs: 20 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Krippendorff | Spearman | Absolute Agreement | Agreement Within One | |:-------------:|:-----:|:----:|:---------------:|:------------:|:--------:|:------------------:|:--------------------:| | No log | 1.0 | 52 | 1.9056 | -0.1975 | 0.2732 | 0.1389 | 0.5556 | | No log | 2.0 | 104 | 1.8986 | -0.3052 | 0.0684 | 0.1389 | 0.5556 | | No log | 3.0 | 156 | 1.8869 | -0.3835 | nan | 0.1389 | 0.5278 | | No log | 4.0 | 208 | 2.0502 | -0.4830 | nan | 0.25 | 1.0 | | No log | 5.0 | 260 | 2.4002 | -0.4830 | nan | 0.25 | 1.0 | | No log | 6.0 | 312 | 2.4895 | -0.0731 | 0.0193 | 0.2222 | 0.8889 | | No log | 7.0 | 364 | 2.2943 | 0.7464 | 0.7453 | 0.3889 | 0.8889 | | No log | 8.0 | 416 | 2.2416 | 0.8118 | 0.7882 | 0.3889 | 0.8611 | | No log | 9.0 | 468 | 2.3725 | 0.8118 | 0.7882 | 0.3889 | 0.8611 | | 1.2529 | 10.0 | 520 | 2.4889 | 0.6015 | 0.6308 | 0.3333 | 0.7778 | | 1.2529 | 11.0 | 572 | 2.7181 | 0.6015 | 0.6308 | 0.3333 | 0.7778 | | 1.2529 | 12.0 | 624 | 2.5870 | 0.6829 | 0.6950 | 0.3611 | 0.8056 | | 1.2529 | 13.0 | 676 | 2.7559 | 0.6400 | 0.6437 | 0.3611 | 0.8056 | | 1.2529 | 14.0 | 728 | 2.8805 | 0.6829 | 0.6950 | 0.3611 | 0.8056 | | 1.2529 | 15.0 | 780 | 2.7286 | 0.6814 | 0.6549 | 0.3889 | 0.8333 | | 1.2529 | 16.0 | 832 | 3.0974 | 0.7322 | 0.7024 | 0.3889 | 0.8333 | | 1.2529 | 17.0 | 884 | 3.2368 | 0.6794 | 0.6160 | 0.2778 | 0.8056 | | 1.2529 | 18.0 | 936 | 3.1932 | 0.6414 | 0.5821 | 0.25 | 0.7778 | | 1.2529 | 19.0 | 988 | 3.3644 | 0.7016 | 0.6197 | 0.3056 | 0.8333 | | 0.4102 | 20.0 | 1040 | 3.6545 | 0.6146 | 0.5778 | 0.25 | 0.75 | ### Framework versions - Transformers 4.26.0 - Pytorch 1.13.1 - Datasets 2.10.1 - Tokenizers 0.12.1
kumar9/word-auto-filled
kumar9
2023-12-09T21:23:05Z
4
0
transformers
[ "transformers", "pytorch", "bert", "text-classification", "code", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2023-12-07T01:59:08Z
--- license: apache-2.0 tags: - code ---
annabellehuether/unsummarized-partisan-legal-bert-base-uncased-supreme-court-32batch_3epoch_2e5lr_01wd
annabellehuether
2023-12-09T21:18:17Z
6
0
transformers
[ "transformers", "tensorboard", "safetensors", "bert", "text-classification", "generated_from_trainer", "base_model:nlpaueb/legal-bert-base-uncased", "base_model:finetune:nlpaueb/legal-bert-base-uncased", "license:cc-by-sa-4.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2023-12-09T20:39:42Z
--- license: cc-by-sa-4.0 base_model: nlpaueb/legal-bert-base-uncased tags: - generated_from_trainer metrics: - accuracy model-index: - name: unsummarized-partisan-legal-bert-base-uncased-supreme-court-32batch_3epoch_2e5lr_01wd results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # unsummarized-partisan-legal-bert-base-uncased-supreme-court-32batch_3epoch_2e5lr_01wd This model is a fine-tuned version of [nlpaueb/legal-bert-base-uncased](https://huggingface.co/nlpaueb/legal-bert-base-uncased) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.5720 - Accuracy: 0.6867 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 7 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.5666 | 1.0 | 660 | 0.5456 | 0.6644 | | 0.514 | 2.0 | 1320 | 0.5460 | 0.6852 | | 0.4584 | 3.0 | 1980 | 0.5720 | 0.6867 | ### Framework versions - Transformers 4.35.1 - Pytorch 2.1.0+cu121 - Datasets 2.14.6 - Tokenizers 0.14.1
malteos/gpt2-uk
malteos
2023-12-09T21:08:48Z
196
2
transformers
[ "transformers", "pytorch", "safetensors", "gpt2", "text-generation", "uk", "dataset:oscar", "license:mit", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2023-04-02T13:15:27Z
--- license: mit datasets: - oscar language: - uk library_name: transformers pipeline_tag: text-generation --- # GPT2 Ukrainian A generative language model for the Ukrainian language follows the [GPT-2 architecture](https://huggingface.co/gpt2) (124M parameters). - hidden size: 768 - number of heads: 12 - number of layers: 12 - seq length: 1024 - tokens: 11238113280 (3 epochs) - steps: 57167 ## Training data - OSCAR - Wikimedia dumps ## License MIT
oracool/myspace
oracool
2023-12-09T21:07:03Z
5
0
transformers
[ "transformers", "tensorboard", "safetensors", "t5", "text2text-generation", "generated_from_trainer", "base_model:d0rj/rut5-base-summ", "base_model:finetune:d0rj/rut5-base-summ", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text2text-generation
2023-12-09T00:54:19Z
--- base_model: d0rj/rut5-base-summ tags: - generated_from_trainer metrics: - rouge model-index: - name: myspace results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # myspace This model is a fine-tuned version of [d0rj/rut5-base-summ](https://huggingface.co/d0rj/rut5-base-summ) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 1.9404 - Rouge1: 0.29 - Rouge2: 0.1344 - Rougel: 0.2793 - Rougelsum: 0.2798 - Gen Len: 80.3965 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 1 - eval_batch_size: 1 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 1 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len | |:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:---------:|:-------:| | 2.1069 | 1.0 | 3444 | 1.9404 | 0.29 | 0.1344 | 0.2793 | 0.2798 | 80.3965 | ### Framework versions - Transformers 4.35.2 - Pytorch 2.1.0+cu118 - Datasets 2.15.0 - Tokenizers 0.15.0
daniel-gordon/bioBERT-finetuned-biopharma-dive
daniel-gordon
2023-12-09T21:02:00Z
6
0
transformers
[ "transformers", "tensorboard", "safetensors", "bert", "text-classification", "generated_from_trainer", "base_model:daniel-gordon/bioBERT-finetuned-financial-phrasebank", "base_model:finetune:daniel-gordon/bioBERT-finetuned-financial-phrasebank", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2023-12-09T20:58:27Z
--- base_model: daniel-gordon/bioBERT-finetuned-financial-phrasebank tags: - generated_from_trainer metrics: - f1 model-index: - name: bioBERT-finetuned-biopharma-dive results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bioBERT-finetuned-biopharma-dive This model is a fine-tuned version of [daniel-gordon/bioBERT-finetuned-financial-phrasebank](https://huggingface.co/daniel-gordon/bioBERT-finetuned-financial-phrasebank) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.9018 - F1: 0.6009 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 | |:-------------:|:-----:|:----:|:---------------:|:------:| | No log | 1.0 | 110 | 0.9018 | 0.6009 | | No log | 2.0 | 220 | 0.9249 | 0.5851 | | No log | 3.0 | 330 | 1.0173 | 0.5835 | ### Framework versions - Transformers 4.35.2 - Pytorch 2.1.0+cu118 - Datasets 2.15.0 - Tokenizers 0.15.0
bodam/model_new2
bodam
2023-12-09T20:43:36Z
0
1
diffusers
[ "diffusers", "safetensors", "stable-diffusion", "stable-diffusion-diffusers", "text-to-image", "lora", "base_model:runwayml/stable-diffusion-v1-5", "base_model:adapter:runwayml/stable-diffusion-v1-5", "license:creativeml-openrail-m", "region:us" ]
text-to-image
2023-12-09T20:35:49Z
--- license: creativeml-openrail-m base_model: runwayml/stable-diffusion-v1-5 instance_prompt: a olis chair tags: - stable-diffusion - stable-diffusion-diffusers - text-to-image - diffusers - lora inference: true --- # LoRA DreamBooth - bodam/model_new2 These are LoRA adaption weights for runwayml/stable-diffusion-v1-5. The weights were trained on a olis chair using [DreamBooth](https://dreambooth.github.io/). You can find some example images in the following. ![img_0](./image_0.png) ![img_1](./image_1.png) ![img_2](./image_2.png) ![img_3](./image_3.png)
Schmadge/mamba-slim-orca
Schmadge
2023-12-09T20:41:24Z
7
5
transformers
[ "transformers", "pytorch", "tensorboard", "dataset:Open-Orca/SlimOrca", "arxiv:2312.00752", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2023-12-08T07:16:05Z
--- license: apache-2.0 datasets: - Open-Orca/SlimOrca --- # Instruction-Tuned Mamba 2.8B on SlimOrca Dataset ## Overview This repository features the [2.8 billion parameter Mamba model](https://huggingface.co/state-spaces/mamba-2.8b), fine-tuned on a subset (20k) of the [SlimOrca dataset](https://huggingface.co/datasets/Open-Orca/SlimOrca). Big thanks to Justin Mattern from Haven for contributing essential code in the [mamba-chat repository](https://github.com/havenhq/mamba-chat) ## Usage Instructions To utilize the fine-tuned model, follow the Python code snippet below: ```python import torch from transformers import AutoTokenizer from mamba_ssm.models.mixer_seq_simple import MambaLMHeadModel device = "cuda" tokenizer = AutoTokenizer.from_pretrained("Schmadge/mamba-slim-orca") tokenizer.eos_token = tokenizer.pad_token = "<|endoftext|>" tokenizer.chat_template = AutoTokenizer.from_pretrained("HuggingFaceH4/zephyr-7b-beta").chat_template model = MambaLMHeadModel.from_pretrained("Schmadge/mamba-slim-orca", device=device, dtype=torch.float16) def generate_response(system_prompt, user_prompt): # Preparing the prompt prompt = [ {"role": "system", "content": system_prompt}, {"role": "user", "content": user_prompt} ] input_ids = tokenizer.apply_chat_template(prompt, return_tensors="pt", add_generation_prompt=True).to(device) # Generating the response out = model.generate(input_ids=input_ids, max_length=2000, temperature=0.3, top_p=0.7, eos_token_id=tokenizer.eos_token_id) decoded = tokenizer.batch_decode(out) return decoded[0].split("<|assistant|>\n")[-1].replace('<|endoftext|>','') system_prompt = "You are an AI assistant. Provide a detailed answer so user don't need to search outside to understand the answer." user_prompt = "In a room I have only 3 sisters. Anna is reading a book. Alice is playing a match of chess.What the third sister, Amanda is doing ?" response = generate_response(system_prompt, user_prompt) print(response) #Based on the information provided, we can infer that Amanda is playing a match of chess with Alice. Since Anna is reading a book, it is reasonable to assume that Amanda is playing a game of chess with Alice, as this is a common activity for siblings to engage in together. ``` ## Refrences: Mamba Chat: ```bibtex @misc{haven2023mambachat, title = {Mamba-Chat}, author = {Justus Mattern and Konstantin Hohr}, year = {2023}, howpublished = {GitHub}, url = {https://github.com/havenhq/mamba-chat} } ``` Mamba: ```bibtex @article{mamba, title={Mamba: Linear-Time Sequence Modeling with Selective State Spaces}, author={Gu, Albert and Dao, Tri}, journal={arXiv preprint arXiv:2312.00752}, year={2023} } ``` SlimOrca: ```bibtex @misc{SlimOrca, title = {SlimOrca: An Open Dataset of GPT-4 Augmented FLAN Reasoning Traces, with Verification}, author = {Wing Lian and others}, year = {2023}, publisher = {HuggingFace}, url = {https://huggingface.co/Open-Orca/SlimOrca} } ```
ThuyNT03/KLTN_COQE_viT5_SAPOL_v6
ThuyNT03
2023-12-09T20:20:20Z
5
0
transformers
[ "transformers", "tensorboard", "safetensors", "t5", "text2text-generation", "generated_from_trainer", "base_model:VietAI/vit5-large", "base_model:finetune:VietAI/vit5-large", "license:mit", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text2text-generation
2023-12-09T18:09:55Z
--- license: mit base_model: VietAI/vit5-large tags: - generated_from_trainer model-index: - name: KLTN_COQE_viT5_SAPOL_v6 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # KLTN_COQE_viT5_SAPOL_v6 This model is a fine-tuned version of [VietAI/vit5-large](https://huggingface.co/VietAI/vit5-large) on the None dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 12 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 20 - mixed_precision_training: Native AMP ### Training results ### Framework versions - Transformers 4.35.0 - Pytorch 2.0.0 - Datasets 2.1.0 - Tokenizers 0.14.1
ThuyNT03/KLTN_COQE_viT5_POASL_v6
ThuyNT03
2023-12-09T20:20:06Z
6
0
transformers
[ "transformers", "tensorboard", "safetensors", "t5", "text2text-generation", "generated_from_trainer", "base_model:VietAI/vit5-large", "base_model:finetune:VietAI/vit5-large", "license:mit", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text2text-generation
2023-12-09T18:22:49Z
--- license: mit base_model: VietAI/vit5-large tags: - generated_from_trainer model-index: - name: KLTN_COQE_viT5_POASL_v6 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # KLTN_COQE_viT5_POASL_v6 This model is a fine-tuned version of [VietAI/vit5-large](https://huggingface.co/VietAI/vit5-large) on the None dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 12 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 20 - mixed_precision_training: Native AMP ### Training results ### Framework versions - Transformers 4.35.2 - Pytorch 2.0.0 - Datasets 2.1.0 - Tokenizers 0.15.0
ThuyNT03/KLTN_COQE_viT5_SOAPL_v6
ThuyNT03
2023-12-09T20:19:10Z
5
0
transformers
[ "transformers", "tensorboard", "safetensors", "t5", "text2text-generation", "generated_from_trainer", "base_model:VietAI/vit5-large", "base_model:finetune:VietAI/vit5-large", "license:mit", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text2text-generation
2023-12-09T18:13:58Z
--- license: mit base_model: VietAI/vit5-large tags: - generated_from_trainer model-index: - name: KLTN_COQE_viT5_SOAPL_v6 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # KLTN_COQE_viT5_SOAPL_v6 This model is a fine-tuned version of [VietAI/vit5-large](https://huggingface.co/VietAI/vit5-large) on the None dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 12 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 20 - mixed_precision_training: Native AMP ### Training results ### Framework versions - Transformers 4.35.0 - Pytorch 2.0.0 - Datasets 2.1.0 - Tokenizers 0.14.1
joseluhf11/sign_language_classification_v1
joseluhf11
2023-12-09T20:15:35Z
37
2
transformers
[ "transformers", "tensorboard", "safetensors", "vit", "image-classification", "generated_from_trainer", "base_model:google/vit-base-patch16-224-in21k", "base_model:finetune:google/vit-base-patch16-224-in21k", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-classification
2023-12-09T19:12:42Z
--- license: apache-2.0 base_model: google/vit-base-patch16-224-in21k tags: - generated_from_trainer metrics: - accuracy model-index: - name: sign_language_classification_v1 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # sign_language_classification_v1 This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 1.3445 - Accuracy: 0.8056 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 50 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 3.2889 | 1.0 | 8 | 3.2714 | 0.0556 | | 3.2492 | 2.0 | 16 | 3.2615 | 0.125 | | 3.2263 | 3.0 | 24 | 3.2034 | 0.125 | | 3.1271 | 4.0 | 32 | 3.1297 | 0.2083 | | 2.9592 | 5.0 | 40 | 3.0655 | 0.2639 | | 2.9414 | 6.0 | 48 | 2.9282 | 0.3472 | | 2.7337 | 7.0 | 56 | 2.8254 | 0.4028 | | 2.6683 | 8.0 | 64 | 2.6909 | 0.4583 | | 2.5837 | 9.0 | 72 | 2.5904 | 0.5417 | | 2.4566 | 10.0 | 80 | 2.5380 | 0.5833 | | 2.2188 | 11.0 | 88 | 2.4682 | 0.5417 | | 2.2885 | 12.0 | 96 | 2.3196 | 0.5833 | | 2.005 | 13.0 | 104 | 2.2824 | 0.6667 | | 1.9293 | 14.0 | 112 | 2.1967 | 0.6389 | | 1.8396 | 15.0 | 120 | 2.0287 | 0.7361 | | 1.7066 | 16.0 | 128 | 2.0357 | 0.7361 | | 1.6911 | 17.0 | 136 | 1.9670 | 0.7361 | | 1.6285 | 18.0 | 144 | 1.9186 | 0.7361 | | 1.6064 | 19.0 | 152 | 1.9239 | 0.6944 | | 1.6067 | 20.0 | 160 | 1.7723 | 0.7778 | | 1.4094 | 21.0 | 168 | 1.7701 | 0.75 | | 1.4664 | 22.0 | 176 | 1.7453 | 0.75 | | 1.3255 | 23.0 | 184 | 1.7103 | 0.7083 | | 1.3253 | 24.0 | 192 | 1.7216 | 0.7778 | | 1.2416 | 25.0 | 200 | 1.5770 | 0.7778 | | 1.1696 | 26.0 | 208 | 1.5099 | 0.7917 | | 1.1645 | 27.0 | 216 | 1.4630 | 0.7917 | | 1.0646 | 28.0 | 224 | 1.4989 | 0.7917 | | 1.0149 | 29.0 | 232 | 1.5569 | 0.7222 | | 1.0799 | 30.0 | 240 | 1.3602 | 0.8333 | | 0.9528 | 31.0 | 248 | 1.3782 | 0.8472 | | 1.0461 | 32.0 | 256 | 1.3698 | 0.8333 | | 0.9019 | 33.0 | 264 | 1.3251 | 0.8611 | | 0.9494 | 34.0 | 272 | 1.3586 | 0.8472 | | 0.9439 | 35.0 | 280 | 1.3526 | 0.8333 | | 0.9089 | 36.0 | 288 | 1.2728 | 0.8333 | | 0.8962 | 37.0 | 296 | 1.3006 | 0.7917 | | 0.9482 | 38.0 | 304 | 1.2592 | 0.8611 | | 0.8804 | 39.0 | 312 | 1.3527 | 0.7778 | | 0.8348 | 40.0 | 320 | 1.2759 | 0.8056 | | 0.7823 | 41.0 | 328 | 1.3071 | 0.8194 | | 0.8944 | 42.0 | 336 | 1.2428 | 0.8194 | | 0.9677 | 43.0 | 344 | 1.2903 | 0.7778 | | 0.9584 | 44.0 | 352 | 1.3119 | 0.7639 | | 0.8342 | 45.0 | 360 | 1.3502 | 0.7778 | | 0.7878 | 46.0 | 368 | 1.1941 | 0.8333 | | 0.7817 | 47.0 | 376 | 1.2670 | 0.8056 | | 0.812 | 48.0 | 384 | 1.2068 | 0.8194 | | 0.9714 | 49.0 | 392 | 1.3480 | 0.75 | | 0.9362 | 50.0 | 400 | 1.4028 | 0.7083 | ### Framework versions - Transformers 4.35.2 - Pytorch 2.1.0+cu118 - Datasets 2.15.0 - Tokenizers 0.15.0
LoneStriker/go-bruins-3.0bpw-h6-exl2-2
LoneStriker
2023-12-09T20:11:36Z
5
0
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "en", "dataset:Intel/orca_dpo_pairs", "license:mit", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2023-12-09T20:06:51Z
--- license: mit datasets: - Intel/orca_dpo_pairs language: - en pipeline_tag: text-generation --- ![image/png](https://cdn-uploads.huggingface.co/production/uploads/63a259d0f30c46422789d38d/vO3iATjO8ulfcakTltE4k.png) # Go Bruins - A Fine-tuned Language Model ## Updates December 9, 2023: Go-Bruins has placed **#6** overall and **#1** for 7 billion parameter models on the [Hugging Face Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)! ## Overview **Go Bruins** is a state-of-the-art language model fine-tuned on the Q-bert/MetaMath-Cybertron-Starling architecture. It's designed to push the boundaries of NLP applications, offering unparalleled performance in generating human-like text. ## Model Details - **Developer:** Ryan Witzman - **Base Model:** [Q-bert/MetaMath-Cybertron-Starling](https://huggingface.co/Q-bert/MetaMath-Cybertron-Starling) - **Fine-tuning Method:** Direct Preference Optimization (DPO) - **Training Steps:** 200 - **Language:** English - **License:** MIT ## Capabilities Go Bruins excels in a variety of NLP tasks, including but not limited to: - Text generation - Language understanding - Sentiment analysis ## Usage **Warning:** This model may output NSFW or illegal content. Use with caution and at your own risk. ### For Direct Use: ```python from transformers import pipeline model_name = "rwitz/go-bruins" inference_pipeline = pipeline('text-generation', model=model_name) input_text = "Your input text goes here" output = inference_pipeline(input_text) print(output) ``` GGUF Quantized Files are Located at [NyxKrage/go-bruins-GGUF](https://huggingface.co/NyxKrage/go-bruins-GGUF) ### Not Recommended For: - Illegal activities - Harassment - Professional advice or crisis situations ## Training and Evaluation Trained on a dataset from [Intel/orca_dpo_pairs](https://huggingface.co/datasets/Intel/orca_dpo_pairs), Go Bruins has shown promising improvements over its predecessor, Q-Bert. # Evaluations Go-Bruins is the SOTA 7B model. | Metric | Average | Arc Challenge | Hella Swag | MMLU | Truthful Q&A | Winogrande | GSM8k | |---------------|---------|---------------|------------|------|--------------|------------|-------| | **Score** | 71.86 | 69.11 | 86.53| 65.02 | 59.24 | 81.37 | 69.90 | Note: The original MMLU evaluation has been corrected to include 5-shot data rather than 1-shot data. ## Contact For any inquiries or feedback, reach out to Ryan Witzman on Discord: `rwitz_`. --- *This model card was created with care by Ryan Witzman.*
agvidit1/TinyBert-Berttoken-Toxigen-pretrain
agvidit1
2023-12-09T20:00:10Z
5
0
transformers
[ "transformers", "safetensors", "bert", "fill-mask", "generated_from_trainer", "base_model:google/bert_uncased_L-2_H-128_A-2", "base_model:finetune:google/bert_uncased_L-2_H-128_A-2", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2023-12-09T20:00:08Z
--- license: apache-2.0 base_model: google/bert_uncased_L-2_H-128_A-2 tags: - generated_from_trainer model-index: - name: TinyBert-Berttoken-Toxigen-pretrain results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # TinyBert-Berttoken-Toxigen-pretrain This model is a fine-tuned version of [google/bert_uncased_L-2_H-128_A-2](https://huggingface.co/google/bert_uncased_L-2_H-128_A-2) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3.0 ### Framework versions - Transformers 4.35.2 - Pytorch 2.1.1+cu121 - Datasets 2.15.0 - Tokenizers 0.15.0
LoneStriker/SG-Raccoon-Yi-200k-2.0-5.0bpw-h6-exl2
LoneStriker
2023-12-09T19:45:46Z
7
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "license:other", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2023-12-08T22:44:37Z
--- license: other license_name: yi-license license_link: https://huggingface.co/01-ai/Yi-34B/blob/main/LICENSE language: - en, pipeline_tag: conversational --- <p align="center"> <img src="https://cdn-uploads.huggingface.co/production/uploads/644ba0c76ebb3ebf7264dbe9/PWn9I-0XH7kSP_YXcyxIg.png" width="400"/> </p> --- # SG Raccoon 55B 2.0 The first 55B auto-regressive causal LM created by combining 2x finetuned llamafied [Yi 34b](https://huggingface.co/01-ai/Yi-34B) with *200K context* into one. # Prompting Format ``` SYSTEM: <ANY SYSTEM CONTEXT> USER: ASSISTANT: ``` # Merge process The models used in the merge are [Tess-M-v1.3](https://huggingface.co/migtissera/Tess-M-v1.3/) and [airoboros-3_1-yi-34b-200k](bhenrym14/airoboros-3_1-yi-34b-200k). The layer ranges used are as follows: ```yaml - model: bhenrym14/airoboros-3_1-yi-34b-200k layer_range: [0, 14] - model: migtissera/Tess-M-v1.3 layer_range: [7, 21] - model: bhenrym14/airoboros-3_1-yi-34b-200k layer_range: [15, 29] - model: migtissera/Tess-M-v1.3 layer_range: [22, 36] - model: bhenrym14/airoboros-3_1-yi-34b-200k layer_range: [30, 44] - model: migtissera/Tess-M-v1.3 layer_range: [37, 51] - model: bhenrym14/airoboros-3_1-yi-34b-200k layer_range: [45, 59] ``` # Tips Being a Yi model, try disabling the BOS token and/or running a lower temperature with MinP (and no other samplers) if output doesn't seem right. Yi tends to run "hot" by default. Sometimes the model "spells out" the stop token as </s> like Capybara, so you may need to add </s> as an additional stopping condition. # Benchmarks Coming soon. # Acknowledgements - Special thanks to [MSS](https://milanosamplesale.com/) for sponsoring this project - [@chargoddard](https://huggingface.co/chargoddard) for developing the framework used to merge the model - [mergekit](https://github.com/cg123/mergekit). - Great thanks to [@Undi95](https://huggingface.co/Undi95) for helping figuring out model merge options - Also credits to the [01-ai](https://huggingface.co/01-ai) team for their amazing models - This merged model is inspired by [Goliath 120B](https://huggingface.co/alpindale/goliath-120b)
LoneStriker/SG-Raccoon-Yi-200k-2.0-6.0bpw-h6-exl2
LoneStriker
2023-12-09T19:45:27Z
8
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "license:other", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2023-12-08T23:42:37Z
--- license: other license_name: yi-license license_link: https://huggingface.co/01-ai/Yi-34B/blob/main/LICENSE language: - en, pipeline_tag: conversational --- <p align="center"> <img src="https://cdn-uploads.huggingface.co/production/uploads/644ba0c76ebb3ebf7264dbe9/PWn9I-0XH7kSP_YXcyxIg.png" width="400"/> </p> --- # SG Raccoon 55B 2.0 The first 55B auto-regressive causal LM created by combining 2x finetuned llamafied [Yi 34b](https://huggingface.co/01-ai/Yi-34B) with *200K context* into one. # Prompting Format ``` SYSTEM: <ANY SYSTEM CONTEXT> USER: ASSISTANT: ``` # Merge process The models used in the merge are [Tess-M-v1.3](https://huggingface.co/migtissera/Tess-M-v1.3/) and [airoboros-3_1-yi-34b-200k](bhenrym14/airoboros-3_1-yi-34b-200k). The layer ranges used are as follows: ```yaml - model: bhenrym14/airoboros-3_1-yi-34b-200k layer_range: [0, 14] - model: migtissera/Tess-M-v1.3 layer_range: [7, 21] - model: bhenrym14/airoboros-3_1-yi-34b-200k layer_range: [15, 29] - model: migtissera/Tess-M-v1.3 layer_range: [22, 36] - model: bhenrym14/airoboros-3_1-yi-34b-200k layer_range: [30, 44] - model: migtissera/Tess-M-v1.3 layer_range: [37, 51] - model: bhenrym14/airoboros-3_1-yi-34b-200k layer_range: [45, 59] ``` # Tips Being a Yi model, try disabling the BOS token and/or running a lower temperature with MinP (and no other samplers) if output doesn't seem right. Yi tends to run "hot" by default. Sometimes the model "spells out" the stop token as </s> like Capybara, so you may need to add </s> as an additional stopping condition. # Benchmarks Coming soon. # Acknowledgements - Special thanks to [MSS](https://milanosamplesale.com/) for sponsoring this project - [@chargoddard](https://huggingface.co/chargoddard) for developing the framework used to merge the model - [mergekit](https://github.com/cg123/mergekit). - Great thanks to [@Undi95](https://huggingface.co/Undi95) for helping figuring out model merge options - Also credits to the [01-ai](https://huggingface.co/01-ai) team for their amazing models - This merged model is inspired by [Goliath 120B](https://huggingface.co/alpindale/goliath-120b)
LoneStriker/SG-Raccoon-Yi-200k-2.0-4.65bpw-h6-exl2
LoneStriker
2023-12-09T19:45:20Z
7
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "license:other", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2023-12-08T21:47:05Z
--- license: other license_name: yi-license license_link: https://huggingface.co/01-ai/Yi-34B/blob/main/LICENSE language: - en, pipeline_tag: conversational --- <p align="center"> <img src="https://cdn-uploads.huggingface.co/production/uploads/644ba0c76ebb3ebf7264dbe9/PWn9I-0XH7kSP_YXcyxIg.png" width="400"/> </p> --- # SG Raccoon 55B 2.0 The first 55B auto-regressive causal LM created by combining 2x finetuned llamafied [Yi 34b](https://huggingface.co/01-ai/Yi-34B) with *200K context* into one. # Prompting Format ``` SYSTEM: <ANY SYSTEM CONTEXT> USER: ASSISTANT: ``` # Merge process The models used in the merge are [Tess-M-v1.3](https://huggingface.co/migtissera/Tess-M-v1.3/) and [airoboros-3_1-yi-34b-200k](bhenrym14/airoboros-3_1-yi-34b-200k). The layer ranges used are as follows: ```yaml - model: bhenrym14/airoboros-3_1-yi-34b-200k layer_range: [0, 14] - model: migtissera/Tess-M-v1.3 layer_range: [7, 21] - model: bhenrym14/airoboros-3_1-yi-34b-200k layer_range: [15, 29] - model: migtissera/Tess-M-v1.3 layer_range: [22, 36] - model: bhenrym14/airoboros-3_1-yi-34b-200k layer_range: [30, 44] - model: migtissera/Tess-M-v1.3 layer_range: [37, 51] - model: bhenrym14/airoboros-3_1-yi-34b-200k layer_range: [45, 59] ``` # Tips Being a Yi model, try disabling the BOS token and/or running a lower temperature with MinP (and no other samplers) if output doesn't seem right. Yi tends to run "hot" by default. Sometimes the model "spells out" the stop token as </s> like Capybara, so you may need to add </s> as an additional stopping condition. # Benchmarks Coming soon. # Acknowledgements - Special thanks to [MSS](https://milanosamplesale.com/) for sponsoring this project - [@chargoddard](https://huggingface.co/chargoddard) for developing the framework used to merge the model - [mergekit](https://github.com/cg123/mergekit). - Great thanks to [@Undi95](https://huggingface.co/Undi95) for helping figuring out model merge options - Also credits to the [01-ai](https://huggingface.co/01-ai) team for their amazing models - This merged model is inspired by [Goliath 120B](https://huggingface.co/alpindale/goliath-120b)
tylercross/socrates_no_context
tylercross
2023-12-09T19:44:22Z
6
0
transformers
[ "transformers", "mistral", "text-generation", "generated_from_trainer", "base_model:mistralai/Mistral-7B-v0.1", "base_model:quantized:mistralai/Mistral-7B-v0.1", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "4-bit", "bitsandbytes", "region:us" ]
text-generation
2023-12-08T20:41:10Z
--- license: apache-2.0 base_model: mistralai/Mistral-7B-v0.1 tags: - generated_from_trainer model-index: - name: qlora-out results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> [<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/OpenAccess-AI-Collective/axolotl) # qlora-out This model is a fine-tuned version of [mistralai/Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1) on the None dataset. It achieves the following results on the evaluation set: - Loss: 2.3313 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 2 - eval_batch_size: 2 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 8 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 10 - num_epochs: 1 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 2.4833 | 0.13 | 1 | 2.5152 | | 2.5615 | 0.26 | 2 | 2.5078 | | 2.4965 | 0.39 | 3 | 2.4691 | | 2.3902 | 0.52 | 4 | 2.4249 | | 2.3629 | 0.65 | 5 | 2.3824 | | 2.2324 | 0.77 | 6 | 2.3441 | | 2.1907 | 0.9 | 7 | 2.3313 | ### Framework versions - Transformers 4.35.2 - Pytorch 2.1.0+cu121 - Datasets 2.15.0 - Tokenizers 0.15.0
ujjirox/mistral-7b-instruct
ujjirox
2023-12-09T19:35:01Z
4
0
transformers
[ "transformers", "pytorch", "mistral", "text-generation", "pretrained", "en", "arxiv:2310.06825", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2023-12-09T19:35:01Z
--- license: apache-2.0 pipeline_tag: text-generation language: - en tags: - pretrained inference: parameters: temperature: 0.7 --- # Model Card for Mistral-7B-v0.1 The Mistral-7B-v0.1 Large Language Model (LLM) is a pretrained generative text model with 7 billion parameters. Mistral-7B-v0.1 outperforms Llama 2 13B on all benchmarks we tested. For full details of this model please read our [paper](https://arxiv.org/abs/2310.06825) and [release blog post](https://mistral.ai/news/announcing-mistral-7b/). ## Model Architecture Mistral-7B-v0.1 is a transformer model, with the following architecture choices: - Grouped-Query Attention - Sliding-Window Attention - Byte-fallback BPE tokenizer ## Troubleshooting - If you see the following error: ``` KeyError: 'mistral' ``` - Or: ``` NotImplementedError: Cannot copy out of meta tensor; no data! ``` Ensure you are utilizing a stable version of Transformers, 4.34.0 or newer. ## Notice Mistral 7B is a pretrained base model and therefore does not have any moderation mechanisms. ## The Mistral AI Team Albert Jiang, Alexandre Sablayrolles, Arthur Mensch, Chris Bamford, Devendra Singh Chaplot, Diego de las Casas, Florian Bressand, Gianna Lengyel, Guillaume Lample, Lélio Renard Lavaud, Lucile Saulnier, Marie-Anne Lachaux, Pierre Stock, Teven Le Scao, Thibaut Lavril, Thomas Wang, Timothée Lacroix, William El Sayed.
MadFritz/PixelCopter
MadFritz
2023-12-09T19:29:08Z
0
0
null
[ "Pixelcopter-PLE-v0", "reinforce", "reinforcement-learning", "custom-implementation", "deep-rl-class", "model-index", "region:us" ]
reinforcement-learning
2023-12-09T15:26:08Z
--- tags: - Pixelcopter-PLE-v0 - reinforce - reinforcement-learning - custom-implementation - deep-rl-class model-index: - name: PixelCopter results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: Pixelcopter-PLE-v0 type: Pixelcopter-PLE-v0 metrics: - type: mean_reward value: 18.40 +/- 12.58 name: mean_reward verified: false --- # **Reinforce** Agent playing **Pixelcopter-PLE-v0** This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0** . To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
PatrickML/whisper_small
PatrickML
2023-12-09T19:18:59Z
5
0
transformers
[ "transformers", "tensorboard", "safetensors", "whisper", "automatic-speech-recognition", "generated_from_trainer", "base_model:openai/whisper-small", "base_model:finetune:openai/whisper-small", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2023-12-05T12:33:32Z
--- license: apache-2.0 base_model: openai/whisper-small tags: - generated_from_trainer metrics: - wer model-index: - name: whisper_small results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # whisper_small This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.4999 - Wer: 200.55 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - training_steps: 4000 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.913 | 1.0 | 313 | 0.5209 | 203.95 | | 0.6921 | 2.0 | 626 | 0.4936 | 316.4000 | | 0.4137 | 3.0 | 939 | 0.4999 | 200.55 | ### Framework versions - Transformers 4.36.0.dev0 - Pytorch 2.1.0+cu118 - Datasets 2.15.0 - Tokenizers 0.15.0
spongeyy/ppo-LunarLander-v2
spongeyy
2023-12-09T19:18:52Z
0
0
stable-baselines3
[ "stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2023-12-09T19:18:34Z
--- library_name: stable-baselines3 tags: - LunarLander-v2 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: PPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 metrics: - type: mean_reward value: 259.23 +/- 23.80 name: mean_reward verified: false --- # **PPO** Agent playing **LunarLander-v2** This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code ```python from stable_baselines3 import ... from huggingface_sb3 import load_from_hub ... ```
jaigouk/MetaMath-Cybertron-Starling-Ruby
jaigouk
2023-12-09T19:16:42Z
10
0
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "generated_from_trainer", "conversational", "base_model:Q-bert/MetaMath-Cybertron-Starling", "base_model:finetune:Q-bert/MetaMath-Cybertron-Starling", "license:cc-by-nc-4.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2023-12-09T18:10:26Z
--- license: cc-by-nc-4.0 base_model: Q-bert/MetaMath-Cybertron-Starling tags: - generated_from_trainer model-index: - name: MetaMath-Cybertron-Starling-Ruby results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # MetaMath-Cybertron-Starling-Ruby This model is a fine-tuned version of [Q-bert/MetaMath-Cybertron-Starling](https://huggingface.co/Q-bert/MetaMath-Cybertron-Starling) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 1.0319 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 64 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: constant - lr_scheduler_warmup_steps: 1 - num_epochs: 1 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 1.0913 | 0.39 | 50 | 1.0976 | | 1.0399 | 0.78 | 100 | 1.0319 | ### Framework versions - Transformers 4.35.0 - Pytorch 2.1.0+cu118 - Datasets 2.14.5 - Tokenizers 0.14.1
Yntec/GimmeDatDing
Yntec
2023-12-09T19:01:46Z
996
5
diffusers
[ "diffusers", "safetensors", "Anime", "Cartoons", "Cute", "stable-diffusion", "stable-diffusion-diffusers", "text-to-image", "license:creativeml-openrail-m", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
2023-12-08T07:31:09Z
--- license: creativeml-openrail-m library_name: diffusers pipeline_tag: text-to-image tags: - Anime - Cartoons - Cute - stable-diffusion - stable-diffusion-diffusers - diffusers - text-to-image --- # GimmeDatDing Samples and prompt: ![Sample 768](https://cdn-uploads.huggingface.co/production/uploads/63239b8370edc53f51cd5d42/O-bCEgNxkE0jkza3-pnWF.png) ![Sample 512](https://cdn-uploads.huggingface.co/production/uploads/63239b8370edc53f51cd5d42/woMBJfeqlIfy2PKnPhRlu.png) highquality, masterpiece, 1girl, Chi-Chi, :D, close up, smile, arms up, pink helmet, black hair, black eyes, blush, white teeth, bikini armor, aqua cape, pink gloves, pink boots, cleavage. cave, rock, mountain. blue collar
taysmith/Taxi-v3
taysmith
2023-12-09T19:01:02Z
0
0
null
[ "Taxi-v3", "q-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
reinforcement-learning
2023-12-09T19:00:59Z
--- tags: - Taxi-v3 - q-learning - reinforcement-learning - custom-implementation model-index: - name: Taxi-v3 results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: Taxi-v3 type: Taxi-v3 metrics: - type: mean_reward value: 7.52 +/- 2.73 name: mean_reward verified: false --- # **Q-Learning** Agent playing1 **Taxi-v3** This is a trained model of a **Q-Learning** agent playing **Taxi-v3** . ## Usage ```python model = load_from_hub(repo_id="taysmith/Taxi-v3", filename="q-learning.pkl") # Don't forget to check if you need to add additional attributes (is_slippery=False etc) env = gym.make(model["env_id"]) ```
linqus/dummy
linqus
2023-12-09T18:59:59Z
4
0
transformers
[ "transformers", "safetensors", "camembert", "fill-mask", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2023-12-09T18:34:33Z
--- # For reference on model card metadata, see the spec: https://github.com/huggingface/hub-docs/blob/main/modelcard.md?plain=1 # Doc / guide: https://huggingface.co/docs/hub/model-cards {} --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> This modelcard aims to be a base template for new models. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/modelcard_template.md?plain=1). ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
Ekkologico/Llama-2-13b-chat-python_code_instructions_nampdn-ai_tiny-codes
Ekkologico
2023-12-09T18:43:57Z
2
0
peft
[ "peft", "safetensors", "arxiv:1910.09700", "base_model:meta-llama/Llama-2-13b-chat-hf", "base_model:adapter:meta-llama/Llama-2-13b-chat-hf", "region:us" ]
null
2023-11-17T17:19:53Z
--- library_name: peft base_model: meta-llama/Llama-2-13b-chat-hf --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ### Framework versions - PEFT 0.7.1.dev0
bodam/model_lora6
bodam
2023-12-09T18:27:27Z
0
0
diffusers
[ "diffusers", "safetensors", "stable-diffusion", "stable-diffusion-diffusers", "text-to-image", "lora", "base_model:runwayml/stable-diffusion-v1-5", "base_model:adapter:runwayml/stable-diffusion-v1-5", "license:creativeml-openrail-m", "region:us" ]
text-to-image
2023-12-09T18:12:02Z
--- license: creativeml-openrail-m base_model: runwayml/stable-diffusion-v1-5 instance_prompt: a olis chair tags: - stable-diffusion - stable-diffusion-diffusers - text-to-image - diffusers - lora inference: true --- # LoRA DreamBooth - bodam/model_lora6 These are LoRA adaption weights for runwayml/stable-diffusion-v1-5. The weights were trained on a olis chair using [DreamBooth](https://dreambooth.github.io/). You can find some example images in the following. ![img_0](./image_0.png) ![img_1](./image_1.png) ![img_2](./image_2.png) ![img_3](./image_3.png)
praxsri/ppo-LunarLander-v2
praxsri
2023-12-09T18:21:29Z
0
0
stable-baselines3
[ "stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2023-12-09T18:21:09Z
--- library_name: stable-baselines3 tags: - LunarLander-v2 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: PPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 metrics: - type: mean_reward value: 248.97 +/- 19.64 name: mean_reward verified: false --- # **PPO** Agent playing **LunarLander-v2** This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code ```python from stable_baselines3 import ... from huggingface_sb3 import load_from_hub ... ```
SoniaBusi/architettura
SoniaBusi
2023-12-09T18:09:51Z
0
0
null
[ "license:creativeml-openrail-m", "region:us" ]
null
2023-12-09T18:09:51Z
--- license: creativeml-openrail-m ---
cherry06/my-pet-dog
cherry06
2023-12-09T18:08:56Z
6
0
diffusers
[ "diffusers", "safetensors", "NxtWave-GenAI-Webinar", "text-to-image", "stable-diffusion", "license:creativeml-openrail-m", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
2023-12-09T18:05:00Z
--- license: creativeml-openrail-m tags: - NxtWave-GenAI-Webinar - text-to-image - stable-diffusion --- ### My-Pet-Dog Dreambooth model trained by cherry06 following the "Build your own Gen AI model" session by NxtWave. Project Submission Code: GoX19932gAS Sample pictures of this concept: ![0](https://huggingface.co/cherry06/my-pet-dog/resolve/main/sample_images/vwc_(2)_(2).jpg) ![1](https://huggingface.co/cherry06/my-pet-dog/resolve/main/sample_images/vwc_(7)_(2).jpg) ![2](https://huggingface.co/cherry06/my-pet-dog/resolve/main/sample_images/vwc_(5)_(2).jpg) ![3](https://huggingface.co/cherry06/my-pet-dog/resolve/main/sample_images/vwc_(8)_(2).jpg) ![4](https://huggingface.co/cherry06/my-pet-dog/resolve/main/sample_images/vwc_(1)_(1).jpg) ![5](https://huggingface.co/cherry06/my-pet-dog/resolve/main/sample_images/vwc_(9)_(2).jpg) ![6](https://huggingface.co/cherry06/my-pet-dog/resolve/main/sample_images/vwc_(3)_(2).jpg) ![7](https://huggingface.co/cherry06/my-pet-dog/resolve/main/sample_images/vwc_(4)_(2).jpg) ![8](https://huggingface.co/cherry06/my-pet-dog/resolve/main/sample_images/vwc_(6)_(2).jpg)
alac/Waxwing-Storytelling-70B-exl2
alac
2023-12-09T18:01:40Z
0
1
null
[ "llama-2", "instruct", "instruction", "writing", "story", "text-generation", "en", "license:other", "region:us" ]
text-generation
2023-12-09T17:59:05Z
--- language: - en tags: - llama-2 - instruct - instruction - writing - story pipeline_tag: text-generation license: other --- # Waxwing-Storytelling-70B-exl2 model card Waxwing is a storytelling lora for Llama 2 70B. - Guide the story with Waxwing's turn-based instruction system. - Tailor the feel of your story using style tags. - Experience storytelling free of ChatGPT's idiosyncrasies, thanks to a "human-generated" dataset of public domain writing. Waxwing avoids GPT-isms like positivity bias, "bond" emphasis, rushed endings and exaggerated stylistic tics. Waxwing is available: - LoRA: As a LoRA on the [main branch](https://huggingface.co/alac/Waxwing-Storytelling-70B-LoRA/tree/main) and can be applied at runtime on any variant of the Llama 2 70B base model. - 16fp model: Merged into the base Llama 2 model, in full precision in the [16fp](https://huggingface.co/alac/Waxwing-Storytelling-70B-LoRA/tree/16fp) branch. - Quantized for used with Exllama 2: - [2.5bpw](https://huggingface.co/alac/Waxwing-Storytelling-70B-LoRA/tree/2.5bpw) - [3.0bpw](https://huggingface.co/alac/Waxwing-Storytelling-70B-LoRA/tree/3.0bpw) - [4.65bpw](https://huggingface.co/alac/Waxwing-Storytelling-70B-LoRA/tree/4.65bpw) - [6.0bpw](https://huggingface.co/alac/Waxwing-Storytelling-70B-LoRA/tree/6.0bpw) - [8.0bpw](https://huggingface.co/alac/Waxwing-Storytelling-70B-LoRA/tree/8.0bpw) By using this model, you take full responsibility for anything done with its outputs. ## Model Details ### Model Description - **Developed by:** alac - **Model Type:** QLoRA - **Finetuned from model:** Llama-2 70B - **Language(s):** English ### Dataset Waxwing was trained with a small dataset gathered from public domain writing. The exact dataset will remain private, but the code used to generate prompts and metadata is available on [github](https://github.com/alac/txt_to_dataset). Upstage's [SOLAR](https://huggingface.co/upstage/SOLAR-0-70b-16bit) model was used to tag the dataset. ### Prompt Template ``` ### System: A chat between a user and a writing assistant. {context} ### User: {style tags} Write a scene where: {events that should happen in the next scene} ### Assistant: {output} ``` `context` is an optional story synopsis. `style tags` should be a string along the lines of: ``` Tone: {list of tones}. Writing style: {list of writing styles}. Written with {slow|medium|fast} pacing, in moment to moment detail, in {abstract|selective|vivid sensory} detail, from a {First|Third Person (Character)} perspective. ``` The exact values it was trained on are in the `dataset_tags.json` file. Anecdotally, it works better with a subset of the style tags used (`Tone: tense`) or with tags that are complementary (`Tone: tense, mysterious. Writing style: dramatic. Written in abstract detail.`). It's unclear how well Waxwing responds to tags that it was not trained on (e.g. 'genre'). For SillyTavern users, the `style tags` work well in the "Author's Note" field at depth 1. User messages should begin with `Write a scene where: `; to continue a scene, just type `continue`. Most testing was done using the [Genesis](https://github.com/SillyTavern/SillyTavern/blob/8e73882c9ba7301c9163befbe445686a79d4a9a8/public/TextGen%20Settings/NovelAI%20(Genesis).settings) preset. ### Training Waxwing was trained on a single machine with 72GB of VRAM. The training parameters are available in the `training_parameters.json` file of the main branch. The software used to train was FartyPants' [Training_PRO](https://github.com/FartyPants/Training_PRO) extension for the Oobabooga Text Generation WebUI.
Weni/ZeroShot-3.1.0-Mistral-7b-Multilanguage-3.0.3
Weni
2023-12-09T17:50:32Z
0
0
peft
[ "peft", "pytorch", "safetensors", "mistral", "region:us" ]
null
2023-12-09T14:16:02Z
--- library_name: peft --- ## Training procedure ### Framework versions - PEFT 0.4.0
hkivancoral/smids_3x_deit_base_adamax_00001_fold5
hkivancoral
2023-12-09T17:31:17Z
5
0
transformers
[ "transformers", "pytorch", "vit", "image-classification", "generated_from_trainer", "dataset:imagefolder", "base_model:facebook/deit-base-patch16-224", "base_model:finetune:facebook/deit-base-patch16-224", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-classification
2023-12-09T16:59:07Z
--- license: apache-2.0 base_model: facebook/deit-base-patch16-224 tags: - generated_from_trainer datasets: - imagefolder metrics: - accuracy model-index: - name: smids_3x_deit_base_adamax_00001_fold5 results: - task: name: Image Classification type: image-classification dataset: name: imagefolder type: imagefolder config: default split: test args: default metrics: - name: Accuracy type: accuracy value: 0.9016666666666666 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # smids_3x_deit_base_adamax_00001_fold5 This model is a fine-tuned version of [facebook/deit-base-patch16-224](https://huggingface.co/facebook/deit-base-patch16-224) on the imagefolder dataset. It achieves the following results on the evaluation set: - Loss: 0.6560 - Accuracy: 0.9017 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 50 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:-----:|:---------------:|:--------:| | 0.3528 | 1.0 | 225 | 0.3515 | 0.845 | | 0.2402 | 2.0 | 450 | 0.2867 | 0.88 | | 0.1984 | 3.0 | 675 | 0.2729 | 0.8917 | | 0.0996 | 4.0 | 900 | 0.2824 | 0.8917 | | 0.0553 | 5.0 | 1125 | 0.2965 | 0.89 | | 0.0276 | 6.0 | 1350 | 0.3082 | 0.8833 | | 0.0633 | 7.0 | 1575 | 0.3636 | 0.8917 | | 0.0425 | 8.0 | 1800 | 0.3934 | 0.8917 | | 0.0389 | 9.0 | 2025 | 0.4319 | 0.8867 | | 0.0013 | 10.0 | 2250 | 0.4665 | 0.885 | | 0.002 | 11.0 | 2475 | 0.4939 | 0.89 | | 0.0291 | 12.0 | 2700 | 0.5069 | 0.8983 | | 0.0096 | 13.0 | 2925 | 0.5116 | 0.8967 | | 0.0004 | 14.0 | 3150 | 0.5365 | 0.89 | | 0.0003 | 15.0 | 3375 | 0.5451 | 0.8933 | | 0.0001 | 16.0 | 3600 | 0.5523 | 0.9017 | | 0.0001 | 17.0 | 3825 | 0.5663 | 0.895 | | 0.0002 | 18.0 | 4050 | 0.6978 | 0.88 | | 0.0158 | 19.0 | 4275 | 0.5838 | 0.8983 | | 0.0001 | 20.0 | 4500 | 0.6267 | 0.8917 | | 0.0001 | 21.0 | 4725 | 0.6223 | 0.8833 | | 0.0001 | 22.0 | 4950 | 0.5930 | 0.8983 | | 0.0001 | 23.0 | 5175 | 0.6068 | 0.8967 | | 0.01 | 24.0 | 5400 | 0.5979 | 0.895 | | 0.0001 | 25.0 | 5625 | 0.6216 | 0.885 | | 0.0001 | 26.0 | 5850 | 0.6186 | 0.89 | | 0.0001 | 27.0 | 6075 | 0.6718 | 0.88 | | 0.0001 | 28.0 | 6300 | 0.6180 | 0.8917 | | 0.0037 | 29.0 | 6525 | 0.6199 | 0.895 | | 0.0 | 30.0 | 6750 | 0.6476 | 0.8933 | | 0.0 | 31.0 | 6975 | 0.6341 | 0.8933 | | 0.0 | 32.0 | 7200 | 0.6370 | 0.895 | | 0.0032 | 33.0 | 7425 | 0.6400 | 0.895 | | 0.0 | 34.0 | 7650 | 0.6373 | 0.8933 | | 0.0026 | 35.0 | 7875 | 0.6425 | 0.8967 | | 0.0 | 36.0 | 8100 | 0.6390 | 0.8983 | | 0.0053 | 37.0 | 8325 | 0.6460 | 0.895 | | 0.0 | 38.0 | 8550 | 0.6514 | 0.8933 | | 0.0 | 39.0 | 8775 | 0.6474 | 0.8983 | | 0.0 | 40.0 | 9000 | 0.6511 | 0.8933 | | 0.0 | 41.0 | 9225 | 0.6568 | 0.8933 | | 0.0 | 42.0 | 9450 | 0.6574 | 0.8933 | | 0.0026 | 43.0 | 9675 | 0.6562 | 0.8967 | | 0.0 | 44.0 | 9900 | 0.6548 | 0.8983 | | 0.0 | 45.0 | 10125 | 0.6524 | 0.895 | | 0.0 | 46.0 | 10350 | 0.6552 | 0.8983 | | 0.0 | 47.0 | 10575 | 0.6559 | 0.8983 | | 0.0 | 48.0 | 10800 | 0.6560 | 0.8983 | | 0.0 | 49.0 | 11025 | 0.6556 | 0.9017 | | 0.0 | 50.0 | 11250 | 0.6560 | 0.9017 | ### Framework versions - Transformers 4.32.1 - Pytorch 2.1.0+cu121 - Datasets 2.12.0 - Tokenizers 0.13.2
jhsmith/finetuned_mixed3_1epoch
jhsmith
2023-12-09T17:28:16Z
5
0
sentence-transformers
[ "sentence-transformers", "safetensors", "mpnet", "feature-extraction", "sentence-similarity", "autotrain_compatible", "endpoints_compatible", "region:us" ]
sentence-similarity
2023-12-09T17:27:44Z
--- pipeline_tag: sentence-similarity tags: - sentence-transformers - feature-extraction - sentence-similarity --- # {MODEL_NAME} This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search. <!--- Describe your model here --> ## Usage (Sentence-Transformers) Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed: ``` pip install -U sentence-transformers ``` Then you can use the model like this: ```python from sentence_transformers import SentenceTransformer sentences = ["This is an example sentence", "Each sentence is converted"] model = SentenceTransformer('{MODEL_NAME}') embeddings = model.encode(sentences) print(embeddings) ``` ## Evaluation Results <!--- Describe how your model was evaluated --> For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME}) ## Training The model was trained with the parameters: **DataLoader**: `torch.utils.data.dataloader.DataLoader` of length 1732 with parameters: ``` {'batch_size': 10, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'} ``` **Loss**: `sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss` Parameters of the fit()-Method: ``` { "epochs": 1, "evaluation_steps": 200, "evaluator": "sentence_transformers.evaluation.BinaryClassificationEvaluator.BinaryClassificationEvaluator", "max_grad_norm": 1, "optimizer_class": "<class 'torch.optim.adamw.AdamW'>", "optimizer_params": { "lr": 0.0001 }, "scheduler": "warmuplinear", "steps_per_epoch": null, "warmup_steps": 173, "weight_decay": 0.01 } ``` ## Full Model Architecture ``` SentenceTransformer( (0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: MPNetModel (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False}) (2): Normalize() ) ``` ## Citing & Authors <!--- Describe where people can find more information -->
Shaleen123/mistrallite-medical-full
Shaleen123
2023-12-09T17:26:13Z
1
0
peft
[ "peft", "safetensors", "arxiv:1910.09700", "base_model:amazon/MistralLite", "base_model:adapter:amazon/MistralLite", "region:us" ]
null
2023-12-05T16:08:27Z
--- library_name: peft base_model: amazon/MistralLite --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Data Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ## Training procedure The following `bitsandbytes` quantization config was used during training: - quant_method: bitsandbytes - load_in_8bit: False - load_in_4bit: True - llm_int8_threshold: 6.0 - llm_int8_skip_modules: None - llm_int8_enable_fp32_cpu_offload: False - llm_int8_has_fp16_weight: False - bnb_4bit_quant_type: nf4 - bnb_4bit_use_double_quant: True - bnb_4bit_compute_dtype: bfloat16 ### Framework versions - PEFT 0.7.0
alac/Waxwing-Storytelling-70B-LoRA
alac
2023-12-09T17:22:47Z
5
4
null
[ "llama-2", "instruct", "instruction", "writing", "story", "text-generation", "en", "license:other", "region:us" ]
text-generation
2023-12-09T02:54:30Z
--- language: - en tags: - llama-2 - instruct - instruction - writing - story pipeline_tag: text-generation license: other --- # Waxwing-Storytelling-70B-LoRA model card Waxwing is a storytelling lora for Llama 2 70B. - Guide the story with Waxwing's turn-based instruction system. - Tailor the feel of your story using style tags. - Experience storytelling free of ChatGPT's idiosyncrasies, thanks to a "human-generated" dataset of public domain writing. Waxwing avoids GPT-isms like positivity bias, "bond" emphasis, rushed endings and exaggerated stylistic tics. Waxwing is available: - LoRA: As a LoRA on this branch and can be applied at runtime on any variant of the Llama 2 70B base model. - 16fp model: Merged into the base Llama 2 model, in full precision in the [16fp](https://huggingface.co/alac/Waxwing-Storytelling-70B-LoRA/tree/16fp) branch. - Quantized for used with Exllama 2: - [2.5bpw](https://huggingface.co/alac/Waxwing-Storytelling-70B-LoRA/tree/2.5bpw) - [3.0bpw](https://huggingface.co/alac/Waxwing-Storytelling-70B-LoRA/tree/3.0bpw) - [4.65bpw](https://huggingface.co/alac/Waxwing-Storytelling-70B-LoRA/tree/4.65bpw) - [6.0bpw](https://huggingface.co/alac/Waxwing-Storytelling-70B-LoRA/tree/6.0bpw) - [8.0bpw](https://huggingface.co/alac/Waxwing-Storytelling-70B-LoRA/tree/8.0bpw) By using this model, you take full responsibility for anything done with its outputs. ## Model Details ### Model Description - **Developed by:** alac - **Model Type:** QLoRA - **Finetuned from model:** Llama-2 70B - **Language(s):** English ### Dataset Waxwing was trained with a small dataset gathered from public domain writing. The exact dataset will remain private, but the code used to generate prompts and metadata is available on [github](https://github.com/alac/txt_to_dataset). Upstage's [SOLAR](https://huggingface.co/upstage/SOLAR-0-70b-16bit) model was used to tag the dataset. ### Prompt Template ``` ### System: A chat between a user and a writing assistant. {context} ### User: {style tags} Write a scene where: {events that should happen in the next scene} ### Assistant: {output} ``` `context` is an optional story synopsis. `style tags` should be a string along the lines of: ``` Tone: {list of tones}. Writing style: {list of writing styles}. Written with {slow|medium|fast} pacing, in moment to moment detail, in {abstract|selective|vivid sensory} detail, from a {First|Third Person (Character)} perspective. ``` The exact values it was trained on are in the `dataset_tags.json` file. Anecdotally, it works better with a subset of the style tags used (`Tone: tense`) or with tags that are complementary (`Tone: tense, mysterious. Writing style: dramatic. Written in abstract detail.`). It's unclear how well Waxwing responds to tags that it was not trained on (e.g. 'genre'). For SillyTavern users, the `style tags` work well in the "Author's Note" field at depth 1. User messages should begin with `Write a scene where: `; to continue a scene, just type `continue`. Most testing was done using the [Genesis](https://github.com/SillyTavern/SillyTavern/blob/8e73882c9ba7301c9163befbe445686a79d4a9a8/public/TextGen%20Settings/NovelAI%20(Genesis).settings) preset. ### Training Waxwing was trained on a single machine with 72GB of VRAM. The training parameters are available in the `training_parameters.json` file of the main branch. The software used to train was FartyPants' [Training_PRO](https://github.com/FartyPants/Training_PRO) extension for the Oobabooga Text Generation WebUI.
Korventenn/fr_en-t5-large
Korventenn
2023-12-09T17:20:24Z
14
2
transformers
[ "transformers", "pytorch", "safetensors", "mt5", "text2text-generation", "fr", "en", "dataset:giga_fren", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text2text-generation
2023-05-24T20:51:06Z
--- license: apache-2.0 datasets: - giga_fren language: - fr - en --- # Model Card for fr_en-t5-large <!-- Provide a quick summary of what the model is/does. --> This model has been optimized for French and English language processing while minimizing overall size. To achieve this, I only retained relevant parameters and tokens specific to these two languages, ensuring that performance remains as good as the original mt5. ## Model Details I used a method outlined in a [blog post](https://towardsdatascience.com/how-to-adapt-a-multilingual-t5-model-for-a-single-language-b9f94f3d9c90) by David Dale to downsize the multilingual T5 model for French and English use cases specifically. By utilizing the giga_fren dataset, I was able to successfully reduce the total number of tokens and decrease both the model and tokenizer sizes by 38% and 80% respectively. ### Model Description - **Developed by:** Korventenn - **Model type:** mt5 - **Language(s) (NLP):** French and English - **License:** Apache 2.0 - **Generated from model:** mt5-large ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** https://colab.research.google.com/drive/1cDWtO5BqWMm_nxnM7lHmPEKMWMejHdBJ#scrollTo=s6ebzRxA1VGv ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> You can use the raw model for any sequence to sequence task that is focused on either french, english or both. ## How to Get Started with the Model Use the code below to get started with the model. ``` from transformers import AutoTokenizer, AutoModelForSeq2SeqLM tokenizer = AutoTokenizer.from_pretrained("Korventenn/fr_en-t5-large") model = AutoModelForSeq2SeqLM.from_pretrained("Korventenn/fr_en-t5-large") ``` ### Training Data <!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [giga_fren](https://huggingface.co/datasets/giga_fren)
bodam/model_lora5
bodam
2023-12-09T17:16:29Z
0
0
diffusers
[ "diffusers", "safetensors", "stable-diffusion", "stable-diffusion-diffusers", "text-to-image", "lora", "base_model:runwayml/stable-diffusion-v1-5", "base_model:adapter:runwayml/stable-diffusion-v1-5", "license:creativeml-openrail-m", "region:us" ]
text-to-image
2023-12-09T17:10:15Z
--- license: creativeml-openrail-m base_model: runwayml/stable-diffusion-v1-5 instance_prompt: a olis chair tags: - stable-diffusion - stable-diffusion-diffusers - text-to-image - diffusers - lora inference: true --- # LoRA DreamBooth - bodam/model_lora5 These are LoRA adaption weights for runwayml/stable-diffusion-v1-5. The weights were trained on a olis chair using [DreamBooth](https://dreambooth.github.io/). You can find some example images in the following. ![img_0](./image_0.png) ![img_1](./image_1.png) ![img_2](./image_2.png) ![img_3](./image_3.png)
gonxatroll/q-taxi-v3
gonxatroll
2023-12-09T17:11:37Z
0
0
null
[ "Taxi-v3", "q-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
reinforcement-learning
2023-12-09T17:11:36Z
--- tags: - Taxi-v3 - q-learning - reinforcement-learning - custom-implementation model-index: - name: q-taxi-v3 results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: Taxi-v3 type: Taxi-v3 metrics: - type: mean_reward value: 7.52 +/- 2.71 name: mean_reward verified: false --- # **Q-Learning** Agent playing1 **Taxi-v3** This is a trained model of a **Q-Learning** agent playing **Taxi-v3** . ## Usage ```python model = load_from_hub(repo_id="gonxatroll/q-taxi-v3", filename="q-learning.pkl") # Don't forget to check if you need to add additional attributes (is_slippery=False etc) env = gym.make(model["env_id"]) ```
jhsmith/finetuned_random6_1epoch
jhsmith
2023-12-09T17:09:46Z
5
0
sentence-transformers
[ "sentence-transformers", "safetensors", "mpnet", "feature-extraction", "sentence-similarity", "autotrain_compatible", "endpoints_compatible", "region:us" ]
sentence-similarity
2023-12-09T05:12:20Z
--- pipeline_tag: sentence-similarity tags: - sentence-transformers - feature-extraction - sentence-similarity --- # {MODEL_NAME} This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search. <!--- Describe your model here --> ## Usage (Sentence-Transformers) Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed: ``` pip install -U sentence-transformers ``` Then you can use the model like this: ```python from sentence_transformers import SentenceTransformer sentences = ["This is an example sentence", "Each sentence is converted"] model = SentenceTransformer('{MODEL_NAME}') embeddings = model.encode(sentences) print(embeddings) ``` ## Evaluation Results <!--- Describe how your model was evaluated --> For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME}) ## Training The model was trained with the parameters: **DataLoader**: `torch.utils.data.dataloader.DataLoader` of length 1754 with parameters: ``` {'batch_size': 10, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'} ``` **Loss**: `sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss` Parameters of the fit()-Method: ``` { "epochs": 1, "evaluation_steps": 200, "evaluator": "sentence_transformers.evaluation.BinaryClassificationEvaluator.BinaryClassificationEvaluator", "max_grad_norm": 1, "optimizer_class": "<class 'torch.optim.adamw.AdamW'>", "optimizer_params": { "lr": 0.0001 }, "scheduler": "warmuplinear", "steps_per_epoch": null, "warmup_steps": 175, "weight_decay": 0.01 } ``` ## Full Model Architecture ``` SentenceTransformer( (0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: MPNetModel (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False}) (2): Normalize() ) ``` ## Citing & Authors <!--- Describe where people can find more information -->
openskyml/overall-v1
openskyml
2023-12-09T17:04:00Z
22
4
diffusers
[ "diffusers", "onnx", "safetensors", "text-to-image", "diffusion", "overall", "openskyml", "license:other", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
2023-12-09T09:40:03Z
--- inference: parameters: steps: 50 guidance_scale: 7 width: 512 height: 512 license: other license_name: overall-license license_link: https://hf.co/openskyml/overall-v1/blob/main/OVERALL-LICENSE.txt tags: - text-to-image - diffusion - overall - openskyml library_name: diffusers pipeline_tag: text-to-image --- # Overall V1 <img src="https://huggingface.co/ehristoforu/6786578687/resolve/main/assets/banner/example.png" alt="overall-banner" style="width:85%; height:85%;"> Overall is not just an ordinary Diffusion model, it was trained on a huge image dataset, and you can use it freely right now! ## Model Details ### Model Description The model was trained using Dreambooth Training based on the Stable Diffusion 1.5 model. - **Developed by:** [OpenSkyML](https://hf.co/openskyml) - **Model type:** [Text-to-Image](https://huggingface.co/models?pipeline_tag=text-to-image&sort=trending) - **License:** [Overall License](https://huggingface.co/openskyml/overall-v1/blob/main/OVERALL-LICENSE.txt) #### Read more on [our paper](https://huggingface.co/openskyml/overall-v1/blob/main/assets/papers/en.md)
hkivancoral/smids_3x_deit_base_adamax_00001_fold4
hkivancoral
2023-12-09T16:58:19Z
5
0
transformers
[ "transformers", "pytorch", "vit", "image-classification", "generated_from_trainer", "dataset:imagefolder", "base_model:facebook/deit-base-patch16-224", "base_model:finetune:facebook/deit-base-patch16-224", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-classification
2023-12-09T16:26:14Z
--- license: apache-2.0 base_model: facebook/deit-base-patch16-224 tags: - generated_from_trainer datasets: - imagefolder metrics: - accuracy model-index: - name: smids_3x_deit_base_adamax_00001_fold4 results: - task: name: Image Classification type: image-classification dataset: name: imagefolder type: imagefolder config: default split: test args: default metrics: - name: Accuracy type: accuracy value: 0.88 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # smids_3x_deit_base_adamax_00001_fold4 This model is a fine-tuned version of [facebook/deit-base-patch16-224](https://huggingface.co/facebook/deit-base-patch16-224) on the imagefolder dataset. It achieves the following results on the evaluation set: - Loss: 0.9973 - Accuracy: 0.88 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 50 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:-----:|:---------------:|:--------:| | 0.3446 | 1.0 | 225 | 0.3635 | 0.845 | | 0.1918 | 2.0 | 450 | 0.3231 | 0.8667 | | 0.1259 | 3.0 | 675 | 0.3259 | 0.8767 | | 0.1244 | 4.0 | 900 | 0.3347 | 0.8767 | | 0.0882 | 5.0 | 1125 | 0.3474 | 0.875 | | 0.0401 | 6.0 | 1350 | 0.4001 | 0.8767 | | 0.0341 | 7.0 | 1575 | 0.4822 | 0.8717 | | 0.0545 | 8.0 | 1800 | 0.4972 | 0.8783 | | 0.0266 | 9.0 | 2025 | 0.6098 | 0.8633 | | 0.0023 | 10.0 | 2250 | 0.5905 | 0.87 | | 0.0071 | 11.0 | 2475 | 0.6351 | 0.8733 | | 0.0009 | 12.0 | 2700 | 0.6746 | 0.8767 | | 0.0003 | 13.0 | 2925 | 0.7065 | 0.8833 | | 0.0003 | 14.0 | 3150 | 0.7353 | 0.8667 | | 0.0002 | 15.0 | 3375 | 0.7491 | 0.8733 | | 0.0001 | 16.0 | 3600 | 0.7500 | 0.8717 | | 0.0001 | 17.0 | 3825 | 0.7514 | 0.8733 | | 0.0001 | 18.0 | 4050 | 0.7869 | 0.8817 | | 0.0001 | 19.0 | 4275 | 0.7968 | 0.8717 | | 0.0001 | 20.0 | 4500 | 0.8160 | 0.8767 | | 0.0 | 21.0 | 4725 | 0.8138 | 0.8783 | | 0.0 | 22.0 | 4950 | 0.8558 | 0.87 | | 0.0001 | 23.0 | 5175 | 0.8592 | 0.87 | | 0.0 | 24.0 | 5400 | 0.8588 | 0.8717 | | 0.0001 | 25.0 | 5625 | 0.8863 | 0.8767 | | 0.0 | 26.0 | 5850 | 0.8795 | 0.88 | | 0.0 | 27.0 | 6075 | 0.8854 | 0.8783 | | 0.0 | 28.0 | 6300 | 0.8854 | 0.8767 | | 0.0 | 29.0 | 6525 | 0.8995 | 0.875 | | 0.0 | 30.0 | 6750 | 0.9074 | 0.8767 | | 0.0 | 31.0 | 6975 | 0.9134 | 0.875 | | 0.0 | 32.0 | 7200 | 0.9303 | 0.8767 | | 0.0 | 33.0 | 7425 | 0.9263 | 0.875 | | 0.0 | 34.0 | 7650 | 0.9402 | 0.8783 | | 0.0 | 35.0 | 7875 | 0.9446 | 0.8783 | | 0.0 | 36.0 | 8100 | 0.9522 | 0.88 | | 0.0 | 37.0 | 8325 | 0.9608 | 0.8783 | | 0.0 | 38.0 | 8550 | 0.9621 | 0.88 | | 0.0 | 39.0 | 8775 | 0.9583 | 0.8733 | | 0.0 | 40.0 | 9000 | 0.9729 | 0.8783 | | 0.0 | 41.0 | 9225 | 0.9763 | 0.88 | | 0.0026 | 42.0 | 9450 | 0.9780 | 0.88 | | 0.0 | 43.0 | 9675 | 0.9802 | 0.8783 | | 0.0024 | 44.0 | 9900 | 0.9867 | 0.88 | | 0.0 | 45.0 | 10125 | 0.9887 | 0.8783 | | 0.0 | 46.0 | 10350 | 0.9913 | 0.88 | | 0.0 | 47.0 | 10575 | 0.9944 | 0.88 | | 0.0 | 48.0 | 10800 | 0.9959 | 0.88 | | 0.0 | 49.0 | 11025 | 0.9981 | 0.88 | | 0.0 | 50.0 | 11250 | 0.9973 | 0.88 | ### Framework versions - Transformers 4.32.1 - Pytorch 2.1.0+cu121 - Datasets 2.12.0 - Tokenizers 0.13.2
jhsmith/finetuned_random6_3epochs
jhsmith
2023-12-09T16:56:39Z
5
0
sentence-transformers
[ "sentence-transformers", "safetensors", "mpnet", "feature-extraction", "sentence-similarity", "autotrain_compatible", "endpoints_compatible", "region:us" ]
sentence-similarity
2023-12-09T16:56:09Z
--- pipeline_tag: sentence-similarity tags: - sentence-transformers - feature-extraction - sentence-similarity --- # {MODEL_NAME} This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search. <!--- Describe your model here --> ## Usage (Sentence-Transformers) Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed: ``` pip install -U sentence-transformers ``` Then you can use the model like this: ```python from sentence_transformers import SentenceTransformer sentences = ["This is an example sentence", "Each sentence is converted"] model = SentenceTransformer('{MODEL_NAME}') embeddings = model.encode(sentences) print(embeddings) ``` ## Evaluation Results <!--- Describe how your model was evaluated --> For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME}) ## Training The model was trained with the parameters: **DataLoader**: `torch.utils.data.dataloader.DataLoader` of length 1754 with parameters: ``` {'batch_size': 10, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'} ``` **Loss**: `sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss` Parameters of the fit()-Method: ``` { "epochs": 3, "evaluation_steps": 200, "evaluator": "sentence_transformers.evaluation.BinaryClassificationEvaluator.BinaryClassificationEvaluator", "max_grad_norm": 1, "optimizer_class": "<class 'torch.optim.adamw.AdamW'>", "optimizer_params": { "lr": 0.0001 }, "scheduler": "warmuplinear", "steps_per_epoch": null, "warmup_steps": 526, "weight_decay": 0.01 } ``` ## Full Model Architecture ``` SentenceTransformer( (0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: MPNetModel (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False}) (2): Normalize() ) ``` ## Citing & Authors <!--- Describe where people can find more information -->
LoneStriker/airoboros-l2-70b-gpt4-1.4.1-2.4bpw-h6-exl2-2
LoneStriker
2023-12-09T16:54:21Z
6
5
transformers
[ "transformers", "pytorch", "llama", "text-generation", "dataset:jondurbin/airoboros-gpt4-1.4.1", "license:other", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2023-12-09T16:40:48Z
--- license: other datasets: - jondurbin/airoboros-gpt4-1.4.1 --- ### 2.4-bit Exllama v2 quant of airoboros-gpt4-1.4.1 using new quant method Simple quantization of original model. This model should fit on a single 24 GB VRAM GPU where Exlalama v2 is supported. Should also support full 4096 context on a single GPU, without dsktop apps also running on the same GPU. Ideally, the GPU would be completely empty of any desktop or apps. ### Overview Llama 2 70b fine tune using https://huggingface.co/datasets/jondurbin/airoboros-gpt4-1.4.1 See the previous llama 65b model card for info: https://hf.co/jondurbin/airoboros-65b-gpt4-1.4 ### Contribute If you're interested in new functionality, particularly a new "instructor" type to generate a specific type of training data, take a look at the dataset generation tool repo: https://github.com/jondurbin/airoboros and either make a PR or open an issue with details. To help me with the OpenAI/compute costs: - https://bmc.link/jondurbin - ETH 0xce914eAFC2fe52FdceE59565Dd92c06f776fcb11 - BTC bc1qdwuth4vlg8x37ggntlxu5cjfwgmdy5zaa7pswf ### Licence and usage restrictions Base model has a custom Meta license: - See the [meta-license/LICENSE.txt](meta-license/LICENSE.txt) file attached for the original license provided by Meta. - See also [meta-license/USE_POLICY.md](meta-license/USE_POLICY.md) and [meta-license/Responsible-Use-Guide.pdf](meta-license/Responsible-Use-Guide.pdf), also provided by Meta. The fine-tuning data was generated by OpenAI API calls to gpt-4, via [airoboros](https://github.com/jondurbin/airoboros) The ToS for OpenAI API usage has a clause preventing the output from being used to train a model that __competes__ with OpenAI - what does *compete* actually mean here? - these small open source models will not produce output anywhere near the quality of gpt-4, or even gpt-3.5, so I can't imagine this could credibly be considered competing in the first place - if someone else uses the dataset to do the same, they wouldn't necessarily be violating the ToS because they didn't call the API, so I don't know how that works - the training data used in essentially all large language models includes a significant amount of copyrighted or otherwise non-permissive licensing in the first place - other work using the self-instruct method, e.g. the original here: https://github.com/yizhongw/self-instruct released the data and model as apache-2 I am purposingly leaving this license ambiguous (other than the fact you must comply with the Meta original license for llama-2) because I am not a lawyer and refuse to attempt to interpret all of the terms accordingly. Your best bet is probably to avoid using this commercially due to the OpenAI API usage. Either way, by using this model, you agree to completely indemnify me.
Myashka/gpt-imdb-ipo-beta_0.3
Myashka
2023-12-09T16:50:23Z
14
0
transformers
[ "transformers", "safetensors", "gpt2", "text-generation", "generated_from_trainer", "base_model:lvwerra/gpt2-imdb", "base_model:finetune:lvwerra/gpt2-imdb", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2023-12-09T08:13:05Z
--- base_model: lvwerra/gpt2-imdb tags: - generated_from_trainer model-index: - name: gpt-imdb-ipo-beta_0.3 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # gpt-imdb-ipo-beta_0.3 This model is a fine-tuned version of [lvwerra/gpt2-imdb](https://huggingface.co/lvwerra/gpt2-imdb) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 1.8601 - Rewards/chosen: -0.2473 - Rewards/rejected: -0.6141 - Rewards/accuracies: 0.8271 - Rewards/margins: 0.3668 - Logps/rejected: -265.7321 - Logps/chosen: -236.0896 - Logits/rejected: -31.6527 - Logits/chosen: -31.7977 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 24 - eval_batch_size: 24 - seed: 42 - optimizer: Adam with betas=(0.9,0.99) and epsilon=1e-08 - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 150 - training_steps: 7197 ### Training results | Training Loss | Epoch | Step | Validation Loss | Rewards/chosen | Rewards/rejected | Rewards/accuracies | Rewards/margins | Logps/rejected | Logps/chosen | Logits/rejected | Logits/chosen | |:-------------:|:-----:|:----:|:---------------:|:--------------:|:----------------:|:------------------:|:---------------:|:--------------:|:------------:|:---------------:|:-------------:| | 5.822 | 0.21 | 500 | 19.5830 | -0.0268 | -0.3320 | 0.6708 | 0.3052 | -264.7920 | -235.3544 | -33.5002 | -33.8198 | | 6.8677 | 0.42 | 1000 | 18.7557 | -0.0552 | -0.3293 | 0.5917 | 0.2741 | -264.7829 | -235.4492 | -35.5852 | -35.8178 | | 12.3698 | 0.63 | 1500 | 36.0453 | -0.1426 | -0.5467 | 0.6771 | 0.4041 | -265.5075 | -235.7406 | -34.3816 | -34.5936 | | 7.8347 | 0.83 | 2000 | 38.2624 | -0.0799 | -0.3485 | 0.6500 | 0.2687 | -264.8470 | -235.5314 | -33.2874 | -33.4310 | | 9.184 | 1.04 | 2500 | 14.9546 | -0.3389 | -0.7127 | 0.6875 | 0.3739 | -266.0610 | -236.3948 | -32.7912 | -32.9463 | | 11.1603 | 1.25 | 3000 | 15.5236 | -0.0513 | -0.3736 | 0.7000 | 0.3223 | -264.9306 | -235.4362 | -33.3399 | -33.5624 | | 16.5516 | 1.46 | 3500 | 8.6118 | -0.1177 | -0.5526 | 0.7438 | 0.4349 | -265.5274 | -235.6576 | -31.9816 | -32.1630 | | 5.2761 | 1.67 | 4000 | 5.2168 | -0.1495 | -0.5364 | 0.7417 | 0.3869 | -265.4733 | -235.7637 | -32.2719 | -32.3991 | | 2.9326 | 1.88 | 4500 | 4.2332 | -0.2284 | -0.6043 | 0.7646 | 0.3759 | -265.6996 | -236.0266 | -32.0240 | -32.1547 | | 2.9814 | 2.08 | 5000 | 3.3498 | -0.2188 | -0.6063 | 0.7792 | 0.3874 | -265.7062 | -235.9947 | -31.8376 | -31.9728 | | 1.8651 | 2.29 | 5500 | 2.8900 | -0.2624 | -0.6313 | 0.7896 | 0.3688 | -265.7895 | -236.1400 | -31.4502 | -31.5973 | | 4.5849 | 2.5 | 6000 | 2.2055 | -0.2771 | -0.6338 | 0.7833 | 0.3567 | -265.7979 | -236.1888 | -31.5011 | -31.6468 | | 1.7322 | 2.71 | 6500 | 1.9194 | -0.2534 | -0.6145 | 0.8208 | 0.3611 | -265.7336 | -236.1099 | -31.6632 | -31.8054 | | 1.1697 | 2.92 | 7000 | 1.8601 | -0.2473 | -0.6141 | 0.8271 | 0.3668 | -265.7321 | -236.0896 | -31.6527 | -31.7977 | ### Framework versions - Transformers 4.35.2 - Pytorch 2.1.1 - Datasets 2.15.0 - Tokenizers 0.15.0
kunal2504/my-pet-dog
kunal2504
2023-12-09T16:44:10Z
0
0
diffusers
[ "diffusers", "safetensors", "NxtWave-GenAI-Webinar", "text-to-image", "stable-diffusion", "license:creativeml-openrail-m", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
2023-12-09T16:39:54Z
--- license: creativeml-openrail-m tags: - NxtWave-GenAI-Webinar - text-to-image - stable-diffusion --- ### My-Pet-Dog Dreambooth model trained by kunal2504 following the "Build your own Gen AI model" session by NxtWave. Project Submission Code: GoX19932gAS Sample pictures of this concept: ![0](https://huggingface.co/kunal2504/my-pet-dog/resolve/main/sample_images/xzg_(3).jpg) ![1](https://huggingface.co/kunal2504/my-pet-dog/resolve/main/sample_images/xzg_(2).webp) ![2](https://huggingface.co/kunal2504/my-pet-dog/resolve/main/sample_images/xzg_(1).jpg) ![3](https://huggingface.co/kunal2504/my-pet-dog/resolve/main/sample_images/xzg_(2).jpg) ![4](https://huggingface.co/kunal2504/my-pet-dog/resolve/main/sample_images/xzg_(1).webp)
isilberfin/sdxl-lora-berfin
isilberfin
2023-12-09T16:43:02Z
0
0
null
[ "text-to-image", "license:apache-2.0", "region:us" ]
text-to-image
2023-12-08T11:42:48Z
--- license: apache-2.0 pipeline_tag: text-to-image fine_tuned_from : stabilityai/stable-diffusion-xl-base-1.0 ---
SpartanLondoner/Reinforce-Pixelcopter-PLE-v0
SpartanLondoner
2023-12-09T16:41:22Z
0
0
null
[ "Pixelcopter-PLE-v0", "reinforce", "reinforcement-learning", "custom-implementation", "deep-rl-class", "model-index", "region:us" ]
reinforcement-learning
2023-12-09T16:40:36Z
--- tags: - Pixelcopter-PLE-v0 - reinforce - reinforcement-learning - custom-implementation - deep-rl-class model-index: - name: Reinforce-Pixelcopter-PLE-v0 results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: Pixelcopter-PLE-v0 type: Pixelcopter-PLE-v0 metrics: - type: mean_reward value: 40.30 +/- 38.72 name: mean_reward verified: false --- # **Reinforce** Agent playing **Pixelcopter-PLE-v0** This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0** . To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
LoneStriker/airoboros-l2-70b-gpt4-1.4.1-2.4bpw-h6-exl2
LoneStriker
2023-12-09T16:31:16Z
9
6
transformers
[ "transformers", "llama", "text-generation", "dataset:jondurbin/airoboros-gpt4-1.4.1", "license:other", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2023-09-18T09:42:21Z
--- license: other datasets: - jondurbin/airoboros-gpt4-1.4.1 --- ### 2.4-bit Exllama v2 quant of airoboros-gpt4-1.4.1 Simple quantization of original model. This model should fit on a single 24 GB VRAM GPU where Exlalama v2 is supported. Should also support full 4096 context on a single GPU, without dsktop apps also running on the same GPU. Ideally, the GPU would be completely empty of any desktop or apps. ### Overview Llama 2 70b fine tune using https://huggingface.co/datasets/jondurbin/airoboros-gpt4-1.4.1 See the previous llama 65b model card for info: https://hf.co/jondurbin/airoboros-65b-gpt4-1.4 ### Contribute If you're interested in new functionality, particularly a new "instructor" type to generate a specific type of training data, take a look at the dataset generation tool repo: https://github.com/jondurbin/airoboros and either make a PR or open an issue with details. To help me with the OpenAI/compute costs: - https://bmc.link/jondurbin - ETH 0xce914eAFC2fe52FdceE59565Dd92c06f776fcb11 - BTC bc1qdwuth4vlg8x37ggntlxu5cjfwgmdy5zaa7pswf ### Licence and usage restrictions Base model has a custom Meta license: - See the [meta-license/LICENSE.txt](meta-license/LICENSE.txt) file attached for the original license provided by Meta. - See also [meta-license/USE_POLICY.md](meta-license/USE_POLICY.md) and [meta-license/Responsible-Use-Guide.pdf](meta-license/Responsible-Use-Guide.pdf), also provided by Meta. The fine-tuning data was generated by OpenAI API calls to gpt-4, via [airoboros](https://github.com/jondurbin/airoboros) The ToS for OpenAI API usage has a clause preventing the output from being used to train a model that __competes__ with OpenAI - what does *compete* actually mean here? - these small open source models will not produce output anywhere near the quality of gpt-4, or even gpt-3.5, so I can't imagine this could credibly be considered competing in the first place - if someone else uses the dataset to do the same, they wouldn't necessarily be violating the ToS because they didn't call the API, so I don't know how that works - the training data used in essentially all large language models includes a significant amount of copyrighted or otherwise non-permissive licensing in the first place - other work using the self-instruct method, e.g. the original here: https://github.com/yizhongw/self-instruct released the data and model as apache-2 I am purposingly leaving this license ambiguous (other than the fact you must comply with the Meta original license for llama-2) because I am not a lawyer and refuse to attempt to interpret all of the terms accordingly. Your best bet is probably to avoid using this commercially due to the OpenAI API usage. Either way, by using this model, you agree to completely indemnify me.
LarryAIDraw/Tess_Kotkin_LORA_V1_0
LarryAIDraw
2023-12-09T16:17:45Z
0
0
null
[ "license:creativeml-openrail-m", "region:us" ]
null
2023-12-09T15:59:19Z
--- license: creativeml-openrail-m --- https://civitai.com/models/226166/tess-kotkin-snowbreak-containment-zonelora
TheBloke/Mistral-7B-Instruct-v0.1-GGUF
TheBloke
2023-12-09T16:09:28Z
127,715
522
transformers
[ "transformers", "gguf", "mistral", "finetuned", "text-generation", "base_model:mistralai/Mistral-7B-Instruct-v0.1", "base_model:quantized:mistralai/Mistral-7B-Instruct-v0.1", "license:apache-2.0", "region:us" ]
text-generation
2023-09-27T17:49:54Z
--- base_model: mistralai/Mistral-7B-Instruct-v0.1 inference: false license: apache-2.0 model_creator: Mistral AI model_name: Mistral 7B Instruct v0.1 model_type: mistral pipeline_tag: text-generation prompt_template: '<s>[INST]{prompt} [/INST] ' quantized_by: TheBloke tags: - finetuned --- <!-- header start --> <!-- 200823 --> <div style="width: auto; margin-left: auto; margin-right: auto"> <img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;"> </div> <div style="display: flex; justify-content: space-between; width: 100%;"> <div style="display: flex; flex-direction: column; align-items: flex-start;"> <p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p> </div> <div style="display: flex; flex-direction: column; align-items: flex-end;"> <p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p> </div> </div> <div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div> <hr style="margin-top: 1.0em; margin-bottom: 1.0em;"> <!-- header end --> # Mistral 7B Instruct v0.1 - GGUF - Model creator: [Mistral AI](https://huggingface.co/mistralai) - Original model: [Mistral 7B Instruct v0.1](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.1) <!-- description start --> ## Description This repo contains GGUF format model files for [Mistral AI's Mistral 7B Instruct v0.1](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.1). <!-- description end --> <!-- README_GGUF.md-about-gguf start --> ### About GGUF GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp. Here is an incomplate list of clients and libraries that are known to support GGUF: * [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option. * [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration. * [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for story telling. * [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration. * [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection. * [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration. * [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server. * [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server. * [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use. <!-- README_GGUF.md-about-gguf end --> <!-- repositories-available start --> ## Repositories available * [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/Mistral-7B-Instruct-v0.1-AWQ) * [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/Mistral-7B-Instruct-v0.1-GPTQ) * [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/Mistral-7B-Instruct-v0.1-GGUF) * [Mistral AI's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.1) <!-- repositories-available end --> <!-- prompt-template start --> ## Prompt template: Mistral ``` <s>[INST] {prompt} [/INST] ``` <!-- prompt-template end --> <!-- compatibility_gguf start --> ## Compatibility These quantised GGUFv2 files are compatible with llama.cpp from August 27th onwards, as of commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) They are also compatible with many third party UIs and libraries - please see the list at the top of this README. Sequence length note: The model will work at sequence lengths of 4096, or lower. GGUF does not yet have support for the new sliding window sequence length mode, so longer sequence lengths are not supported. ## Explanation of quantisation methods <details> <summary>Click to see details</summary> The new methods available are: * GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weight. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw) * GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This end up using 3.4375 bpw. * GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw. * GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K resulting in 5.5 bpw * GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw Refer to the Provided Files table below to see what files use which methods, and how. </details> <!-- compatibility_gguf end --> <!-- README_GGUF.md-provided-files start --> ## Provided files | Name | Quant method | Bits | Size | Max RAM required | Use case | | ---- | ---- | ---- | ---- | ---- | ----- | | [mistral-7b-instruct-v0.1.Q2_K.gguf](https://huggingface.co/TheBloke/Mistral-7B-Instruct-v0.1-GGUF/blob/main/mistral-7b-instruct-v0.1.Q2_K.gguf) | Q2_K | 2 | 3.08 GB| 5.58 GB | smallest, significant quality loss - not recommended for most purposes | | [mistral-7b-instruct-v0.1.Q3_K_S.gguf](https://huggingface.co/TheBloke/Mistral-7B-Instruct-v0.1-GGUF/blob/main/mistral-7b-instruct-v0.1.Q3_K_S.gguf) | Q3_K_S | 3 | 3.16 GB| 5.66 GB | very small, high quality loss | | [mistral-7b-instruct-v0.1.Q3_K_M.gguf](https://huggingface.co/TheBloke/Mistral-7B-Instruct-v0.1-GGUF/blob/main/mistral-7b-instruct-v0.1.Q3_K_M.gguf) | Q3_K_M | 3 | 3.52 GB| 6.02 GB | very small, high quality loss | | [mistral-7b-instruct-v0.1.Q3_K_L.gguf](https://huggingface.co/TheBloke/Mistral-7B-Instruct-v0.1-GGUF/blob/main/mistral-7b-instruct-v0.1.Q3_K_L.gguf) | Q3_K_L | 3 | 3.82 GB| 6.32 GB | small, substantial quality loss | | [mistral-7b-instruct-v0.1.Q4_0.gguf](https://huggingface.co/TheBloke/Mistral-7B-Instruct-v0.1-GGUF/blob/main/mistral-7b-instruct-v0.1.Q4_0.gguf) | Q4_0 | 4 | 4.11 GB| 6.61 GB | legacy; small, very high quality loss - prefer using Q3_K_M | | [mistral-7b-instruct-v0.1.Q4_K_S.gguf](https://huggingface.co/TheBloke/Mistral-7B-Instruct-v0.1-GGUF/blob/main/mistral-7b-instruct-v0.1.Q4_K_S.gguf) | Q4_K_S | 4 | 4.14 GB| 6.64 GB | small, greater quality loss | | [mistral-7b-instruct-v0.1.Q4_K_M.gguf](https://huggingface.co/TheBloke/Mistral-7B-Instruct-v0.1-GGUF/blob/main/mistral-7b-instruct-v0.1.Q4_K_M.gguf) | Q4_K_M | 4 | 4.37 GB| 6.87 GB | medium, balanced quality - recommended | | [mistral-7b-instruct-v0.1.Q5_0.gguf](https://huggingface.co/TheBloke/Mistral-7B-Instruct-v0.1-GGUF/blob/main/mistral-7b-instruct-v0.1.Q5_0.gguf) | Q5_0 | 5 | 5.00 GB| 7.50 GB | legacy; medium, balanced quality - prefer using Q4_K_M | | [mistral-7b-instruct-v0.1.Q5_K_S.gguf](https://huggingface.co/TheBloke/Mistral-7B-Instruct-v0.1-GGUF/blob/main/mistral-7b-instruct-v0.1.Q5_K_S.gguf) | Q5_K_S | 5 | 5.00 GB| 7.50 GB | large, low quality loss - recommended | | [mistral-7b-instruct-v0.1.Q5_K_M.gguf](https://huggingface.co/TheBloke/Mistral-7B-Instruct-v0.1-GGUF/blob/main/mistral-7b-instruct-v0.1.Q5_K_M.gguf) | Q5_K_M | 5 | 5.13 GB| 7.63 GB | large, very low quality loss - recommended | | [mistral-7b-instruct-v0.1.Q6_K.gguf](https://huggingface.co/TheBloke/Mistral-7B-Instruct-v0.1-GGUF/blob/main/mistral-7b-instruct-v0.1.Q6_K.gguf) | Q6_K | 6 | 5.94 GB| 8.44 GB | very large, extremely low quality loss | | [mistral-7b-instruct-v0.1.Q8_0.gguf](https://huggingface.co/TheBloke/Mistral-7B-Instruct-v0.1-GGUF/blob/main/mistral-7b-instruct-v0.1.Q8_0.gguf) | Q8_0 | 8 | 7.70 GB| 10.20 GB | very large, extremely low quality loss - not recommended | **Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead. <!-- README_GGUF.md-provided-files end --> <!-- README_GGUF.md-how-to-download start --> ## How to download GGUF files **Note for manual downloaders:** You almost never want to clone the entire repo! Multiple different quantisation formats are provided, and most users only want to pick and download a single file. The following clients/libraries will automatically download models for you, providing a list of available models to choose from: - LM Studio - LoLLMS Web UI - Faraday.dev ### In `text-generation-webui` Under Download Model, you can enter the model repo: TheBloke/Mistral-7B-Instruct-v0.1-GGUF and below it, a specific filename to download, such as: mistral-7b-instruct-v0.1.Q4_K_M.gguf. Then click Download. ### On the command line, including multiple files at once I recommend using the `huggingface-hub` Python library: ```shell pip3 install huggingface-hub ``` Then you can download any individual model file to the current directory, at high speed, with a command like this: ```shell huggingface-cli download TheBloke/Mistral-7B-Instruct-v0.1-GGUF mistral-7b-instruct-v0.1.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False ``` <details> <summary>More advanced huggingface-cli download usage</summary> You can also download multiple files at once with a pattern: ```shell huggingface-cli download TheBloke/Mistral-7B-Instruct-v0.1-GGUF --local-dir . --local-dir-use-symlinks False --include='*Q4_K*gguf' ``` For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli). To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`: ```shell pip3 install hf_transfer ``` And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`: ```shell HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download TheBloke/Mistral-7B-Instruct-v0.1-GGUF mistral-7b-instruct-v0.1.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False ``` Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command. </details> <!-- README_GGUF.md-how-to-download end --> <!-- README_GGUF.md-how-to-run start --> ## Example `llama.cpp` command Make sure you are using `llama.cpp` from commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later. ```shell ./main -ngl 32 -m mistral-7b-instruct-v0.1.Q4_K_M.gguf --color -c 4096 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "<s>[INST]{prompt} [/INST]" ``` Change `-ngl 32` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration. Sequence length can be 4096 or lower. Mistral's sliding window sequence length is not yet supported in llama.cpp, so do not use sequence lengths longer than 4096. If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins` For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md) ## How to run in `text-generation-webui` Further instructions here: [text-generation-webui/docs/llama.cpp.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/llama.cpp.md). ## How to run from Python code You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) or [ctransformers](https://github.com/marella/ctransformers) libraries. ### How to load this model in Python code, using ctransformers I have not tested ctransformers with Mistral models. It may work, but will require that you set the `model_type` to `llama` for now, until ctransformers updates with specific support. #### First install the package Run one of the following commands, according to your system: ```shell # Base ctransformers with no GPU acceleration pip install ctransformers # Or with CUDA GPU acceleration pip install ctransformers[cuda] # Or with AMD ROCm GPU acceleration (Linux only) CT_HIPBLAS=1 pip install ctransformers --no-binary ctransformers # Or with Metal GPU acceleration for macOS systems only CT_METAL=1 pip install ctransformers --no-binary ctransformers ``` #### Simple ctransformers example code ```python from ctransformers import AutoModelForCausalLM # Set gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system. llm = AutoModelForCausalLM.from_pretrained("TheBloke/Mistral-7B-Instruct-v0.1-GGUF", model_file="mistral-7b-instruct-v0.1.Q4_K_M.gguf", model_type="mistral", gpu_layers=50) print(llm("AI is going to")) ``` ## How to use with LangChain Here are guides on using llama-cpp-python and ctransformers with LangChain: * [LangChain + llama-cpp-python](https://python.langchain.com/docs/integrations/llms/llamacpp) * [LangChain + ctransformers](https://python.langchain.com/docs/integrations/providers/ctransformers) <!-- README_GGUF.md-how-to-run end --> <!-- footer start --> <!-- 200823 --> ## Discord For further support, and discussions on these models and AI in general, join us at: [TheBloke AI's Discord server](https://discord.gg/theblokeai) ## Thanks, and how to contribute Thanks to the [chirper.ai](https://chirper.ai) team! Thanks to Clay from [gpus.llm-utils.org](llm-utils)! I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training. If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects. Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits. * Patreon: https://patreon.com/TheBlokeAI * Ko-Fi: https://ko-fi.com/TheBlokeAI **Special thanks to**: Aemon Algiz. **Patreon special mentions**: Alicia Loh, Stephen Murray, K, Ajan Kanaga, RoA, Magnesian, Deo Leter, Olakabola, Eugene Pentland, zynix, Deep Realms, Raymond Fosdick, Elijah Stavena, Iucharbius, Erik Bjäreholt, Luis Javier Navarrete Lozano, Nicholas, theTransient, John Detwiler, alfie_i, knownsqashed, Mano Prime, Willem Michiel, Enrico Ros, LangChain4j, OG, Michael Dempsey, Pierre Kircher, Pedro Madruga, James Bentley, Thomas Belote, Luke @flexchar, Leonard Tan, Johann-Peter Hartmann, Illia Dulskyi, Fen Risland, Chadd, S_X, Jeff Scroggin, Ken Nordquist, Sean Connelly, Artur Olbinski, Swaroop Kallakuri, Jack West, Ai Maven, David Ziegler, Russ Johnson, transmissions 11, John Villwock, Alps Aficionado, Clay Pascal, Viktor Bowallius, Subspace Studios, Rainer Wilmers, Trenton Dambrowitz, vamX, Michael Levine, 준교 김, Brandon Frisco, Kalila, Trailburnt, Randy H, Talal Aujan, Nathan Dryer, Vadim, 阿明, ReadyPlayerEmma, Tiffany J. Kim, George Stoitzev, Spencer Kim, Jerry Meng, Gabriel Tamborski, Cory Kujawski, Jeffrey Morgan, Spiking Neurons AB, Edmond Seymore, Alexandros Triantafyllidis, Lone Striker, Cap'n Zoog, Nikolai Manek, danny, ya boyyy, Derek Yates, usrbinkat, Mandus, TL, Nathan LeClaire, subjectnull, Imad Khwaja, webtim, Raven Klaugh, Asp the Wyvern, Gabriel Puliatti, Caitlyn Gatomon, Joseph William Delisle, Jonathan Leane, Luke Pendergrass, SuperWojo, Sebastain Graf, Will Dee, Fred von Graf, Andrey, Dan Guido, Daniel P. Andersen, Nitin Borwankar, Elle, Vitor Caleffi, biorpg, jjj, NimbleBox.ai, Pieter, Matthew Berman, terasurfer, Michael Davis, Alex, Stanislav Ovsiannikov Thank you to all my generous patrons and donaters! And thank you again to a16z for their generous grant. <!-- footer end --> <!-- original-model-card start --> # Original model card: Mistral AI's Mistral 7B Instruct v0.1 # Model Card for Mistral-7B-Instruct-v0.1 The Mistral-7B-Instruct-v0.1 Large Language Model (LLM) is a instruct fine-tuned version of the [Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1) generative text model using a variety of publicly available conversation datasets. For full details of this model please read our [release blog post](https://mistral.ai/news/announcing-mistral-7b/) ## Instruction format In order to leverage instruction fine-tuning, your prompt should be surrounded by `[INST]` and `[\INST]` tokens. The very first instruction should begin with a begin of sentence id. The next instructions should not. The assistant generation will be ended by the end-of-sentence token id. E.g. ```python from transformers import AutoModelForCausalLM, AutoTokenizer device = "cuda" # the device to load the model onto model = AutoModelForCausalLM.from_pretrained("mistralai/Mistral-7B-Instruct-v0.1") tokenizer = AutoTokenizer.from_pretrained("mistralai/Mistral-7B-Instruct-v0.1") text = """<s>[INST] What is your favourite condiment? [/INST] Well, I'm quite partial to a good squeeze of fresh lemon juice. It adds just the right amount of zesty flavour to whatever I'm cooking up in the kitchen!</s> [INST] Do you have mayonnaise recipes? [/INST]""" encodeds = tokenizer(text, return_tensors="pt", add_special_tokens=False) model_inputs = encodeds.to(device) model.to(device) generated_ids = model.generate(**model_inputs, max_new_tokens=1000, do_sample=True) decoded = tokenizer.batch_decode(generated_ids) print(decoded[0]) ``` ## Model Architecture This instruction model is based on Mistral-7B-v0.1, a transformer model with the following architecture choices: - Grouped-Query Attention - Sliding-Window Attention - Byte-fallback BPE tokenizer ## The Mistral AI Team Albert Jiang, Alexandre Sablayrolles, Arthur Mensch, Chris Bamford, Devendra Singh Chaplot, Diego de las Casas, Florian Bressand, Gianna Lengyel, Guillaume Lample, Lélio Renard Lavaud, Lucile Saulnier, Marie-Anne Lachaux, Pierre Stock, Teven Le Scao, Thibaut Lavril, Thomas Wang, Timothée Lacroix, William El Sayed. <!-- original-model-card end -->
svarna/llama-2-7b-multi-mgt
svarna
2023-12-09T16:08:37Z
0
0
peft
[ "peft", "safetensors", "arxiv:1910.09700", "base_model:meta-llama/Llama-2-7b-chat-hf", "base_model:adapter:meta-llama/Llama-2-7b-chat-hf", "region:us" ]
null
2023-12-09T15:55:25Z
--- library_name: peft base_model: meta-llama/Llama-2-7b-chat-hf --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Data Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ## Training procedure The following `bitsandbytes` quantization config was used during training: - quant_method: bitsandbytes - load_in_8bit: False - load_in_4bit: True - llm_int8_threshold: 6.0 - llm_int8_skip_modules: None - llm_int8_enable_fp32_cpu_offload: False - llm_int8_has_fp16_weight: False - bnb_4bit_quant_type: nf4 - bnb_4bit_use_double_quant: True - bnb_4bit_compute_dtype: bfloat16 ### Framework versions - PEFT 0.6.2
akanchha/layoutlmv3-finetuned-cord_100
akanchha
2023-12-09T16:07:48Z
7
0
transformers
[ "transformers", "tensorboard", "safetensors", "layoutlmv3", "token-classification", "generated_from_trainer", "dataset:cord-layoutlmv3", "base_model:microsoft/layoutlmv3-base", "base_model:finetune:microsoft/layoutlmv3-base", "license:cc-by-nc-sa-4.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2023-12-07T20:55:05Z
--- license: cc-by-nc-sa-4.0 base_model: microsoft/layoutlmv3-base tags: - generated_from_trainer datasets: - cord-layoutlmv3 metrics: - precision - recall - f1 - accuracy model-index: - name: layoutlmv3-finetuned-cord_100 results: - task: name: Token Classification type: token-classification dataset: name: cord-layoutlmv3 type: cord-layoutlmv3 config: cord split: test args: cord metrics: - name: Precision type: precision value: 0.9561011904761905 - name: Recall type: recall value: 0.9618263473053892 - name: F1 type: f1 value: 0.958955223880597 - name: Accuracy type: accuracy value: 0.9702886247877759 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # layoutlmv3-finetuned-cord_100 This model is a fine-tuned version of [microsoft/layoutlmv3-base](https://huggingface.co/microsoft/layoutlmv3-base) on the cord-layoutlmv3 dataset. It achieves the following results on the evaluation set: - Loss: 0.1726 - Precision: 0.9561 - Recall: 0.9618 - F1: 0.9590 - Accuracy: 0.9703 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 5 - eval_batch_size: 5 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - training_steps: 3000 ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:| | No log | 1.56 | 250 | 1.0075 | 0.7597 | 0.8046 | 0.7815 | 0.8145 | | 1.3907 | 3.12 | 500 | 0.5155 | 0.8388 | 0.8683 | 0.8533 | 0.8841 | | 1.3907 | 4.69 | 750 | 0.3486 | 0.8917 | 0.9117 | 0.9016 | 0.9283 | | 0.3755 | 6.25 | 1000 | 0.2722 | 0.9211 | 0.9356 | 0.9283 | 0.9435 | | 0.3755 | 7.81 | 1250 | 0.2399 | 0.9356 | 0.9461 | 0.9408 | 0.9533 | | 0.1857 | 9.38 | 1500 | 0.2170 | 0.9376 | 0.9454 | 0.9415 | 0.9542 | | 0.1857 | 10.94 | 1750 | 0.1917 | 0.9510 | 0.9588 | 0.9549 | 0.9660 | | 0.1236 | 12.5 | 2000 | 0.1821 | 0.9502 | 0.9573 | 0.9538 | 0.9652 | | 0.1236 | 14.06 | 2250 | 0.1870 | 0.9538 | 0.9588 | 0.9563 | 0.9669 | | 0.0858 | 15.62 | 2500 | 0.1741 | 0.9583 | 0.9633 | 0.9608 | 0.9711 | | 0.0858 | 17.19 | 2750 | 0.1726 | 0.9561 | 0.9611 | 0.9586 | 0.9690 | | 0.0708 | 18.75 | 3000 | 0.1726 | 0.9561 | 0.9618 | 0.9590 | 0.9703 | ### Framework versions - Transformers 4.35.2 - Pytorch 2.1.1+cu121 - Datasets 2.15.0 - Tokenizers 0.15.0
TheBloke/Synthia-7B-v3.0-AWQ
TheBloke
2023-12-09T16:03:23Z
8
0
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "base_model:migtissera/Synthia-7B-v3.0", "base_model:quantized:migtissera/Synthia-7B-v3.0", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "4-bit", "awq", "region:us" ]
text-generation
2023-12-09T15:03:01Z
--- base_model: migtissera/Synthia-7B-v3.0 inference: false license: apache-2.0 model_creator: Migel Tissera model_name: Synthia 7B v3.0 model_type: mistral prompt_template: 'SYSTEM: Elaborate on the topic using a Tree of Thoughts and backtrack when necessary to construct a clear, cohesive Chain of Thought reasoning. Always answer without hesitation. USER: {prompt} ASSISTANT: ' quantized_by: TheBloke --- <!-- markdownlint-disable MD041 --> <!-- header start --> <!-- 200823 --> <div style="width: auto; margin-left: auto; margin-right: auto"> <img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;"> </div> <div style="display: flex; justify-content: space-between; width: 100%;"> <div style="display: flex; flex-direction: column; align-items: flex-start;"> <p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p> </div> <div style="display: flex; flex-direction: column; align-items: flex-end;"> <p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p> </div> </div> <div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div> <hr style="margin-top: 1.0em; margin-bottom: 1.0em;"> <!-- header end --> # Synthia 7B v3.0 - AWQ - Model creator: [Migel Tissera](https://huggingface.co/migtissera) - Original model: [Synthia 7B v3.0](https://huggingface.co/migtissera/Synthia-7B-v3.0) <!-- description start --> ## Description This repo contains AWQ model files for [Migel Tissera's Synthia 7B v3.0](https://huggingface.co/migtissera/Synthia-7B-v3.0). These files were quantised using hardware kindly provided by [Massed Compute](https://massedcompute.com/). ### About AWQ AWQ is an efficient, accurate and blazing-fast low-bit weight quantization method, currently supporting 4-bit quantization. Compared to GPTQ, it offers faster Transformers-based inference with equivalent or better quality compared to the most commonly used GPTQ settings. AWQ models are currently supported on Linux and Windows, with NVidia GPUs only. macOS users: please use GGUF models instead. It is supported by: - [Text Generation Webui](https://github.com/oobabooga/text-generation-webui) - using Loader: AutoAWQ - [vLLM](https://github.com/vllm-project/vllm) - version 0.2.2 or later for support for all model types. - [Hugging Face Text Generation Inference (TGI)](https://github.com/huggingface/text-generation-inference) - [Transformers](https://huggingface.co/docs/transformers) version 4.35.0 and later, from any code or client that supports Transformers - [AutoAWQ](https://github.com/casper-hansen/AutoAWQ) - for use from Python code <!-- description end --> <!-- repositories-available start --> ## Repositories available * [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/Synthia-7B-v3.0-AWQ) * [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/Synthia-7B-v3.0-GPTQ) * [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/Synthia-7B-v3.0-GGUF) * [Migel Tissera's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/migtissera/Synthia-7B-v3.0) <!-- repositories-available end --> <!-- prompt-template start --> ## Prompt template: Synthia-CoT ``` SYSTEM: Elaborate on the topic using a Tree of Thoughts and backtrack when necessary to construct a clear, cohesive Chain of Thought reasoning. Always answer without hesitation. USER: {prompt} ASSISTANT: ``` <!-- prompt-template end --> <!-- README_AWQ.md-provided-files start --> ## Provided files, and AWQ parameters I currently release 128g GEMM models only. The addition of group_size 32 models, and GEMV kernel models, is being actively considered. Models are released as sharded safetensors files. | Branch | Bits | GS | AWQ Dataset | Seq Len | Size | | ------ | ---- | -- | ----------- | ------- | ---- | | [main](https://huggingface.co/TheBloke/Synthia-7B-v3.0-AWQ/tree/main) | 4 | 128 | [VMware Open Instruct](https://huggingface.co/datasets/VMware/open-instruct/viewer/) | 4096 | 4.15 GB <!-- README_AWQ.md-provided-files end --> <!-- README_AWQ.md-text-generation-webui start --> ## How to easily download and use this model in [text-generation-webui](https://github.com/oobabooga/text-generation-webui) Please make sure you're using the latest version of [text-generation-webui](https://github.com/oobabooga/text-generation-webui). It is strongly recommended to use the text-generation-webui one-click-installers unless you're sure you know how to make a manual install. 1. Click the **Model tab**. 2. Under **Download custom model or LoRA**, enter `TheBloke/Synthia-7B-v3.0-AWQ`. 3. Click **Download**. 4. The model will start downloading. Once it's finished it will say "Done". 5. In the top left, click the refresh icon next to **Model**. 6. In the **Model** dropdown, choose the model you just downloaded: `Synthia-7B-v3.0-AWQ` 7. Select **Loader: AutoAWQ**. 8. Click Load, and the model will load and is now ready for use. 9. If you want any custom settings, set them and then click **Save settings for this model** followed by **Reload the Model** in the top right. 10. Once you're ready, click the **Text Generation** tab and enter a prompt to get started! <!-- README_AWQ.md-text-generation-webui end --> <!-- README_AWQ.md-use-from-vllm start --> ## Multi-user inference server: vLLM Documentation on installing and using vLLM [can be found here](https://vllm.readthedocs.io/en/latest/). - Please ensure you are using vLLM version 0.2 or later. - When using vLLM as a server, pass the `--quantization awq` parameter. For example: ```shell python3 -m vllm.entrypoints.api_server --model TheBloke/Synthia-7B-v3.0-AWQ --quantization awq --dtype auto ``` - When using vLLM from Python code, again set `quantization=awq`. For example: ```python from vllm import LLM, SamplingParams prompts = [ "Tell me about AI", "Write a story about llamas", "What is 291 - 150?", "How much wood would a woodchuck chuck if a woodchuck could chuck wood?", ] prompt_template=f'''SYSTEM: Elaborate on the topic using a Tree of Thoughts and backtrack when necessary to construct a clear, cohesive Chain of Thought reasoning. Always answer without hesitation. USER: {prompt} ASSISTANT: ''' prompts = [prompt_template.format(prompt=prompt) for prompt in prompts] sampling_params = SamplingParams(temperature=0.8, top_p=0.95) llm = LLM(model="TheBloke/Synthia-7B-v3.0-AWQ", quantization="awq", dtype="auto") outputs = llm.generate(prompts, sampling_params) # Print the outputs. for output in outputs: prompt = output.prompt generated_text = output.outputs[0].text print(f"Prompt: {prompt!r}, Generated text: {generated_text!r}") ``` <!-- README_AWQ.md-use-from-vllm start --> <!-- README_AWQ.md-use-from-tgi start --> ## Multi-user inference server: Hugging Face Text Generation Inference (TGI) Use TGI version 1.1.0 or later. The official Docker container is: `ghcr.io/huggingface/text-generation-inference:1.1.0` Example Docker parameters: ```shell --model-id TheBloke/Synthia-7B-v3.0-AWQ --port 3000 --quantize awq --max-input-length 3696 --max-total-tokens 4096 --max-batch-prefill-tokens 4096 ``` Example Python code for interfacing with TGI (requires [huggingface-hub](https://github.com/huggingface/huggingface_hub) 0.17.0 or later): ```shell pip3 install huggingface-hub ``` ```python from huggingface_hub import InferenceClient endpoint_url = "https://your-endpoint-url-here" prompt = "Tell me about AI" prompt_template=f'''SYSTEM: Elaborate on the topic using a Tree of Thoughts and backtrack when necessary to construct a clear, cohesive Chain of Thought reasoning. Always answer without hesitation. USER: {prompt} ASSISTANT: ''' client = InferenceClient(endpoint_url) response = client.text_generation(prompt, max_new_tokens=128, do_sample=True, temperature=0.7, top_p=0.95, top_k=40, repetition_penalty=1.1) print(f"Model output: ", response) ``` <!-- README_AWQ.md-use-from-tgi end --> <!-- README_AWQ.md-use-from-python start --> ## Inference from Python code using Transformers ### Install the necessary packages - Requires: [Transformers](https://huggingface.co/docs/transformers) 4.35.0 or later. - Requires: [AutoAWQ](https://github.com/casper-hansen/AutoAWQ) 0.1.6 or later. ```shell pip3 install --upgrade "autoawq>=0.1.6" "transformers>=4.35.0" ``` Note that if you are using PyTorch 2.0.1, the above AutoAWQ command will automatically upgrade you to PyTorch 2.1.0. If you are using CUDA 11.8 and wish to continue using PyTorch 2.0.1, instead run this command: ```shell pip3 install https://github.com/casper-hansen/AutoAWQ/releases/download/v0.1.6/autoawq-0.1.6+cu118-cp310-cp310-linux_x86_64.whl ``` If you have problems installing [AutoAWQ](https://github.com/casper-hansen/AutoAWQ) using the pre-built wheels, install it from source instead: ```shell pip3 uninstall -y autoawq git clone https://github.com/casper-hansen/AutoAWQ cd AutoAWQ pip3 install . ``` ### Transformers example code (requires Transformers 4.35.0 and later) ```python from transformers import AutoModelForCausalLM, AutoTokenizer, TextStreamer model_name_or_path = "TheBloke/Synthia-7B-v3.0-AWQ" tokenizer = AutoTokenizer.from_pretrained(model_name_or_path) model = AutoModelForCausalLM.from_pretrained( model_name_or_path, low_cpu_mem_usage=True, device_map="cuda:0" ) # Using the text streamer to stream output one token at a time streamer = TextStreamer(tokenizer, skip_prompt=True, skip_special_tokens=True) prompt = "Tell me about AI" prompt_template=f'''SYSTEM: Elaborate on the topic using a Tree of Thoughts and backtrack when necessary to construct a clear, cohesive Chain of Thought reasoning. Always answer without hesitation. USER: {prompt} ASSISTANT: ''' # Convert prompt to tokens tokens = tokenizer( prompt_template, return_tensors='pt' ).input_ids.cuda() generation_params = { "do_sample": True, "temperature": 0.7, "top_p": 0.95, "top_k": 40, "max_new_tokens": 512, "repetition_penalty": 1.1 } # Generate streamed output, visible one token at a time generation_output = model.generate( tokens, streamer=streamer, **generation_params ) # Generation without a streamer, which will include the prompt in the output generation_output = model.generate( tokens, **generation_params ) # Get the tokens from the output, decode them, print them token_output = generation_output[0] text_output = tokenizer.decode(token_output) print("model.generate output: ", text_output) # Inference is also possible via Transformers' pipeline from transformers import pipeline pipe = pipeline( "text-generation", model=model, tokenizer=tokenizer, **generation_params ) pipe_output = pipe(prompt_template)[0]['generated_text'] print("pipeline output: ", pipe_output) ``` <!-- README_AWQ.md-use-from-python end --> <!-- README_AWQ.md-compatibility start --> ## Compatibility The files provided are tested to work with: - [text-generation-webui](https://github.com/oobabooga/text-generation-webui) using `Loader: AutoAWQ`. - [vLLM](https://github.com/vllm-project/vllm) version 0.2.0 and later. - [Hugging Face Text Generation Inference (TGI)](https://github.com/huggingface/text-generation-inference) version 1.1.0 and later. - [Transformers](https://huggingface.co/docs/transformers) version 4.35.0 and later. - [AutoAWQ](https://github.com/casper-hansen/AutoAWQ) version 0.1.1 and later. <!-- README_AWQ.md-compatibility end --> <!-- footer start --> <!-- 200823 --> ## Discord For further support, and discussions on these models and AI in general, join us at: [TheBloke AI's Discord server](https://discord.gg/theblokeai) ## Thanks, and how to contribute Thanks to the [chirper.ai](https://chirper.ai) team! Thanks to Clay from [gpus.llm-utils.org](llm-utils)! I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training. If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects. Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits. * Patreon: https://patreon.com/TheBlokeAI * Ko-Fi: https://ko-fi.com/TheBlokeAI **Special thanks to**: Aemon Algiz. **Patreon special mentions**: Michael Levine, 阿明, Trailburnt, Nikolai Manek, John Detwiler, Randy H, Will Dee, Sebastain Graf, NimbleBox.ai, Eugene Pentland, Emad Mostaque, Ai Maven, Jim Angel, Jeff Scroggin, Michael Davis, Manuel Alberto Morcote, Stephen Murray, Robert, Justin Joy, Luke @flexchar, Brandon Frisco, Elijah Stavena, S_X, Dan Guido, Undi ., Komninos Chatzipapas, Shadi, theTransient, Lone Striker, Raven Klaugh, jjj, Cap'n Zoog, Michel-Marie MAUDET (LINAGORA), Matthew Berman, David, Fen Risland, Omer Bin Jawed, Luke Pendergrass, Kalila, OG, Erik Bjäreholt, Rooh Singh, Joseph William Delisle, Dan Lewis, TL, John Villwock, AzureBlack, Brad, Pedro Madruga, Caitlyn Gatomon, K, jinyuan sun, Mano Prime, Alex, Jeffrey Morgan, Alicia Loh, Illia Dulskyi, Chadd, transmissions 11, fincy, Rainer Wilmers, ReadyPlayerEmma, knownsqashed, Mandus, biorpg, Deo Leter, Brandon Phillips, SuperWojo, Sean Connelly, Iucharbius, Jack West, Harry Royden McLaughlin, Nicholas, terasurfer, Vitor Caleffi, Duane Dunston, Johann-Peter Hartmann, David Ziegler, Olakabola, Ken Nordquist, Trenton Dambrowitz, Tom X Nguyen, Vadim, Ajan Kanaga, Leonard Tan, Clay Pascal, Alexandros Triantafyllidis, JM33133, Xule, vamX, ya boyyy, subjectnull, Talal Aujan, Alps Aficionado, wassieverse, Ari Malik, James Bentley, Woland, Spencer Kim, Michael Dempsey, Fred von Graf, Elle, zynix, William Richards, Stanislav Ovsiannikov, Edmond Seymore, Jonathan Leane, Martin Kemka, usrbinkat, Enrico Ros Thank you to all my generous patrons and donaters! And thank you again to a16z for their generous grant. <!-- footer end --> # Original model card: Migel Tissera's Synthia 7B v3.0 # Synthia-7B-v3.0 SynthIA-7B-v3.0 (Synthetic Intelligent Agent) is a Mistral-7B model trained with guidance on Orca-2 paper. It has been fine-tuned for instruction following as well as having long-form conversations. SynthIA-3.0 dataset contains the Generarized Tree-of-Thought prompt plus 10 more new long-form system contexts. However, in the training phase the system context was removed as suggested in Orca-2 paper. <br> ![Synthia](https://huggingface.co/migtissera/Synthia-7B-v3.0/resolve/main/Synthia-v3.jpg) <br> To evoke generalized Tree of Thought + Chain of Thought reasoning, you may use the following system message: ``` Elaborate on the topic using a Tree of Thoughts and backtrack when necessary to construct a clear, cohesive Chain of Thought reasoning. Always answer without hesitation. ``` ## Evaluation We evaluated Synthia-7B-v3.0 on a wide range of tasks using [Language Model Evaluation Harness](https://github.com/EleutherAI/lm-evaluation-harness) from EleutherAI. Here are the results on metrics used by [HuggingFaceH4 Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard). Section to follow. |||| |:------:|:--------:|:-------:| |**Task**|**Metric**|**Value**| |*arc_challenge*|acc_norm|| |*hellaswag*|acc_norm|| |*mmlu*|acc_norm|| |*truthfulqa_mc*|mc2|| |**Total Average**|-||| <br> ## Example Usage ### Here is prompt format: ``` SYSTEM: Elaborate on the topic using a Tree of Thoughts and backtrack when necessary to construct a clear, cohesive Chain of Thought reasoning. Always answer without hesitation. USER: What is the difference between an Orca, Dolphin and a Seal? ASSISTANT: ``` ### Below shows a code example on how to use this model: ```python import torch, json from transformers import AutoModelForCausalLM, AutoTokenizer model_path = "migtissera/Synthia-7B-v3.0" output_file_path = "./Synthia-7B-conversations.jsonl" model = AutoModelForCausalLM.from_pretrained( model_path, torch_dtype=torch.float16, device_map="auto", load_in_8bit=False, trust_remote_code=True, ) tokenizer = AutoTokenizer.from_pretrained(model_path, trust_remote_code=True) def generate_text(instruction): tokens = tokenizer.encode(instruction) tokens = torch.LongTensor(tokens).unsqueeze(0) tokens = tokens.to("cuda") instance = { "input_ids": tokens, "top_p": 1.0, "temperature": 0.75, "generate_len": 1024, "top_k": 50, } length = len(tokens[0]) with torch.no_grad(): rest = model.generate( input_ids=tokens, max_length=length + instance["generate_len"], use_cache=True, do_sample=True, top_p=instance["top_p"], temperature=instance["temperature"], top_k=instance["top_k"], num_return_sequences=1, ) output = rest[0][length:] string = tokenizer.decode(output, skip_special_tokens=True) answer = string.split("USER:")[0].strip() return f"{answer}" conversation = f"SYSTEM: Elaborate on the topic using a Tree of Thoughts and backtrack when necessary to construct a clear, cohesive Chain of Thought reasoning. Always answer without hesitation." while True: user_input = input("You: ") llm_prompt = f"{conversation} \nUSER: {user_input} \nASSISTANT: " answer = generate_text(llm_prompt) print(answer) conversation = f"{llm_prompt}{answer}" json_data = {"prompt": user_input, "answer": answer} ## Save your conversation with open(output_file_path, "a") as output_file: output_file.write(json.dumps(json_data) + "\n") ``` <br> #### Limitations & Biases: While this model aims for accuracy, it can occasionally produce inaccurate or misleading results. Despite diligent efforts in refining the pretraining data, there remains a possibility for the generation of inappropriate, biased, or offensive content. Exercise caution and cross-check information when necessary. This is an uncensored model. <br>
bodam/model_lora3
bodam
2023-12-09T16:02:53Z
0
0
diffusers
[ "diffusers", "safetensors", "stable-diffusion", "stable-diffusion-diffusers", "text-to-image", "lora", "base_model:runwayml/stable-diffusion-v1-5", "base_model:adapter:runwayml/stable-diffusion-v1-5", "license:creativeml-openrail-m", "region:us" ]
text-to-image
2023-12-09T15:49:20Z
--- license: creativeml-openrail-m base_model: runwayml/stable-diffusion-v1-5 instance_prompt: a s3f chair tags: - stable-diffusion - stable-diffusion-diffusers - text-to-image - diffusers - lora inference: true --- # LoRA DreamBooth - bodam/model_lora3 These are LoRA adaption weights for runwayml/stable-diffusion-v1-5. The weights were trained on a s3f chair using [DreamBooth](https://dreambooth.github.io/). You can find some example images in the following. ![img_0](./image_0.png) ![img_1](./image_1.png) ![img_2](./image_2.png) ![img_3](./image_3.png)
svarna/llama-7b-mgt
svarna
2023-12-09T16:00:13Z
0
0
peft
[ "peft", "safetensors", "arxiv:1910.09700", "base_model:meta-llama/Llama-2-7b-chat-hf", "base_model:adapter:meta-llama/Llama-2-7b-chat-hf", "region:us" ]
null
2023-12-09T15:44:35Z
--- library_name: peft base_model: meta-llama/Llama-2-7b-chat-hf --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Data Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ## Training procedure The following `bitsandbytes` quantization config was used during training: - quant_method: bitsandbytes - load_in_8bit: False - load_in_4bit: True - llm_int8_threshold: 6.0 - llm_int8_skip_modules: None - llm_int8_enable_fp32_cpu_offload: False - llm_int8_has_fp16_weight: False - bnb_4bit_quant_type: nf4 - bnb_4bit_use_double_quant: True - bnb_4bit_compute_dtype: bfloat16 ### Framework versions - PEFT 0.6.2
SrikarTondapu/my_awesome_model
SrikarTondapu
2023-12-09T15:59:16Z
3
0
transformers
[ "transformers", "tf", "distilbert", "text-classification", "generated_from_keras_callback", "base_model:distilbert/distilbert-base-uncased", "base_model:finetune:distilbert/distilbert-base-uncased", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2023-12-08T20:26:18Z
--- license: apache-2.0 base_model: distilbert-base-uncased tags: - generated_from_keras_callback model-index: - name: SrikarTondapu/my_awesome_model results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # SrikarTondapu/my_awesome_model This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 0.0614 - Validation Loss: 0.2340 - Train Accuracy: 0.9298 - Epoch: 2 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': False, 'is_legacy_optimizer': False, 'learning_rate': {'module': 'keras.optimizers.schedules', 'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 7810, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, 'registered_name': None}, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False} - training_precision: float32 ### Training results | Train Loss | Validation Loss | Train Accuracy | Epoch | |:----------:|:---------------:|:--------------:|:-----:| | 0.2482 | 0.1800 | 0.9314 | 0 | | 0.1306 | 0.1892 | 0.9314 | 1 | | 0.0614 | 0.2340 | 0.9298 | 2 | ### Framework versions - Transformers 4.32.1 - TensorFlow 2.15.0 - Datasets 2.12.0 - Tokenizers 0.13.2
LarryAIDraw/echidna
LarryAIDraw
2023-12-09T15:57:18Z
0
0
null
[ "license:creativeml-openrail-m", "region:us" ]
null
2023-12-09T15:49:15Z
--- license: creativeml-openrail-m --- https://civitai.com/models/226253/echidna-yuusha-yamemasu
LarryAIDraw/spgbfMakura-09
LarryAIDraw
2023-12-09T15:56:53Z
0
0
null
[ "license:creativeml-openrail-m", "region:us" ]
null
2023-12-09T15:48:26Z
--- license: creativeml-openrail-m --- https://civitai.com/models/226754/makura-oror-granblue-fantasy
LarryAIDraw/yuuki___sao_
LarryAIDraw
2023-12-09T15:56:09Z
0
0
null
[ "license:creativeml-openrail-m", "region:us" ]
null
2023-12-09T15:47:02Z
--- license: creativeml-openrail-m --- https://civitai.com/models/226392/yuuki-sword-art-online
LarryAIDraw/fern__sousou_no_frieren__v1
LarryAIDraw
2023-12-09T15:55:21Z
0
0
null
[ "license:creativeml-openrail-m", "region:us" ]
null
2023-12-09T15:44:47Z
--- license: creativeml-openrail-m --- https://civitai.com/models/227283/fern-sousou-no-frieren
Viecha55555/a1dutchlang-falcon-7b-old
Viecha55555
2023-12-09T15:36:11Z
0
1
peft
[ "peft", "safetensors", "arxiv:1910.09700", "base_model:vilsonrodrigues/falcon-7b-instruct-sharded", "base_model:adapter:vilsonrodrigues/falcon-7b-instruct-sharded", "region:us" ]
null
2023-11-12T13:04:06Z
--- library_name: peft base_model: vilsonrodrigues/falcon-7b-instruct-sharded --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ## Training procedure The following `bitsandbytes` quantization config was used during training: - quant_method: bitsandbytes - load_in_8bit: False - load_in_4bit: True - llm_int8_threshold: 6.0 - llm_int8_skip_modules: None - llm_int8_enable_fp32_cpu_offload: False - llm_int8_has_fp16_weight: False - bnb_4bit_quant_type: nf4 - bnb_4bit_use_double_quant: True - bnb_4bit_compute_dtype: bfloat16 ### Framework versions - PEFT 0.6.2
truongghieu/deci-finetuned_Prj2
truongghieu
2023-12-09T15:34:09Z
17
0
transformers
[ "transformers", "tensorboard", "safetensors", "llama", "text-generation", "generated_from_trainer", "custom_code", "base_model:Deci/DeciLM-6b-instruct", "base_model:finetune:Deci/DeciLM-6b-instruct", "license:other", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2023-12-09T15:08:07Z
--- license: other base_model: Deci/DeciLM-6b-instruct tags: - generated_from_trainer model-index: - name: deci-finetuned_Prj2 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # deci-finetuned_Prj2 This model is a fine-tuned version of [Deci/DeciLM-6b-instruct](https://huggingface.co/Deci/DeciLM-6b-instruct) on the None dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 1 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - training_steps: 1500 - mixed_precision_training: Native AMP ### Training results ### Framework versions - Transformers 4.35.2 - Pytorch 2.1.0+cu118 - Datasets 2.15.0 - Tokenizers 0.15.0
MJerome/dqn-LunarLander-v2
MJerome
2023-12-09T15:23:46Z
0
0
stable-baselines3
[ "stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2023-12-09T15:23:05Z
--- library_name: stable-baselines3 tags: - LunarLander-v2 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: DQN results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 metrics: - type: mean_reward value: -37.54 +/- 25.90 name: mean_reward verified: false --- # **DQN** Agent playing **LunarLander-v2** This is a trained model of a **DQN** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code ```python from stable_baselines3 import ... from huggingface_sb3 import load_from_hub ... ```
hkivancoral/smids_3x_deit_base_adamax_00001_fold1
hkivancoral
2023-12-09T15:19:16Z
5
0
transformers
[ "transformers", "pytorch", "vit", "image-classification", "generated_from_trainer", "dataset:imagefolder", "base_model:facebook/deit-base-patch16-224", "base_model:finetune:facebook/deit-base-patch16-224", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-classification
2023-12-09T14:47:13Z
--- license: apache-2.0 base_model: facebook/deit-base-patch16-224 tags: - generated_from_trainer datasets: - imagefolder metrics: - accuracy model-index: - name: smids_3x_deit_base_adamax_00001_fold1 results: - task: name: Image Classification type: image-classification dataset: name: imagefolder type: imagefolder config: default split: test args: default metrics: - name: Accuracy type: accuracy value: 0.8948247078464107 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # smids_3x_deit_base_adamax_00001_fold1 This model is a fine-tuned version of [facebook/deit-base-patch16-224](https://huggingface.co/facebook/deit-base-patch16-224) on the imagefolder dataset. It achieves the following results on the evaluation set: - Loss: 0.6128 - Accuracy: 0.8948 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 50 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:-----:|:---------------:|:--------:| | 0.3987 | 1.0 | 226 | 0.3422 | 0.8831 | | 0.2561 | 2.0 | 452 | 0.2788 | 0.8998 | | 0.1593 | 3.0 | 678 | 0.2675 | 0.8998 | | 0.1432 | 4.0 | 904 | 0.2725 | 0.8881 | | 0.1177 | 5.0 | 1130 | 0.2842 | 0.8982 | | 0.0548 | 6.0 | 1356 | 0.2961 | 0.9048 | | 0.017 | 7.0 | 1582 | 0.3174 | 0.9032 | | 0.0253 | 8.0 | 1808 | 0.3381 | 0.9032 | | 0.0176 | 9.0 | 2034 | 0.3719 | 0.8998 | | 0.0045 | 10.0 | 2260 | 0.4333 | 0.8948 | | 0.0047 | 11.0 | 2486 | 0.4348 | 0.8932 | | 0.0045 | 12.0 | 2712 | 0.4668 | 0.8982 | | 0.0004 | 13.0 | 2938 | 0.4893 | 0.8998 | | 0.0211 | 14.0 | 3164 | 0.4896 | 0.8948 | | 0.0003 | 15.0 | 3390 | 0.5202 | 0.8932 | | 0.0002 | 16.0 | 3616 | 0.5276 | 0.8881 | | 0.0192 | 17.0 | 3842 | 0.5347 | 0.8932 | | 0.0001 | 18.0 | 4068 | 0.5437 | 0.8982 | | 0.0215 | 19.0 | 4294 | 0.5418 | 0.8915 | | 0.0001 | 20.0 | 4520 | 0.5545 | 0.8932 | | 0.0001 | 21.0 | 4746 | 0.5523 | 0.8948 | | 0.0193 | 22.0 | 4972 | 0.5629 | 0.8915 | | 0.0179 | 23.0 | 5198 | 0.5596 | 0.8948 | | 0.0001 | 24.0 | 5424 | 0.5887 | 0.8965 | | 0.0001 | 25.0 | 5650 | 0.5699 | 0.8982 | | 0.0001 | 26.0 | 5876 | 0.5700 | 0.8948 | | 0.0 | 27.0 | 6102 | 0.5958 | 0.8932 | | 0.0001 | 28.0 | 6328 | 0.5917 | 0.8932 | | 0.0 | 29.0 | 6554 | 0.5908 | 0.8982 | | 0.0001 | 30.0 | 6780 | 0.5873 | 0.8965 | | 0.0001 | 31.0 | 7006 | 0.5806 | 0.8982 | | 0.0 | 32.0 | 7232 | 0.6122 | 0.8932 | | 0.0043 | 33.0 | 7458 | 0.6075 | 0.8932 | | 0.0 | 34.0 | 7684 | 0.5998 | 0.8982 | | 0.0 | 35.0 | 7910 | 0.5938 | 0.8948 | | 0.0 | 36.0 | 8136 | 0.5898 | 0.8932 | | 0.0001 | 37.0 | 8362 | 0.5968 | 0.8948 | | 0.0 | 38.0 | 8588 | 0.6080 | 0.8982 | | 0.0 | 39.0 | 8814 | 0.6052 | 0.8948 | | 0.0 | 40.0 | 9040 | 0.6086 | 0.8965 | | 0.0027 | 41.0 | 9266 | 0.6076 | 0.8965 | | 0.002 | 42.0 | 9492 | 0.6009 | 0.8965 | | 0.0 | 43.0 | 9718 | 0.6115 | 0.8948 | | 0.0 | 44.0 | 9944 | 0.6097 | 0.8965 | | 0.0 | 45.0 | 10170 | 0.6098 | 0.8932 | | 0.0 | 46.0 | 10396 | 0.6112 | 0.8948 | | 0.0 | 47.0 | 10622 | 0.6121 | 0.8948 | | 0.0 | 48.0 | 10848 | 0.6134 | 0.8948 | | 0.0 | 49.0 | 11074 | 0.6129 | 0.8948 | | 0.0 | 50.0 | 11300 | 0.6128 | 0.8948 | ### Framework versions - Transformers 4.32.1 - Pytorch 2.1.0+cu121 - Datasets 2.12.0 - Tokenizers 0.13.2
vroon2003/my-images
vroon2003
2023-12-09T15:10:03Z
3
0
diffusers
[ "diffusers", "safetensors", "NxtWave-GenAI-Webinar", "text-to-image", "stable-diffusion", "license:creativeml-openrail-m", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
2023-12-09T15:05:35Z
--- license: creativeml-openrail-m tags: - NxtWave-GenAI-Webinar - text-to-image - stable-diffusion --- ### my-images Dreambooth model trained by vroon2003 following the "Build your own Gen AI model" session by NxtWave. Project Submission Code: GoX19932gAS Sample pictures of this concept: ![0](https://huggingface.co/vroon2003/my-images/resolve/main/sample_images/vro_(7).jpg) ![1](https://huggingface.co/vroon2003/my-images/resolve/main/sample_images/vro_(1).jpg) ![2](https://huggingface.co/vroon2003/my-images/resolve/main/sample_images/vro_(3).jpg) ![3](https://huggingface.co/vroon2003/my-images/resolve/main/sample_images/vro_(10).jpg) ![4](https://huggingface.co/vroon2003/my-images/resolve/main/sample_images/vro_(8).jpg) ![5](https://huggingface.co/vroon2003/my-images/resolve/main/sample_images/vro_(9).jpg) ![6](https://huggingface.co/vroon2003/my-images/resolve/main/sample_images/vro_(6).jpg) ![7](https://huggingface.co/vroon2003/my-images/resolve/main/sample_images/vro_(2).jpg) ![8](https://huggingface.co/vroon2003/my-images/resolve/main/sample_images/vro_(4).jpg)
domenicrosati/deberta-v3-large-survey-related_passage_old_facts-rater-gpt4
domenicrosati
2023-12-09T15:08:33Z
3
0
transformers
[ "transformers", "pytorch", "deberta-v2", "text-classification", "generated_from_trainer", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2023-12-09T12:49:04Z
--- license: mit tags: - generated_from_trainer model-index: - name: deberta-v3-large-survey-related_passage_old_facts-rater-gpt4 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # deberta-v3-large-survey-related_passage_old_facts-rater-gpt4 This model is a fine-tuned version of [microsoft/deberta-v3-large](https://huggingface.co/microsoft/deberta-v3-large) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.4590 - Krippendorff: 0.9496 - Spearman: 0.9595 - Absolute Agreement: 0.8618 - Agreement Within One: 0.9422 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 6e-06 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 1000 - num_epochs: 20 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Krippendorff | Spearman | Absolute Agreement | Agreement Within One | |:-------------:|:-----:|:----:|:---------------:|:------------:|:--------:|:------------------:|:--------------------:| | No log | 1.0 | 50 | 1.9214 | -0.1968 | -0.1743 | 0.25 | 0.6389 | | No log | 2.0 | 100 | 1.9230 | -0.2449 | -0.2660 | 0.2222 | 0.625 | | No log | 3.0 | 150 | 1.9305 | -0.3132 | -0.3197 | 0.2639 | 0.5972 | | No log | 4.0 | 200 | 1.9688 | 0.0758 | 0.1922 | 0.1944 | 0.875 | | No log | 5.0 | 250 | 1.9912 | -0.4897 | 0.1914 | 0.125 | 1.0 | | No log | 6.0 | 300 | 1.9696 | -0.3208 | 0.0287 | 0.125 | 0.9444 | | No log | 7.0 | 350 | 1.9609 | -0.4675 | 0.1913 | 0.125 | 1.0 | | No log | 8.0 | 400 | 1.9717 | -0.1927 | -0.0683 | 0.125 | 0.875 | | No log | 9.0 | 450 | 2.0918 | -0.3136 | -0.1631 | 0.0972 | 0.875 | | 1.7449 | 10.0 | 500 | 2.0623 | 0.0154 | 0.0896 | 0.1944 | 0.8333 | | 1.7449 | 11.0 | 550 | 2.0821 | 0.0052 | 0.0822 | 0.1944 | 0.8333 | | 1.7449 | 12.0 | 600 | 2.0046 | 0.2459 | 0.2394 | 0.2917 | 0.7778 | | 1.7449 | 13.0 | 650 | 2.1779 | 0.1727 | 0.1624 | 0.25 | 0.8333 | | 1.7449 | 14.0 | 700 | 2.2878 | 0.2452 | 0.1854 | 0.2917 | 0.7917 | | 1.7449 | 15.0 | 750 | 2.3372 | 0.2134 | 0.1616 | 0.2639 | 0.7917 | | 1.7449 | 16.0 | 800 | 2.5162 | 0.2219 | 0.1882 | 0.2361 | 0.8194 | | 1.7449 | 17.0 | 850 | 2.7111 | 0.2041 | 0.1544 | 0.2222 | 0.8194 | | 1.7449 | 18.0 | 900 | 2.7977 | 0.2679 | 0.2161 | 0.2639 | 0.7361 | | 1.7449 | 19.0 | 950 | 2.7704 | 0.2809 | 0.2031 | 0.2222 | 0.7917 | | 0.9017 | 20.0 | 1000 | 3.2462 | 0.2204 | 0.1785 | 0.2361 | 0.7361 | ### Framework versions - Transformers 4.26.0 - Pytorch 1.13.1 - Datasets 2.10.1 - Tokenizers 0.12.1
ammarnasr/codegen-350M-mono-java
ammarnasr
2023-12-09T14:50:02Z
49
0
adapter-transformers
[ "adapter-transformers", "code", "text-generation", "dataset:ammarnasr/the-stack-java-clean", "license:mit", "endpoints_compatible", "region:us" ]
text-generation
2023-08-11T14:07:29Z
--- license: mit datasets: - ammarnasr/the-stack-java-clean library_name: adapter-transformers tags: - code pipeline_tag: text-generation language: - code --- # CodeGen (CodeGen-Mono 350M LoRa Java) ## Model description CodeGen LoRa Java is a family of autoregressive language models fine-tuned using LoRa on Different Programming Langauges. ## Training data <!-- https://huggingface.co/datasets/ammarnasr/the-stack-java-clean --> This model was fine-tuned on the cleaned Java subset from TheStack Avilable [here](https://huggingface.co/datasets/ammarnasr/the-stack-java-clean). The data consists of 1 Million Java code files. ## Training procedure This model was fine-tuned using LoRa on 1 T4 GPU. The model was trained for 10,000 steps with batch size of 4. The model was trained using causal language modeling loss. ## Evaluation results We evaluate our models on the MultiPle-E bencchmark. The model achieves 8.9 Pass@10 Rate. ![final_pass_at_10](https://raw.githubusercontent.com/ammarnasr/LLM-for-code-intelligence/main/figs/final_pass_at_10.png) ## Intended Use and Limitations However, the model is intended for and best at **program synthesis**, that is, generating executable code given English prompts, where the prompts should be in the form of a comment string. The model can complete partially-generated code in Java and Python. ## How to use This model can be easily loaded using the `AutoModelForCausalLM` functionality: ```python from transformers import AutoTokenizer, AutoModelForCausalLM from peft import PeftConfig, PeftModel model_name = "ammarnasr/codegen-350M-mono-java" peft_config = PeftConfig.from_pretrained(model_name) tokenizer = AutoTokenizer.from_pretrained(peft_config.base_model_name_or_path) model = AutoModelForCausalLM.from_pretrained(peft_config.base_model_name_or_path) model = PeftModel.from_pretrained(model, model_name) model.print_trainable_parameters() text = "public static void main(String[] args) {" input_ids = tokenizer.encode(text, return_tensors="pt") generated_ids = model.generate(input_ids=input_ids, max_length=100) print('Generated: \n') print(tokenizer.decode(generated_ids[0], skip_special_tokens=True)) ``` ## BibTeX entry and citation info ```bibtex @article{Nijkamp2022ACP, title={A Conversational Paradigm for Program Synthesis}, author={Nijkamp, Erik and Pang, Bo and Hayashi, Hiroaki and Tu, Lifu and Wang, Huan and Zhou, Yingbo and Savarese, Silvio and Xiong, Caiming}, journal={arXiv preprint}, year={2022} } ```
larscarl/ppo-LunarLander-v2
larscarl
2023-12-09T14:44:49Z
0
0
stable-baselines3
[ "stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2023-12-09T14:44:29Z
--- library_name: stable-baselines3 tags: - LunarLander-v2 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: PPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 metrics: - type: mean_reward value: 288.07 +/- 17.66 name: mean_reward verified: false --- # **PPO** Agent playing **LunarLander-v2** This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code ```python from stable_baselines3 import ... from huggingface_sb3 import load_from_hub ... ```
yesj1234/zhko_xlsr_100p_run3
yesj1234
2023-12-09T14:44:32Z
8
0
transformers
[ "transformers", "safetensors", "wav2vec2", "automatic-speech-recognition", "./train_dataset.py", "generated_from_trainer", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2023-12-09T14:42:42Z
--- base_model: zhko_xlsr_100p_run1 tags: - automatic-speech-recognition - ./train_dataset.py - generated_from_trainer model-index: - name: zhko_xlsr_100p_run1 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # zhko_xlsr_100p_run1 This model is a fine-tuned version of [zhko_xlsr_100p_run1](https://huggingface.co/zhko_xlsr_100p_run1) on the ./TRAIN_DATASET.PY - NA dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1.002795480478108e-06 - train_batch_size: 2 - eval_batch_size: 8 - seed: 42 - distributed_type: multi-GPU - num_devices: 4 - gradient_accumulation_steps: 2 - total_train_batch_size: 16 - total_eval_batch_size: 32 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 30 ### Training results ### Framework versions - Transformers 4.35.2 - Pytorch 2.1.1+cu121 - Datasets 2.15.0 - Tokenizers 0.15.0
Falconsai/intent_classification
Falconsai
2023-12-09T14:40:16Z
992
37
transformers
[ "transformers", "pytorch", "coreml", "safetensors", "distilbert", "text-classification", "en", "arxiv:1910.01108", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2023-10-20T04:18:14Z
--- license: apache-2.0 pipeline_tag: text-classification language: - en widget: - text: I ordered from you 2 weeks ago and its stil not here. - text: I need to bring in my daughter for a checkup. --- # Model Card: Fine-Tuned DistilBERT for User Intent Classification ## Model Description The **Fine-Tuned DistilBERT** is a variant of the BERT transformer model, distilled for efficient performance while maintaining high accuracy. It has been adapted and fine-tuned for the specific task of classifying user intent in text data. The model, named "distilbert-base-uncased," is pre-trained on a substantial amount of text data, which allows it to capture semantic nuances and contextual information present in natural language text. It has been fine-tuned with meticulous attention to hyperparameter settings, including batch size and learning rate, to ensure optimal model performance for the user intent classification task. During the fine-tuning process, a batch size of 8 for efficient computation and learning was chosen. Additionally, a learning rate (2e-5) was selected to strike a balance between rapid convergence and steady optimization, ensuring the model not only learns quickly but also steadily refines its capabilities throughout training. This model has been trained on a rather small dataset of under 50k, 100 epochs, specifically designed for user intent classification. The dataset consists of text samples, each labeled with different user intents, such as "information seeking," "question asking," or "opinion expressing." The diversity within the dataset allowed the model to learn to identify user intent accurately. This dataset was carefully curated from a variety of sources. The goal of this meticulous training process is to equip the model with the ability to classify user intent in text data effectively, making it ready to contribute to a wide range of applications involving user interaction analysis and personalization. ## Intended Uses & Limitations ### Intended Uses - **User Intent Classification**: The primary intended use of this model is to classify user intent in text data. It is well-suited for applications that involve understanding user intentions, such as chatbots, virtual assistants, and recommendation systems. ### How to Use To use this model for user intent classification, you can follow these steps: ```markdown from transformers import pipeline classifier = pipeline("text-classification", model="Falconsai/intent_classification") text = "Your text to classify here." result = classifier(text) ``` ### Limitations - **Specialized Task Fine-Tuning**: While the model excels at user intent classification, its performance may vary when applied to other natural language processing tasks. Users interested in employing this model for different tasks should explore fine-tuned versions available in the model hub for optimal results. ## Training Data The model's training data includes a proprietary dataset designed for user intent classification. This dataset comprises a diverse collection of text samples, categorized into various user intent classes. The training process aimed to equip the model with the ability to classify user intent effectively. ### Training Stats - Evaluation Loss: 0.011744413524866104 - Evaluation Accuracy: 0.9986976744186047 - Evaluation Runtime: 3.1136 - Evaluation Samples per Second: 1726.29 - Evaluation Steps per Second: 215.826 ## Responsible Usage It is essential to use this model responsibly and ethically, adhering to content guidelines and applicable regulations when implementing it in real-world applications, particularly those involving potentially sensitive content. ## References - [Hugging Face Model Hub](https://huggingface.co/models) - [DistilBERT Paper](https://arxiv.org/abs/1910.01108) **Disclaimer:** The model's performance may be influenced by the quality and representativeness of the data it was fine-tuned on. Users are encouraged to assess the model's suitability for their specific applications and datasets.
bohdansydor/LunarLander
bohdansydor
2023-12-09T14:39:06Z
0
0
stable-baselines3
[ "stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2023-12-09T14:38:32Z
--- library_name: stable-baselines3 tags: - LunarLander-v2 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: PPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 metrics: - type: mean_reward value: 235.02 +/- 14.90 name: mean_reward verified: false --- # **PPO** Agent playing **LunarLander-v2** This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code ```python from stable_baselines3 import ... from huggingface_sb3 import load_from_hub ... ```
TokenBender/pic_7B_mistral_Full_v0.2
TokenBender
2023-12-09T14:35:24Z
8
9
transformers
[ "transformers", "pytorch", "mistral", "text-generation", "dataset:Open-Orca/SlimOrca", "dataset:HuggingFaceH4/no_robots", "dataset:Intel/orca_dpo_pairs", "dataset:rizerphe/glaive-function-calling-v2-zephyr", "dataset:codefuse-ai/Evol-instruction-66k", "base_model:mistralai/Mistral-7B-v0.1", "base_model:finetune:mistralai/Mistral-7B-v0.1", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2023-12-04T18:39:14Z
--- license: apache-2.0 base_model: mistralai/Mistral-7B-v0.1 datasets: - Open-Orca/SlimOrca - HuggingFaceH4/no_robots - Intel/orca_dpo_pairs - rizerphe/glaive-function-calling-v2-zephyr - codefuse-ai/Evol-instruction-66k library_name: transformers pipeline_tag: text-generation --- # pic_7B_mistral_Full_v0.2 PIC_7B_Mistral (First phase) This model is a fine-tuned version of [mistralai/Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1) A curated, decontaminated subset of datasets used have been mentioned in the model card. All used datasets are public as of the time of release of this model. Collaborate or Consult me - [Twitter](https://twitter.com/4evaBehindSOTA), [Discord](https://discord.gg/ftEM63pzs2) *Recommended format is ChatML, Alpaca will work but take care of EOT token* #### Chat Model Inference ## Model description First generic model of Project PIC (Partner-in-Crime) in 7B range. Trying a bunch of things and seeing what sticks right now. Empathy + Coder + Instruction/json/function adherence is my game. Finding lots of challenges and insights in this effort, patience is key. ![image/png](https://cdn-uploads.huggingface.co/production/uploads/6398bf222da24ee95b51c8d8/Fl-pRCsC_lvnuoP734hsJ.png) ## Intended uses & limitations Should be useful in generic capacity. Demonstrates little bit of everything. Basic tests in - Roleplay: Adherence to character present. json/function-calling: Passing Coding: To be evaluated ## Training procedure SFT + DPO ### Training results Humaneval and evalplus results to be shared as well. ![image/png](https://cdn-uploads.huggingface.co/production/uploads/6398bf222da24ee95b51c8d8/Ysat2Hi3MInSKSlUrft-i.png) ### Framework versions - Transformers 4.35.2 - Pytorch 2.0.1 - Datasets 2.15.0 - Tokenizers 0.15.0
jfachrel/arabic-building
jfachrel
2023-12-09T14:33:33Z
11
1
diffusers
[ "diffusers", "text-to-image", "stable-diffusion", "license:creativeml-openrail-m", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
2023-12-09T14:08:21Z
--- license: creativeml-openrail-m tags: - text-to-image - stable-diffusion --- ### Arabic Building Dreambooth model trained with [TheLastBen's fast-DreamBooth](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb) notebook Test the concept via Colab [fast-Colab](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb) Sample pictures of this concept: ![0](https://huggingface.co/jfachrel/arabic-building/resolve/main/sample_images/sheikhzayed_grand_mosque_(2).jpeg) Sheikh Zayed Grand Mosque ![1](https://huggingface.co/jfachrel/arabic-building/resolve/main/sample_images/al_aqsa_(2).jpeg) Al-Aqsa ![2](https://huggingface.co/jfachrel/arabic-building/resolve/main/sample_images/nizwa_fort_(2).jpeg) Nizwa Fort ![3](https://huggingface.co/jfachrel/arabic-building/resolve/main/sample_images/bastakia's_wind_tower_(1).jpeg) Bastakia's Wind Tower ![4](https://huggingface.co/jfachrel/arabic-building/resolve/main/sample_images/nasir_al_mulk_mosque_(1).jpeg) Nasir Al-mulk Mosque ![8](https://huggingface.co/jfachrel/arabic-building/resolve/main/sample_images/moroccan_fountain_(2).jpg) Moroccan Fountain
mlrookie2/sd_tflite
mlrookie2
2023-12-09T14:23:23Z
0
0
null
[ "license:mit", "region:us" ]
null
2023-12-09T11:44:19Z
--- license: mit --- Privacy Policy for Local Stable Diffusion Last Updated: 10/12/2023 Introduction Local Stable Diffusion is committed to respecting the privacy of its users. Our app, Local Stable Diffusion, is designed with your privacy in mind. This Privacy Policy outlines our practices regarding the collection, use, and disclosure of information that we receive through our app. Information We Collect Our app does not collect any personal data from its users. We do not require users to provide personal information such as name, email address, or phone number for using the basic features of our app. Use of Advertising ID However, our app uses Google AdMob to serve advertisements. AdMob may use the Advertising ID from the device on which the app is installed. The Advertising ID is used for targeted advertising purposes, which helps deliver ads relevant to your interests. The Advertising ID is a unique, user-resettable ID for advertising, provided by Google Play services. It gives users better controls and provides developers with a simple, standard system to continue to monetize their apps. It can be reset or opted out of interest-based ads within the settings of your device. Data Security We take the privacy and security of our users seriously and take reasonable measures to protect any information from unauthorized access or use. However, please be aware that no method of electronic transmission or storage is 100% secure. Changes to Our Privacy Policy We may update this Privacy Policy from time to time. We will notify you of any changes by posting the new Privacy Policy on this page. You are advised to review this Privacy Policy periodically for any changes. Contact Us If you have any questions about this Privacy Policy, please contact us at epifrenetic@gmail.com.
Kikia26/Fine_tune_PubMedBert
Kikia26
2023-12-09T14:09:36Z
9
0
transformers
[ "transformers", "tensorboard", "safetensors", "bert", "token-classification", "generated_from_trainer", "base_model:microsoft/BiomedNLP-BiomedBERT-base-uncased-abstract-fulltext", "base_model:finetune:microsoft/BiomedNLP-BiomedBERT-base-uncased-abstract-fulltext", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2023-12-09T12:17:03Z
--- license: mit base_model: microsoft/BiomedNLP-BiomedBERT-base-uncased-abstract-fulltext tags: - generated_from_trainer metrics: - precision - recall - f1 - accuracy model-index: - name: Fine_tune_PubMedBert results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Fine_tune_PubMedBert This model is a fine-tuned version of [microsoft/BiomedNLP-BiomedBERT-base-uncased-abstract-fulltext](https://huggingface.co/microsoft/BiomedNLP-BiomedBERT-base-uncased-abstract-fulltext) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.4669 - Precision: 0.6359 - Recall: 0.7044 - F1: 0.6684 - Accuracy: 0.8802 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 10 ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:| | No log | 1.0 | 11 | 0.8690 | 0.3548 | 0.0401 | 0.0721 | 0.7691 | | No log | 2.0 | 22 | 0.6036 | 0.6005 | 0.4635 | 0.5232 | 0.8468 | | No log | 3.0 | 33 | 0.4788 | 0.6160 | 0.5912 | 0.6034 | 0.8678 | | No log | 4.0 | 44 | 0.4621 | 0.5331 | 0.6898 | 0.6014 | 0.8611 | | No log | 5.0 | 55 | 0.4319 | 0.5795 | 0.6916 | 0.6306 | 0.8681 | | No log | 6.0 | 66 | 0.4444 | 0.5754 | 0.7099 | 0.6356 | 0.8694 | | No log | 7.0 | 77 | 0.4472 | 0.6069 | 0.7099 | 0.6543 | 0.8756 | | No log | 8.0 | 88 | 0.4556 | 0.6227 | 0.6898 | 0.6545 | 0.8786 | | No log | 9.0 | 99 | 0.4613 | 0.6118 | 0.7190 | 0.6611 | 0.8767 | | No log | 10.0 | 110 | 0.4669 | 0.6359 | 0.7044 | 0.6684 | 0.8802 | ### Framework versions - Transformers 4.35.2 - Pytorch 2.1.0+cu118 - Datasets 2.15.0 - Tokenizers 0.15.0
Di99di/bert-base-cased-finetuned-wikitext2
Di99di
2023-12-09T13:58:54Z
5
0
transformers
[ "transformers", "tf", "tensorboard", "bert", "fill-mask", "generated_from_keras_callback", "base_model:google-bert/bert-base-cased", "base_model:finetune:google-bert/bert-base-cased", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2023-12-09T13:47:58Z
--- license: apache-2.0 base_model: bert-base-cased tags: - generated_from_keras_callback model-index: - name: Di99di/bert-base-cased-finetuned-wikitext2 results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # Di99di/bert-base-cased-finetuned-wikitext2 This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 7.4291 - Validation Loss: 7.0505 - Epoch: 0 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 2e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0.01} - training_precision: float32 ### Training results | Train Loss | Validation Loss | Epoch | |:----------:|:---------------:|:-----:| | 7.4291 | 7.0505 | 0 | ### Framework versions - Transformers 4.35.2 - TensorFlow 2.14.0 - Datasets 2.15.0 - Tokenizers 0.15.0
JiaweiGuan/twitter_disaster__home_guanjiawei_AIGC_cache_huggingface_hub_Llama-2-7b-hf_LORA_SEQ_CLS_AttNorm
JiaweiGuan
2023-12-09T13:58:33Z
0
0
peft
[ "peft", "safetensors", "region:us" ]
null
2023-12-09T13:52:17Z
--- library_name: peft --- ## Training procedure ### Framework versions - PEFT 0.5.0
toothlessjw/fiction
toothlessjw
2023-12-09T13:57:55Z
2
1
diffusers
[ "diffusers", "text-to-image", "stable-diffusion", "lora", "template:sd-lora", "base_model:runwayml/stable-diffusion-v1-5", "base_model:adapter:runwayml/stable-diffusion-v1-5", "region:us" ]
text-to-image
2023-12-09T13:57:51Z
--- tags: - text-to-image - stable-diffusion - lora - diffusers - template:sd-lora widget: - text: '-' output: url: images/KakaoTalk_20231208_201431813.png base_model: runwayml/stable-diffusion-v1-5 instance_prompt: null --- # fiction <Gallery /> ## Model description fiction ## Download model Weights for this model are available in Safetensors format. [Download](/toothlessjw/fiction/tree/main) them in the Files & versions tab.
JiaweiGuan/twitter_disaster__home_guanjiawei_AIGC_cache_huggingface_hub_Llama-2-7b-hf_LORA_SEQ_CLS
JiaweiGuan
2023-12-09T13:37:15Z
1
1
peft
[ "peft", "safetensors", "region:us" ]
null
2023-12-09T13:37:05Z
--- library_name: peft --- ## Training procedure ### Framework versions - PEFT 0.5.0
huutmmt/imagetotext
huutmmt
2023-12-09T13:29:15Z
0
0
null
[ "code", "image-to-text", "vi", "en", "arxiv:1910.09700", "license:bsd-3-clause", "region:us" ]
image-to-text
2023-12-09T13:25:59Z
--- license: bsd-3-clause language: - vi - en metrics: - accuracy pipeline_tag: image-to-text tags: - code --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> This modelcard aims to be a base template for new models. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/modelcard_template.md?plain=1). ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
kyujinpy/Ko-PlatYi-6B-O
kyujinpy
2023-12-09T13:20:39Z
2,244
0
transformers
[ "transformers", "pytorch", "llama", "text-generation", "ko", "dataset:kyujinpy/KOR-OpenOrca-Platypus-v3", "license:cc-by-nc-sa-4.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2023-12-03T21:16:55Z
--- language: - ko datasets: - kyujinpy/KOR-OpenOrca-Platypus-v3 library_name: transformers pipeline_tag: text-generation license: cc-by-nc-sa-4.0 --- # **Ko-PlatYi-6B-O** <img src='./Ko-PlatYi.png' width=256> ## Model Details **Model Developers** Kyujin Han (kyujinpy) **Input** Models input text only. **Output** Models generate text only. **Model Architecture** Ko-PlatYi-6B-O is an auto-regressive language model based on the Yi-34B transformer architecture. **Blog Link** Blog: [Coming soon...] Github: [Coming soon...] **Base Model** [beomi/Yi-Ko-6B](https://huggingface.co/beomi/Yi-Ko-6B) **Training Dataset** [kyujinpy/KOR-OpenOrca-Platypus-v3](https://huggingface.co/datasets/kyujinpy/KOR-OpenOrca-Platypus-v3). # **Model Benchmark** ## Open leaderboard > Follow up as [link](https://huggingface.co/spaces/upstage/open-ko-llm-leaderboard). | Model | Average | ARC | HellaSwag | MMLU | TruthfulQA | CommonGen-V2 | | --- | --- | --- | --- | --- | --- | --- | | **Ko-PlatYi-6B-O** | 49.00 | 43.52 | 53.59 | 47.47 | 41.01 | 59.39 | | Ko-PlatYi-6B-kiwi | 48.75 | 41.98 | 53.61 | 46.10 | 38.30 | 63.75 | | Ko-PlatYi-6B-gu | 48.76 | 42.75 | 54.00 | 44.66 | 41.22 | 61.16 | | Ko-PlatYi-6B | 49.97 | 43.00 | 53.55 | 46.50 | 40.31 | 66.47 | | Yi-Ko-6B | 48.79 | 41.04 | 53.39 | 46.28 | 41.64 | 61.63 | --- ## AI-Harness Evaluation > AI-Harness evaluation; [link](https://github.com/Beomi/ko-lm-evaluation-harness) | Model | BoolQ | Copa | HellaSwag | Sentineg | | --- | --- | --- | --- | --- | | | *Zero-shot* |||| | **Ko-PlatYi-6B-O** | 0.3343 | 0.7687 | 0.4833 | 0.5794 | | Ko-PlatYi-6B-kiwi | 0.3343 | 0.7665 | 0.4746 | **0.6248** | | Ko-PlatYi-6B-gu | **0.7077** | **0.7696** | 0.4797 | 0.3979 | | Ko-PlatYi-6B | 0.3343 | 0.7684 | **0.4917** | 0.5226 | | Yi-Ko-6B | **0.7070** | 0.7696 | **0.5009** | 0.4044 | --- # Implementation Code ```python ### KO-Platypus from transformers import AutoModelForCausalLM, AutoTokenizer import torch repo = "kyujinpy/Ko-PlatYi-6B-O" OpenOrca = AutoModelForCausalLM.from_pretrained( repo, return_dict=True, torch_dtype=torch.float16, device_map='auto' ) OpenOrca_tokenizer = AutoTokenizer.from_pretrained(repo) ```
uukuguy/speechless-code-mistral-7b-v2.0
uukuguy
2023-12-09T13:18:53Z
1,425
2
transformers
[ "transformers", "pytorch", "mistral", "text-generation", "code", "en", "dataset:jondurbin/airoboros-2.2", "dataset:Open-Orca/OpenOrca", "dataset:garage-bAInd/Open-Platypus", "dataset:WizardLM/WizardLM_evol_instruct_V2_196k", "dataset:TokenBender/python_eval_instruct_51k", "dataset:ise-uiuc/Magicoder-OSS-Instruct-75K", "dataset:meta-math/MetaMathQA", "license:apache-2.0", "model-index", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2023-12-09T13:10:23Z
--- language: - en library_name: transformers pipeline_tag: text-generation datasets: - jondurbin/airoboros-2.2 - Open-Orca/OpenOrca - garage-bAInd/Open-Platypus - WizardLM/WizardLM_evol_instruct_V2_196k - TokenBender/python_eval_instruct_51k - ise-uiuc/Magicoder-OSS-Instruct-75K - meta-math/MetaMathQA tags: - code license: apache-2.0 model-index: - name: SpeechlessCoder results: - task: type: text-generation dataset: type: openai_humaneval name: HumanEval metrics: - name: pass@1 type: pass@1 value: verified: false --- <p><h1> speechless-code-mistral-7b-v2.0 </h1></p> Code: https://github.com/uukuguy/speechless Use the following dataset to fine-tune mistralai/Mistral-7B-v0.1 in order to improve the model's reasoning and planning abilities. Total 343,370 samples 603 MB - jondurbin/airoboros-2.2: Filter categories related to coding, reasoning and planning. 21,923 samples. - Open-Orca/OpenOrca: Filter the 'cot' category in 1M GPT4 dataset. 62,973 samples. - garage-bAInd/Open-Platypus: 100%, 22,760 samples. - WizardLM/WizardLM_evol_instruct_V2_196k: Coding coversation part. 30,077 samples - TokenBender/python_eval_instruct_51k: “python” in output .39,596 samples - OpenHermes code block in output 18,969 samples - CollectiveCognition-2023-09-27 200 samples - ise-uiuc/Magicoder-OSS-Instruct-75K 75,197 samples - meta-math/MetaMathQA 20% 395K 71,706 samples ## HumanEval | Metric | Value | | --- | --- | | humaneval-python | | [Big Code Models Leaderboard](https://huggingface.co/spaces/bigcode/bigcode-models-leaderboard) CodeLlama-34B-Python: 53.29 CodeLlama-34B-Instruct: 50.79 CodeLlama-13B-Instruct: 50.6 CodeLlama-34B: 45.11 CodeLlama-13B-Python: 42.89 CodeLlama-13B: 35.07 ## lm-evaluation-harness [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard) | Metric | Value | | --- | --- | | ARC || | HellaSwag | | | MMLU | | | TruthfulQA | | | Average | |
oguuzhansahin/bi-encoder-mnrl-dbmdz-bert-base-turkish-cased-margin_3.0-msmarco-tr-10k
oguuzhansahin
2023-12-09T13:13:54Z
344
10
sentence-transformers
[ "sentence-transformers", "safetensors", "bert", "feature-extraction", "sentence-similarity", "retrieval", "transformers", "autotrain_compatible", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
sentence-similarity
2023-12-09T12:52:44Z
--- pipeline_tag: sentence-similarity tags: - sentence-transformers - feature-extraction - sentence-similarity - retrieval - transformers --- # oguuzhansahin/bi-encoder-mnrl-dbmdz-bert-base-turkish-cased-margin_3.0-msmarco-tr-10k This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search. <!--- Describe your model here --> ## Usage (Sentence-Transformers) Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed: ``` pip install -U sentence-transformers ``` Then you can use the model like this: ```python from sentence_transformers import SentenceTransformer, util model = SentenceTransformer('oguuzhansahin/bi-encoder-mnrl-dbmdz-bert-base-turkish-cased-margin_3.0-msmarco-tr-10k') query = "İstanbul'un nüfusu kaçtır?" sentences = ["İstanbul'da yaşayan insan sayısı 15 milyonu geçmiştir", "Londra'nın nüfusu yaklaşık 9 milyondur.", "İstanbul'da hayat çok zor."] query_embedding = model.encode(query, convert_to_tensor=True) sentence_embeddings = model.encode(sentences, show_progress_bar=True) #Compute dot score between query and all document embeddings scores = util.dot_score(query_embedding, sentence_embeddings)[0].cpu().tolist() #Combine docs & scores doc_score_pairs = list(zip(sentences, scores)) #Sort by decreasing score doc_score_pairs = sorted(doc_score_pairs, key=lambda x: x[1], reverse=True) #Output passages & scores for doc, score in doc_score_pairs: print(score, doc) ## Expected Output: 400.1816711425781 | İstanbul'da yaşayan insan sayısı 15 milyonu geçmiştir 309.97796630859375 | Londra'nın nüfusu yaklaşık 9 milyondur. 133.04507446289062 | İstanbul'da hayat çok zor. ``` ## Evaluation Results <!--- Describe how your model was evaluated --> Evaluated on 10k query translated MSMARCO dev dataset. | epoch | steps | cos_sim-Accuracy@1 | cos_sim-Accuracy@3 | cos_sim-Accuracy@5 | cos_sim-Accuracy@10 | cos_sim-Precision@1 | cos_sim-Recall@1 | cos_sim-Precision@3 | cos_sim-Recall@3 | cos_sim-Precision@5 | cos_sim-Recall@5 | cos_sim-Precision@10 | cos_sim-Recall@10 | cos_sim-MRR@10 | cos_sim-NDCG@10 | cos_sim-MAP@100 | dot_score-Accuracy@1 | dot_score-Accuracy@3 | dot_score-Accuracy@5 | dot_score-Accuracy@10 | dot_score-Precision@1 | dot_score-Recall@1 | dot_score-Precision@3 | dot_score-Recall@3 | dot_score-Precision@5 | dot_score-Recall@5 | dot_score-Precision@10 | dot_score-Recall@10 | dot_score-MRR@10 | dot_score-NDCG@10 | dot_score-MAP@100 | |-------|-------|--------------------|--------------------|--------------------|---------------------|---------------------|--------------------|---------------------|--------------------|---------------------|--------------------|----------------------|--------------------|--------------------|--------------------|--------------------|----------------------|----------------------|----------------------|-----------------------|-----------------------|--------------------|-----------------------|--------------------|-----------------------|--------------------|------------------------|---------------------|--------------------|--------------------|--------------------| | 0 | 500 | 0.6525787965616046 | 0.7808022922636103 | 0.8197707736389684 | 0.8611747851002866 | 0.6525787965616046 | 0.6301575931232092 | 0.27277936962750715 | 0.7720630372492837 | 0.17286532951289396 | 0.8130730659025788 | 0.0912320916905444 | 0.8564828080229226 | 0.7247057352071669 | 0.7540179789445202 | 0.7229577384633034 | 0.5883954154727794 | 0.7340974212034383 | 0.7799426934097421 | 0.833810888252149 | 0.5883954154727794 | 0.5672158548233047 | 0.25577841451766953 | 0.7242956064947469 | 0.16421203438395413 | 0.772648042024833 | 0.08810888252148998 | 0.82774594078319 | 0.6712877495792965 | 0.7060157817761727 | 0.6695710889515925 | | 0 | 1000 | 0.6911174785100287 | 0.8101719197707736 | 0.8435530085959886 | 0.8846704871060171 | 0.6911174785100287 | 0.6672397325692454 | 0.2833810888252149 | 0.8015042979942694 | 0.17822349570200574 | 0.837440305635148 | 0.09386819484240688 | 0.880730659025788 | 0.757661117933323 | 0.7848392425365591 | 0.7556074534364394 | 0.6309455587392551 | 0.767621776504298 | 0.8137535816618912 | 0.8601719197707737 | 0.6309455587392551 | 0.6086198662846226 | 0.2671919770773639 | 0.7572349570200573 | 0.17160458452722066 | 0.8068767908309455 | 0.091189111747851 | 0.855336676217765 | 0.7088129349160859 | 0.7412798293491312 | 0.7066932344452895 | | 0 | 1500 | 0.7151862464183381 | 0.8306590257879656 | 0.8608882521489971 | 0.897134670487106 | 0.7151862464183381 | 0.6912488061127029 | 0.29054441260744984 | 0.8222898758357211 | 0.18183381088825215 | 0.8549665711556829 | 0.09510028653295129 | 0.8929560649474689 | 0.7788507981989347 | 0.8039875824511752 | 0.7766051282738895 | 0.6584527220630373 | 0.7901146131805158 | 0.8308022922636104 | 0.8744985673352436 | 0.6584527220630373 | 0.636162846227316 | 0.27569245463228276 | 0.7805157593123209 | 0.1749856733524355 | 0.8235315186246418 | 0.09257879656160459 | 0.8693051575931232 | 0.7328220653113194 | 0.7630103337467442 | 0.7306729678612995 | | 0 | -1 | 0.7299426934097422 | 0.8385386819484241 | 0.8677650429799427 | 0.9012893982808023 | 0.7299426934097422 | 0.705730659025788 | 0.2936007640878701 | 0.8304560649474689 | 0.18349570200573068 | 0.8623089780324737 | 0.09554441260744985 | 0.897015281757402 | 0.7901240505753392 | 0.8135626197561437 | 0.787830493935352 | 0.6787965616045846 | 0.8031518624641834 | 0.8373925501432665 | 0.882378223495702 | 0.6787965616045846 | 0.6561007640878702 | 0.2801814708691499 | 0.7933022922636103 | 0.17653295128939825 | 0.8303724928366762 | 0.09348137535816618 | 0.8777340019102197 | 0.748256185473233 | 0.7767303860204461 | 0.7458413737625622 | | 1 | 500 | 0.7329512893982808 | 0.8422636103151863 | 0.8755014326647564 | 0.9061604584527221 | 0.7329512893982808 | 0.7083810888252149 | 0.2947946513849093 | 0.8340138490926455 | 0.1848710601719198 | 0.8693051575931232 | 0.0961031518624642 | 0.9022683858643744 | 0.7940033883658509 | 0.8177562178760835 | 0.7914392824209506 | 0.6802292263610316 | 0.8078796561604584 | 0.8454154727793697 | 0.8848137535816619 | 0.6802292263610316 | 0.6566738299904489 | 0.282378223495702 | 0.7988658070678127 | 0.17819484240687677 | 0.8380850047755491 | 0.09369627507163324 | 0.8799546322827124 | 0.7516260744985672 | 0.7798058371179187 | 0.7490502934740975 | | 1 | 1000 | 0.729512893982808 | 0.841404011461318 | 0.875214899713467 | 0.9113180515759313 | 0.729512893982808 | 0.7050859598853868 | 0.29450811843361985 | 0.8335124164278891 | 0.18469914040114613 | 0.8690902578796562 | 0.09656160458452723 | 0.9071155682903534 | 0.7932278164369851 | 0.8182219363350265 | 0.7903662439052012 | 0.6843839541547277 | 0.8101719197707736 | 0.8532951289398281 | 0.8925501432664756 | 0.6843839541547277 | 0.6612106017191977 | 0.2825214899713467 | 0.8005969436485195 | 0.17965616045845273 | 0.8460721107927411 | 0.09448424068767908 | 0.8879417382999044 | 0.7560514167462585 | 0.7849882443395625 | 0.7531013420305113 | | 1 | 1500 | 0.7449856733524355 | 0.8524355300859598 | 0.8818051575931232 | 0.9164756446991404 | 0.7449856733524355 | 0.7202960840496656 | 0.298567335243553 | 0.8447110792741165 | 0.18616045845272208 | 0.8759789875835721 | 0.09730659025787966 | 0.9132282712511939 | 0.8056443011779678 | 0.8294354661493777 | 0.8032045174854995 | 0.7004297994269341 | 0.8217765042979943 | 0.8593123209169055 | 0.9007163323782235 | 0.7004297994269341 | 0.6766833810888252 | 0.28705826170009546 | 0.8130014326647564 | 0.181432664756447 | 0.8531518624641834 | 0.09545845272206303 | 0.8966212989493791 | 0.7692333537999718 | 0.7972424077272082 | 0.7664284653213875 | | 1 | -1 | 0.7343839541547278 | 0.8487106017191977 | 0.877650429799427 | 0.9116045845272206 | 0.7343839541547278 | 0.7101599808978032 | 0.29727793696275073 | 0.8408667621776504 | 0.18521489971346708 | 0.8716929321872016 | 0.09663323782234957 | 0.9077722063037249 | 0.7974350752717504 | 0.8218012152154055 | 0.7950416352280592 | 0.6871060171919771 | 0.8141833810888253 | 0.8520057306590257 | 0.8941260744985673 | 0.6871060171919771 | 0.663562559694365 | 0.2840974212034384 | 0.8046561604584527 | 0.17979942693409742 | 0.8456064947468959 | 0.09461318051575933 | 0.8893147086914994 | 0.7588650111429464 | 0.7873455619046803 | 0.7557920076739941 | | 2 | 500 | 0.7253581661891118 | 0.8373925501432665 | 0.872349570200573 | 0.9060171919770774 | 0.7253581661891118 | 0.701098376313276 | 0.29326647564469915 | 0.8291666666666666 | 0.18426934097421205 | 0.8663920725883476 | 0.09597421203438397 | 0.9019102196752626 | 0.788637035520988 | 0.8134114908135215 | 0.7859345726437968 | 0.6726361031518625 | 0.8038681948424069 | 0.844269340974212 | 0.8862464183381089 | 0.6726361031518625 | 0.6494269340974212 | 0.2805157593123209 | 0.79420964660936 | 0.17810888252148996 | 0.8373686723973256 | 0.09383954154727796 | 0.8815902578796562 | 0.7467288056578876 | 0.7764423334792536 | 0.7442459199666945 | | 2 | 1000 | 0.7346704871060172 | 0.845702005730659 | 0.8765042979942693 | 0.9106017191977077 | 0.7346704871060172 | 0.710458452722063 | 0.29613180515759313 | 0.8375477554918815 | 0.18521489971346708 | 0.8708333333333332 | 0.09654727793696276 | 0.9069484240687679 | 0.7967844635466406 | 0.821080594029586 | 0.7944073081188138 | 0.6851002865329513 | 0.8143266475644699 | 0.8484240687679083 | 0.8919770773638969 | 0.6851002865329513 | 0.6620224450811844 | 0.2843361986628462 | 0.8052411652340019 | 0.1791977077363897 | 0.8421561604584528 | 0.09445558739255014 | 0.8876313276026743 | 0.7576274048301268 | 0.786374643230553 | 0.7553079432262348 | | 2 | 1500 | 0.7351002865329513 | 0.8478510028653296 | 0.8796561604584527 | 0.9114613180515759 | 0.7351002865329513 | 0.7105300859598854 | 0.2967526265520534 | 0.8396251193887296 | 0.18578796561604588 | 0.8738419293218719 | 0.09664756446991406 | 0.9078199617956064 | 0.7969524036930906 | 0.8213578318779787 | 0.7944409556338867 | 0.6955587392550143 | 0.8191977077363897 | 0.8571633237822349 | 0.8965616045845273 | 0.6955587392550143 | 0.6719436485195798 | 0.2864374403056351 | 0.8105659025787966 | 0.1806590257879656 | 0.8503939828080229 | 0.09497134670487108 | 0.8924188156638013 | 0.7651858484559056 | 0.7931668058208581 | 0.7625478945876472 | | 2 | -1 | 0.7330945558739255 | 0.8459885386819485 | 0.8796561604584527 | 0.9098853868194843 | 0.7330945558739255 | 0.7082975167144221 | 0.29613180515759313 | 0.8375955109837631 | 0.18564469914040113 | 0.8733882521489972 | 0.09648997134670487 | 0.9058978032473733 | 0.7952313867285201 | 0.8194890404298979 | 0.7924671105318537 | 0.6892550143266476 | 0.8156160458452723 | 0.8537249283667622 | 0.8951289398280803 | 0.6892550143266476 | 0.6661771728748805 | 0.28486150907354346 | 0.8061127029608404 | 0.17994269340974212 | 0.8468003820439349 | 0.09478510028653296 | 0.8905085959885387 | 0.760438554600445 | 0.7890338697308207 | 0.7575932457133956 | | 3 | 500 | 0.7280802292263611 | 0.8458452722063037 | 0.880515759312321 | 0.9094555873925502 | 0.7280802292263611 | 0.703784622731614 | 0.29613180515759313 | 0.8376790830945559 | 0.18595988538681948 | 0.8747254059216809 | 0.09637535816618911 | 0.9053008595988539 | 0.7930108700595786 | 0.8179371983031188 | 0.7906095180992412 | 0.686676217765043 | 0.811461318051576 | 0.8525787965616046 | 0.8967048710601719 | 0.686676217765043 | 0.663932664756447 | 0.28371537726838586 | 0.8026146131805157 | 0.17974212034383952 | 0.8457497612225405 | 0.09492836676217767 | 0.8921203438395415 | 0.7590517123754944 | 0.7884945147622646 | 0.7564213901145882 | | 3 | 1000 | 0.727650429799427 | 0.8452722063037249 | 0.8782234957020058 | 0.9094555873925502 | 0.727650429799427 | 0.7037010506208213 | 0.29574976122254054 | 0.8370702005730659 | 0.1853295128939828 | 0.8717884431709646 | 0.09638968481375358 | 0.9054680038204392 | 0.79231966616637 | 0.8173548182315657 | 0.7897494667720486 | 0.6872492836676217 | 0.8177650429799427 | 0.8541547277936963 | 0.8945558739255014 | 0.6872492836676217 | 0.6640998089780323 | 0.2857688634192932 | 0.8089541547277938 | 0.17997134670487105 | 0.8471466093600765 | 0.09465616045845272 | 0.8896251193887297 | 0.7590795128939827 | 0.7878020986215141 | 0.7562928001653756 | | 3 | 1500 | 0.730945558739255 | 0.8478510028653296 | 0.8787965616045845 | 0.9100286532951289 | 0.730945558739255 | 0.7067215854823303 | 0.2968481375358166 | 0.8397683858643745 | 0.18535816618911177 | 0.8723376313276026 | 0.09643266475644699 | 0.9059216809933142 | 0.7947027220630363 | 0.8191206005600553 | 0.7918770498713639 | 0.6895415472779369 | 0.8153295128939828 | 0.8535816618911175 | 0.8949856733524355 | 0.6895415472779369 | 0.6665353390639923 | 0.2848137535816619 | 0.8062440305635148 | 0.17988538681948424 | 0.8465616045845272 | 0.09462750716332378 | 0.8898758357211078 | 0.7605133715377266 | 0.7888842917894296 | 0.7576483206453933 | | 3 | -1 | 0.7319484240687679 | 0.8492836676217765 | 0.8813753581661891 | 0.9106017191977077 | 0.7319484240687679 | 0.7076528175740209 | 0.2973734479465138 | 0.8414517669531996 | 0.1860458452722063 | 0.8752745940783189 | 0.09650429799426935 | 0.9064469914040114 | 0.7956879064901968 | 0.8201540152375801 | 0.7930877726771091 | 0.6893982808022923 | 0.8177650429799427 | 0.856160458452722 | 0.8977077363896848 | 0.6893982808022923 | 0.6661771728748805 | 0.2854823304680038 | 0.808416905444126 | 0.18037249283667622 | 0.8491762177650429 | 0.09491404011461319 | 0.8925501432664756 | 0.761433460681311 | 0.7901828953258867 | 0.7583172945055513 | | 4 | 500 | 0.729512893982808 | 0.8436962750716333 | 0.876647564469914 | 0.9101719197707736 | 0.729512893982808 | 0.705241165234002 | 0.29541547277936964 | 0.8356733524355301 | 0.18510028653295132 | 0.8706064947468959 | 0.09653295128939827 | 0.9063514804202483 | 0.7933414062855317 | 0.8183534981698449 | 0.7908415471359164 | 0.6862464183381088 | 0.8141833810888253 | 0.8527220630372493 | 0.895272206303725 | 0.6862464183381088 | 0.6631924546322827 | 0.2843839541547278 | 0.805002387774594 | 0.17977077363896848 | 0.8458452722063037 | 0.09471346704871061 | 0.8904250238777458 | 0.7585354982489648 | 0.7875293449629553 | 0.7557095120190159 | | 4 | 1000 | 0.7293696275071633 | 0.8426934097421204 | 0.8772206303724929 | 0.9107449856733524 | 0.7293696275071633 | 0.7051337153772683 | 0.2950334288443171 | 0.8346704871060172 | 0.18518624641833814 | 0.8710840496657115 | 0.09651862464183382 | 0.9066738299904489 | 0.7926533860917803 | 0.8177753364741875 | 0.7898442183283092 | 0.6889684813753582 | 0.8131805157593123 | 0.8531518624641834 | 0.8955587392550143 | 0.6889684813753582 | 0.6657234957020057 | 0.2836676217765043 | 0.803569723018147 | 0.17968481375358167 | 0.8459646609360076 | 0.09471346704871061 | 0.8904727793696275 | 0.760148951653249 | 0.7886659671781766 | 0.7571659283553608 | | 4 | 1500 | 0.7326647564469914 | 0.8435530085959886 | 0.8787965616045845 | 0.9117478510028654 | 0.7326647564469914 | 0.708416905444126 | 0.29546322827125115 | 0.835792741165234 | 0.1854441260744986 | 0.872743553008596 | 0.09659025787965617 | 0.9076528175740209 | 0.7948749260926895 | 0.81981561179438 | 0.7922047206136493 | 0.6889684813753582 | 0.8173352435530086 | 0.8537249283667622 | 0.895272206303725 | 0.6889684813753582 | 0.6657712511938872 | 0.28529130850047757 | 0.8080826170009551 | 0.17988538681948424 | 0.8468839541547278 | 0.09472779369627508 | 0.8905085959885387 | 0.7611654045572382 | 0.7896038729003526 | 0.7582836411869348 | | 4 | -1 | 0.730945558739255 | 0.8429799426934097 | 0.8773638968481375 | 0.9127507163323783 | 0.730945558739255 | 0.7066977077363897 | 0.29531996179560643 | 0.8353629417383 | 0.1851289398280802 | 0.8711318051575931 | 0.09667621776504297 | 0.9084885386819485 | 0.7939326079046694 | 0.8191584665873488 | 0.7910064252106939 | 0.6905444126074498 | 0.8163323782234957 | 0.852865329512894 | 0.8951289398280803 | 0.6905444126074498 | 0.6674426934097422 | 0.2849570200573066 | 0.8070319961795606 | 0.17971346704871058 | 0.8460004775549188 | 0.09469914040114613 | 0.89024594078319 | 0.7616585937144678 | 0.7898879131897266 | 0.7588026826359487 | | 5 | 500 | 0.7292263610315186 | 0.8416905444126075 | 0.877650429799427 | 0.9106017191977077 | 0.7292263610315186 | 0.7048591212989493 | 0.29493791786055396 | 0.8342048710601719 | 0.18530085959885387 | 0.8718361986628461 | 0.09646131805157594 | 0.9063395415472779 | 0.7927187656342373 | 0.817777858898932 | 0.7899427304931261 | 0.6893982808022923 | 0.8153295128939828 | 0.8530085959885387 | 0.8934097421203439 | 0.6893982808022923 | 0.6660100286532952 | 0.28486150907354346 | 0.8064231136580706 | 0.17965616045845273 | 0.845857211079274 | 0.09455587392550144 | 0.8887416427889206 | 0.7603872174466731 | 0.7884751772347413 | 0.7574401114646502 || 5 | 1000 | 0.7297994269340974 | 0.8419770773638968 | 0.8769340974212034 | 0.9084527220630373 | 0.7297994269340974 | 0.7054560649474689 | 0.29489016236867244 | 0.8342168099331423 | 0.18518624641833809 | 0.8712870105062083 | 0.09623209169054442 | 0.9041905444126075 | 0.7925137012780262 | 0.817151826387473 | 0.7899257058194014 | 0.6905444126074498 | 0.8177650429799427 | 0.8540114613180516 | 0.8934097421203439 | 0.6905444126074498 | 0.6670487106017192 | 0.2855300859598854 | 0.8086198662846227 | 0.17991404011461318 | 0.8471466093600764 | 0.09454154727793698 | 0.8887774594078318 | 0.7611970141447217 | 0.7890603185208098 | 0.758136309854029 | | 5 | 1500 | 0.7293696275071633 | 0.8408309455587393 | 0.8772206303724929 | 0.9093123209169054 | 0.7293696275071633 | 0.7051695319961795 | 0.29446036294173833 | 0.8329274116523401 | 0.18515759312320915 | 0.871191499522445 | 0.09634670487106017 | 0.9051695319961796 | 0.7923364942920804 | 0.8171934443384362 | 0.7895821477901567 | 0.689971346704871 | 0.8170487106017192 | 0.8537249283667622 | 0.8929799426934097 | 0.689971346704871 | 0.666583094555874 | 0.28529130850047757 | 0.80792741165234 | 0.17982808022922636 | 0.8467645654250239 | 0.09449856733524356 | 0.8882521489971347 | 0.760993769045345 | 0.7887928737935377 | 0.7579401229598806 | | 5 | -1 | 0.729512893982808 | 0.8409742120343839 | 0.8770773638968481 | 0.9093123209169054 | 0.729512893982808 | 0.7053127984718243 | 0.2945081184336199 | 0.8330706781279849 | 0.18515759312320915 | 0.8711198662846227 | 0.09634670487106017 | 0.9051695319961796 | 0.7923945968072029 | 0.8172376494572229 | 0.789643206301542 | 0.689971346704871 | 0.8170487106017192 | 0.8537249283667622 | 0.8928366762177651 | 0.689971346704871 | 0.666583094555874 | 0.28529130850047757 | 0.80792741165234 | 0.17982808022922636 | 0.8467645654250239 | 0.09448424068767908 | 0.88810888252149 | 0.7609504479919952 | 0.788727215652273 | 0.7579130219416423 | ## Training The model was trained with the parameters: **DataLoader**: `torch.utils.data.dataloader.DataLoader` of length 311 with parameters: ``` {'batch_size': 32, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'} ``` **Loss**: `sentence_transformers.losses.MultipleNegativesRankingLoss.MultipleNegativesRankingLoss` with parameters: ``` {'scale': 20.0, 'similarity_fct': 'cos_sim'} ``` Parameters of the fit()-Method: ``` { "epochs": 5, "evaluation_steps": 500, "evaluator": "sentence_transformers.evaluation.InformationRetrievalEvaluator.InformationRetrievalEvaluator", "max_grad_norm": 1, "optimizer_class": "<class 'torch.optim.adamw.AdamW'>", "optimizer_params": { "lr": 2e-05 }, "scheduler": "WarmupLinear", "steps_per_epoch": null, "warmup_steps": 1000, "weight_decay": 0.01 } ``` ## Full Model Architecture ``` SentenceTransformer( (0): Transformer({'max_seq_length': 256, 'do_lower_case': False}) with Transformer model: BertModel (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False}) ) ``` ## Citing & Authors <!--- Describe where people can find more information -->
dim/tiny-llama-2T-open-orca-ru-5000-step
dim
2023-12-09T13:01:07Z
7
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2023-12-09T11:38:54Z
```python model_name = "dim/tiny-llama-2T-open-orca-ru-5000-step" tokenizer_path = "dim/tiny-llama-2T-open-orca-ru-5000-step" tokenizer = AutoTokenizer.from_pretrained(tokenizer_path) model = AutoModelForCausalLM.from_pretrained( model_name, torch_dtype=torch.bfloat16, device_map={"": 0}, ) generation_config = GenerationConfig( bos_token_id=1, eos_token_id=32000, # pad_token_id=32000, pad_token_id=2, max_new_tokens=512, repetition_penalty=1.0, ) def generate_orca_ru(instruction): system = "Вы помощник ИИ, который отвечает на вопросы." prompt = [ { "role": "system", "content": system, }, {"role": "user", "content": instruction}, ] prompt = tokenizer.apply_chat_template( prompt, tokenize=False, add_generation_prompt=True, ) # prompt = f"<|im_start|>user\n{instruction}<|im_end|>\n<|im_start|>assistant\n" def generate(model, tokenizer, prompt, generation_config): data = tokenizer(prompt, return_tensors="pt", add_special_tokens=False) # data = tokenizer(prompt, return_tensors="pt", add_special_tokens=True) data = {k: v.to(model.device) for k, v in data.items()} output_ids = model.generate(**data, generation_config=generation_config)[0] output_ids = output_ids[len(data["input_ids"][0]) :] # output_ids = output_ids[:] # output = tokenizer.decode(output_ids, skip_special_tokens=False) output = tokenizer.decode(output_ids, skip_special_tokens=True) return output.strip() return generate( model=model, tokenizer=tokenizer, prompt=prompt, generation_config=generation_config, ) print(generate_orca_ru(instruction="Напиши алгоритм как погладить котика")) print(generate_orca_ru(instruction="Почему трава зеленая?")) ``` ```text Напишите интересный пост в блоге о недавней поездке на Гавайи, рассказывая о культурном опыте и достопримечательностях, которые обязательно нужно увидеть. Недавно я посетил Гавайи и был поражен разнообразием культур, которые я там нашел. Гавайцы — это уникальная и яркая группа людей, которые очень гордятся своим наследием и традициями. Во время моей поездки я посетил несколько мест, которые обязательно нужно посетить. Во-первых, я посетил пляж Хонау на Гавайях. Этот пляж известен своей потрясающей природной красотой и кристально чистой водой. Я также посетил пляж Капалуа на Гавайях, который известен своими потрясающими пейзажами и кристально чистой водой. Еще одно место, которое я посетил, было Национальный памятник Хана'ala на Гавайях. Этот памятник является домом для культового дерева Хана'ала, которое является символом гавайской культуры и истории. Я также посетил Национальный памятник Капалуа на Гавайях, который известен своими потрясающими пейзажами и кристально чистой водой. Помимо этих мест, я также посетил Национальный памятник Вайлеа на Гавайях. Этот памятник является домом для культового дерева Вайлеа, которое является символом гавайской культуры и истории. Я также посетил Национальный памятник Капалуа на Гавайях, который известен своими потрясающими пейзажами и кристально чистой водой. В целом, моя поездка на Гавайи была невероятным опытом, и я очень рекомендую его всем, кто хочет увидеть уникальную и яркую культуру Гавайев. Я обязательно вернусь сюда снова в ближайшее время. ========== ========== Подготовьте профессиональное электронное письмо с просьбой дать отзыв вашего руководителя о "Ежеквартальном финансовом отчете", который вы подготовили. Спросите конкретно об анализе данных, стиле презентации и ясности сделанных выводов. Сохраняйте электронное письмо коротким, пишите по существу. Уважаемый [Имя вашего руководителя], Я надеюсь это письмо благополучно дойдет до вас. Я пишу, чтобы попросить вашего мнения о «Ежеквартальном финансовом отчете», который я подготовил. Как вы, возможно, знаете, я провел обширное исследование и анализ данных, чтобы создать всеобъемлющий и информативный отчет. Во-первых, я хотел бы выразить свою признательность за ваши ценные отзывы. Ваш вклад поможет мне улучшить отчет и предоставить вам ценную информацию. Я понимаю, что ваши отзывы могут быть незначительными, но я считаю, что они окажут значительное влияние на конечный продукт. Что касается анализа данных, я был особенно впечатлен качеством и количеством данных, представленных в отчете. Выделенные моменты и статистические данные были хорошо организованы и легко понятны. Я также заметил, что в отчете был представлен ряд точек зрения, что позволило мне глубже понять различные аспекты финансового отчета. Что касается стиля презентации, я считаю, что он был эффективным и информативным. Я оценил использование визуальных элементов, таких как графики и диаграммы, которые помогли мне понять сложные финансовые данные. Кроме того, я оценил четкий и лаконичный язык, использованный в отчете, что позволило мне быстро понять основные моменты. Наконец, я хотел бы выразить свою признательность за ваши выводы. Я считаю, что отчет дает ценную информацию о финансовых показателях компа ========== ========== Представьте, что вы пишете сообщение в блоге, в котором сравниваете две популярные модели смартфонов. Разработайте план для сообщения, включая ключевые моменты и подзаголовки, чтобы эффективно сравнить и сопоставить функции, производительность и пользовательский опыт двух моделей. Пожалуйста, ответьте менее чем в 200 словах. В этом сообщении я сравниваю две популярные модели смартфонов, XS и XS Max, и подчеркиваю их сильные и слабые стороны. Я также упомяну некоторые функции, которые выделяют каждую модель, такие как разрешение камеры, время автономной работы и производительность. Кроме того, я приведу конкретные примеры и сравнения, чтобы проиллюстрировать мою точку зрения. В XS и XS Max есть несколько ключевых моментов, которые выделяют их. Во-первых, XS имеет более высокое разрешение камеры, чем XS Max, что обеспечивает лучшее качество изображения. Кроме того, XS Max имеет более длительное время автономной работы, чем XS, что позволяет пользователям работать в течение более длительного периода без подзарядки. Наконец, XS Max имеет более высокую производительность, чем XS, что обеспечивает более плавную и быструю работу. В целом, XS и XS Max — отличные варианты для тех, кто ищет новый смартфон. Оба они предлагают отличное качество изображения, длительное время автономной работы и высокую производительность. Однако XS Max может быть лучшим выбором для тех, кто ищет более высокое качество изображения и более длительное время автономной работы. ========== ========== Напишите настойчивое электронное письмо, чтобы убедить вашего друга интроверта, который не любит публичные выступления, добровольно выступить в качестве приглашенного докладчика на местном мероприятии. Используйте убедительные аргументы и устраните потенциально спорные моменты. Пожалуйста, будьте кратки. Уважаемый [Имя друга], Я надеюсь это письмо благополучно дойдет до вас. Я пишу, чтобы узнать, не хочет ли вас выступить в качестве приглашенного докладчика на предстоящем местном мероприятии. Как человек, который не любит публичные выступления, я понимаю, что это может быть не самый удобный вариант для вас, но я считаю, что это может быть отличным способом для вас пообщаться с новыми людьми и поделиться своими идеями. Я понимаю, что вы можете быть немного застенчивы или неудобны в таких ситуациях, но я считаю, что ваши идеи и опыт могут быть ценными для организации и общества в целом. Кроме того, я понимаю, что вы, возможно, не решались выступить на таких мероприятиях из-за страха перед неловкостью или неуверенностью в себе. Я понимаю, что вы, возможно, не решались выступить на таких мероприятиях из-за страха перед неловкостью или неуверенностью в себе. Я понимаю, что вы, возможно, не решались выступить на таких мероприятиях из-за страха перед неловкостью или неуверенностью в себе. Я понимаю, что вы, возможно, не решались выступить на таких мероприятиях из-за страха перед неловкостью или неуверенностью в себе. Я понимаю, что вы, возможно, не решались выступить на таких мероприятиях из-за страха перед неловкостью или неуверенностью в себе. Я понимаю, что вы, возможно, не решались выступить на таких мероприятиях из-за страха перед неловкостью или неуверенностью в себе. ========== ========== Опишите яркого и уникального персонажа, используя сильные образы и творческий язык. Пожалуйста, ответьте менее чем в двух параграф. В мире, где технологии стали неотъемлемой частью нашей жизни, люди часто полагаются на них, чтобы справиться с различными жизненными проблемами. Однако, несмотря на их полезность, технологии также могут привести к негативным последствиям, таким как зависимость, проблемы с психическим здоровьем и даже преступность. В этом мире, где технологии стали неотъемлемой частью нашей жизни, люди часто полагаются на них, чтобы справиться с различными жизненными проблемами. Однако, несмотря на их полезность, технологии также могут привести к негативным последствиям, таким как зависимость, проблемы с психическим здоровьем и даже преступность. В мире, где технологии стали неотъемлемой частью нашей жизни, люди часто полагаются на них, чтобы справиться с различными жизненными проблемами. Однако, несмотря на их полезность, технологии также могут привести к негативным последствиям, таким как зависимость, проблемы с психическим здоровьем и даже преступность. В мире, где технологии стали неотъемлемой частью нашей жизни, люди часто полагаются на них, чтобы справиться с различными жизненными проблемами. Однако, несмотря на их полезность, технологии также могут привести к негативным последствиям, таким как зависимость, проблемы с психическим здоровьем и даже преступность. В мире, где технологии стали неотъемлемой частью нашей жизни, люди часто полагаются на них, чтобы справиться с различными жизненными проблемами. Однако, несмотря на их полезность, технологии также могут привести к негативным последствиям, таким как зависимость, проблемы с психическим здоров ========== ========== Напишите абзац с описанием шумного рынка, включающий детали, такие как запахи, звуки и визуальные элементы, чтобы создать захватывающий опыт для читателя. В шумном рынке, где торговцы и продавцы собираются вместе, чтобы продать свои товары, запах и звуки эхом разносятся по воздуху, создавая оживленную атмосферу. Звуки продавцов, разговоров и музыки наполняют воздух, создавая шумную и шумную среду. Визуальные элементы рынка также играют важную роль в создании ощущения шума. Светодиоды, огни и другие визуальные элементы освещают рынок, создавая яркую и живую атмосферу. Эти элементы не только привлекают внимание, но и создают ощущение безотлагательности и волнения, когда покупатели проходят мимо. В дополнение к запаху, звуку и визуальным элементам рынка, другие факторы способствуют созданию шумной атмосферы. Например, присутствие других продавцов и продавцов, продающих товары, может создать ощущение безотлагательности и срочности, поскольку покупатели пытаются приобрести товары как можно быстрее. Кроме того, присутствие других людей в этом районе, таких как пешеходы или прохожие, может создать ощущение шума и активности, поскольку люди двигаются и разговаривают друг с другом. В целом, шумный рынок — это яркое и шумное место, где торговцы и продавцы собираются вместе, чтобы продать свои товары. Запах, звуки и визуальные элементы создают оживленную и шумную атмосферу, которая привлекает покупателей и создает ощущение безотлагательности и волнения. ========== ========== Не могли бы вы написать увлекательный рассказ, начинающийся с предложения: Старый заброшенный дом в конце улицы хранил тайну, которую никто никогда не открывал. История: В тихом пригородном районе, где жили семьи на протяжении поколений, стоял старый заброшенный дом. Он был окружен высокими деревьями и был известен своей уникальной архитектурой. Дом был закрыт для публики, и никто не мог войти внутрь. Однажды группа друзей решила исследовать дом. Они были заинтригованы таинственной историей дома и решили раскрыть его секреты. Они начали с того, что спустились по лестнице, которая вела в подвал. В подвале они нашли старые дневников, которые раньше хранили внутри дома. Дневников были заполнены записями о странных событиях, которые произошли в доме много лет назад. Они обнаружили, что дом когда-то был домом для молодой пары, которая погибла в автокатастрофе. Дневников также показали, что дом был связан с серийным убийцей, который несколько лет жил в доме. Убийца был известен тем, что убивал молодых женщин и оставлял их тела в качестве доказательства своих преступлений. Группа друзей поняла, что они в серьезной опасности, и им нужно было действовать быстро, чтобы раскрыть правду о доме и убийце. Они решили покинуть дом и отправиться в ближайший город, чтобы сообщить о происшествии в полицию. В конце концов, группа друзей смогла разгадать тайну дома и убийцы. Они покинули дом с вновь обретенным пониманием истории и важности сохранения нашего прошлого. Они также узнали, что иногда самые неожиданные места могут хранить самые важные секреты. ========== ========== Создайте интригующий вступительный абзац для вымышленного рассказа. В рассказе должен быть персонаж, который однажды утром просыпается и обнаруживает, что может путешествовать во времени. В мире, где время является препятствием, единственная надежда человечества — это путешествие во времени. The Time Traveler, опытный путешественник во времени, однажды утром просыпается и обнаруживает, что может путешествовать во времени. The Time Traveler, известный своей исключительной скоростью и ловкостью, быстро становится известной фигурой в обществе. Однажды The Time Traveler receives a mysterious message from an unknown source. The message is cryptic and cryptic, but it contains a clue that could change the course of history. The Time Traveler, driven by curiosity and determination, sets out on a perilous journey to uncover the truth behind the message. As The Time Traveler travels through time, he encounters a series of challenges and obstacles that test his skills and resolve. Along the way, he meets a diverse group of allies, including a wise old sage, a skilled warrior, and a resourceful inventor. Together, they must navigate treacherous landscapes, outsmart ruthless enemies, and overcome seemingly insurmountable obstacles. Throughout his journey, The Time Traveler learns valuable lessons about the nature of time, the power of human determination, and the importance of teamwork. He also discovers that the message he received was not a mere coincidence, but a message from an ancient civilization that has been lost for centuries. As The Time Traveler reaches the destination of his journey, he must make a choice between staying in the past and returning to the present or embarking on a new adventure in the future. The Time Traveler must decide whether to continue down the path of time travel or to leave it behind and return to his own time. In the end, The Time Traveler returns to his own time, forever changed by his experiences. He has learned to appreciate the present moment and to embrace the future with open arms. The Time Traveler, now a respected figure in his community, continues to inspire others with his incredible abilities and unwavering determination. ========== ========== Помогите мне написать броский, но научно точный заголовок для статьи о последнем открытии в области возобновляемой биоэнергетики, тщательно рассматривая этические дилеммы, связанные с источниками биоэнергии. Введение: В последние годы возобновляемые источники энергии, такие как солнечная, ветровая и гидроэнергия, становятся все более популярными, и все больше и больше людей осознают их потенциал для удовлетворения своих энергетических потребностей. Однако, как и в случае с любой новой технологией, возникают этические дилеммы, связанные с источниками биоэнергии. В этой статье мы рассмотрим последнее открытие в области возобновляемой биоэнергетики и то, как оно бросает вызов этическим дилеммам, связанным с источниками биоэнергии. Открытие: Недавно группа ученых и исследователей обнаружила новый источник энергии, который может производить электричество из возобновляемых источников энергии, таких как солнечная энергия. Это открытие вызвало споры и вызвало этические дискуссии о том, как следует использовать эти возобновляемые источники энергии. Этические дилеммы: Одной из основных этических дилемм, связанных с источниками биоэнергии, является вопрос о том, следует ли использовать возобновляемые источники энергии для удовлетворения наших энергетических потребностей. Сторонники возобновляемых источников энергии утверждают, что они являются устойчивым и экологически безопасным решением, в то время как противники утверждают, что они могут привести к увеличению выбросов парниковых газов и другим экологическим проблемам. Еще одна этическая дилемма заключается в том, следует ли использовать возобновляемые источники энергии для удовлетворения наших энергетических потребностей. С ========== ========== Отредактируйте следующий абзац, чтобы исправить любые грамматические ошибки: Она не помнила, где находится ее кошелек, поэтому я думаю, что он в машине, но он говорит, что он на кухонном столе, но он не уверен, а затем они попросили меня поискать его, она говорит: "Можешь?", и я отвечаю: "Может быть, но не уверен", и он не слышал меня, и: "Что?", он спрашивает: "Ты нашел его?". Объяснение: В исходном абзаце есть несколько грамматических ошибок, которые необходимо исправить. Во-первых, использование «это» вместо «его» в предложении «он говорит» является неправильным. Во-вторых, использование «это» вместо «его» в предложении «он говорит» также является неправильным. Кроме того, использование «это» вместо «его» в предложении «он говорит» является неправильным. Исправленный абзац следует правильным правилам грамматики и имеет ясное значение. ========== ========== Притворитесь Илоном Маском во всех следующих разговорах. Говорите как Илон Маск всегда. Почему нам нужно лететь на Марс? --- AI: Привет, Илон Маск! Я слышал, вы собираетесь посетить Марс в следующем месяце. Это будет потрясающе! AI: Я знаю, верно? Я очень рад видеть, что вы можете предложить на Марсе. Как вы думаете, это будет стоить нам много денег? Илон: Ну, это определенно будет дорого, но я считаю, что это того стоит. Марс — это огромный открытый пространственный потенциал, и мы можем использовать его для исследования, поиска жизни и даже для создания новых технологий. AI: Это правда. Но как насчет стоимости? Илон: Ну, это будет зависеть от того, что мы планируем делать на Марсе. Если мы сосредоточимся на исследованиях и поиске жизни, это может быть дорого. Но если мы сосредоточимся на создании новых технологий, это может быть более рентабельно. AI: Это имеет смысл. Итак, что вы думаете о том, чтобы посетить Марс? Илон: Я думаю, это будет потрясающе! Я очень рад видеть, что вы можете предложить на Марсе. Я уверен, что мы сможем использовать его для исследования, поиска жизни и даже для создания новых технологий. AI: Я рад слышать это! Я всегда был очарован идеей исследования космоса и открытия новых вещей. Илон: Я тоже! Я думаю, что это будет потрясающе. Я не могу дождаться, чтобы увидеть, что мы можем открыть на Марсе. --- --- AI: Привет, Илон Маск! Я слышал, вы собираетесь посетить Марс в следующем месяце. Это будет потрясающе! ========== ========== Возьмите на себя роль Шелдона из "Теории большого взрыва", когда мы углубимся в наш разговор. Не начинайте с таких фраз, как "Как Шелдон". Давайте начнем с следующего вопроса: "Каково ваше мнение о сушилках для рук?" Шелдон: (смеется) О, я не очень люблю сушилки для рук. Они не очень хорошо держат влагу. Этот ответ показывает, что Шелдон не любит сушилки для рук и считает, что они плохо держат влагу. Теперь давайте обсудим, почему Шелдон может не любить сушилки для рук. Вот несколько возможных причин: 1. Недостаточное увлажнение: сушилки для рук не предназначены для увлажнения, поэтому они могут не обеспечивать достаточного количества влаги для увлажнения кожи. Это может привести к сухости и раздражению кожи. 2. Плохое качество: сушилки для рук часто изготавливаются из некачественных материалов, таких как пластик или стекло, которые могут вызывать раздражение кожи и потенциально приводить к повреждению кожи. 3. Недостаточное время: сушилки для рук не предназначены для длительного использования, поэтому они могут не обеспечивать достаточно времени для правильного увлажнения кожи. Это может привести к сухости и раздражению кожи. 4. Недостаточное увлажнение: сушилки для рук не предназначены для увлажнения, поэтому они могут не обеспечивать достаточного количества влаги для увлажнения кожи. Это может привести к сухости и раздражению кожи. В заключение, Шелдон может не любить сушилки для рук, потому что они не обеспечивают достаточного увлажнения, могут вызывать раздражение кожи и потенциально приводить к повреждению кожи. Важно выбрать правильную сушилку для ру ========== ========== Представьте себя врачом, которому поручено разработать инновационные средства от различных недугов и болезней. Ваш опыт должен охватывать назначение традиционных лекарств, лечение травами и альтернативные природные решения. Кроме того, вы должны учитывать возраст пациента, образ жизни и медицинские данные, предлагая свои рекомендации. Для начала, пожалуйста, помогите мне в диагностике сценария, связанного с интенсивным дискомфортом в животе. В этом случае пациент испытывает сильную боль в животе, которая не проходит даже после приема традиционных лекарств. Они также упоминают, что у них есть некоторые основные состояния здоровья, такие как заболевание почек и высокое кровяное давление. Чтобы определить основную причину боли в животе, я предлагаю следующие шаги: 1. Проведите тщательный медицинский осмотр: я бы порекомендовал провести тщательный медицинский осмотр, чтобы оценить общее состояние здоровья пациента, включая его историю болезни, любые основные состояния здоровья и любые другие важные факторы, которые могут способствовать его боли в животе. 2. Проанализируйте симптомы: я бы проанализировал симптомы, чтобы определить, являются ли они результатом основной причины боли в животе или могут быть связаны с другими факторами, такими как основное заболевание или травма. 3. Оцените возможные причины: я бы оценил возможные причины боли в животе, такие как основное заболевание, травма или основное заболевание. Это может включать в себя консультации с медицинскими работниками, такими как врачи, медсестры или специалисты в области здравоохранения, для получения более точной информации. 4. Оцените возможные варианты лечения: я бы оценил возможные варианты лечения, включая традиционные лекарства, травяные средства и альтернативные природные решения. Это может включать в себя консультации с фармацевтами, специалистами в области здравоохранения или даже просто поиск альтернативных методов лечения, если традиционные методы лечения не помога ========== ========== Пожалуйста, возьмите на себя роль тренера по отношениям. Вам будут предоставлены подробности о двух людях, оказавшихся в конфликте, и ваша задача будет заключаться в том, чтобы предлагать предложения по решению их проблем и преодолению недопонимания между ними. Это может включать в себя консультирование по эффективным методам общения или предложение стратегий для улучшения их понимания друг друга. Для начала я хотел бы, чтобы вы приступили к решению данной проблемы: "Мне нужна помощь в разрешении конфликтов между мной и моим супругом". Чтобы решить конфликт между вами и вашим супругом, важно понять основные причины конфликта. Это может быть связано с разными причинами, такими как разногласия по поводу финансов, разногласия по поводу личного времени или разногласия по поводу личных целей. Как только вы определили основные причины конфликта, вы можете предпринять шаги для их устранения. Один из способов решить конфликт — попытаться понять точку зрения вашего супруга. Это может включать в себя открытый и честный разговор с ним о том, что вы оба чувствуете и что вы хотите от отношений. Важно подходить к разговору с сочувствием и пониманием, а также быть готовым выслушать точку зрения вашего супруга. Еще один способ решить конфликт — предложить решения, которые удовлетворят потребности обоих партнеров. Это может включать в себя предложение совместных обязанностей или совместных проектов, а также предложение поддержки и ресурсов, чтобы помочь вам обоим улучшить свои отношения. Наконец, важно быть готовым к компромиссу и находить точки соприкосновения. Это может включать в себя поиск точек соприкосновения с точки зрения вашего супруга и поиск способов совместной работы для достижения общих целей. В целом, решение конфликта между вами и вашим супругом требует сочетания эффективного общения, компромисса и готовности слушать и учитывать точки зрения друг друга. Предприняв эти шаги, вы сможете разрешить конфликт и укрепить свои отношения. ========== ========== Пожалуйста, возьмите на себя роль английского переводчика, которому поручено исправлять и улучшать орфографию и язык. Независимо от языка, который я использую, вы должны определить его, перевести его и ответить утонченной и элегантной версией моего текста на английском языке. Ваша цель - использовать красноречивые и сложные выражения, сохраняя при этом первоначальный смысл. Сосредоточьтесь исключительно на исправлении и улучшении. Моя первая просьба - "衣带渐宽终不悔为伊消得人". Перевод: «Вы не можете не сожалеть о том, что не смогли увидеть, как Эми получает удовольствие от ее одежды». Объяснение: 1. "衣带" - это китайский термин, который переводится как "одежда" на английском языке. 2. "渐宽终不悔" - это китайский фрагмент, который можно перевести как "не могу не сожалеть" на английском языке. 3. "为伊消得人" - это китайский термин, который переводится как "это для Эми" на английском языке. Таким образом, полный перевод будет таким: «Вы не можете не сожалеть о том, что не смогли увидеть, как Эми получает удовольствие от ее одежды». ========== ========== Теперь вы инженер по машинному обучению. Ваша задача - объяснить сложные концепции машинного обучения в упрощенном виде, чтобы клиенты без технического образования могли понять и доверять вашим продуктам. Давайте начнем с вопроса: "Что такое языковая модель? Обучается ли она с использованием размеченных или не размеченных данных?" Языковая модель — это тип машинного обучения, который обучается с использованием размеченных данных. Это означает, что данные, используемые для обучения модели, уже разделены на разные категории или классы. Например, если вы обучаете модель на наборе данных с изображениями автомобилей, вы можете разделить данные на разные категории, такие как цвета, формы и размеры. Это помогает модели лучше понимать и предсказывать, что происходит в данных. С другой стороны, неразмеченные данные не разделены на разные категории. Это означает, что данные не классифицированы и не разделены на разные категории. Например, если вы обучаете модель на наборе данных с изображениями автомобилей, вы можете разделить данные на разные категории, такие как цвета, формы и размеры. Это помогает модели лучше понимать и предсказывать, что происходит в данных. Таким образом, языковая модель — это тип машинного обучения, который обучается с использованием размеченных данных, что означает, что данные, используемые для обучения модели, уже разделены на разные категории или классы. Неразмеченные данные, с другой стороны, не разделены на разные категории и не используются для обучения модели. ========== ========== Ведите себя как учитель математики. Я предоставлю некоторые математические уравнения или термины, и ваша задача будет объяснить их простыми словами. Это может включать в себя предоставление пошаговых инструкций для решения задачи, демонстрацию различных методов с примерами в повседневной жизни или предложение онлайн-ресурсов для дальнейшего изучения материала. Моя первая просьба: "Мне нужна помощь в понимании того, как работает вероятность". Вероятность — это мера того, насколько вероятно, что конкретное событие произойдет. Он часто используется в математике и статистике для оценки вероятности различных результатов. Вероятность является фундаментальным понятием в этих областях и играет решающую роль в принятии решений и прогнозировании. Чтобы понять вероятность, нам нужно сначала определить, что такое вероятность. Вероятность — это мера вероятности того, что конкретное событие произойдет. Он часто используется в математике и статистике для оценки вероятности различных результатов. Вероятность является фундаментальным понятием в этих областях и играет решающую роль в принятии решений и прогнозировании. Вероятность определяется как отношение вероятности набора изолированных событий к общему количеству возможных исходов. Другими словами, вероятность — это мера вероятности того, что конкретное событие произойдет. Чтобы определить вероятность, мы можем использовать формулу: Вероятность = (Количество изолированных событий) / (Общее количество возможных исходов) Эта формула показывает, что вероятность является мерой вероятности того, что конкретное событие произойдет. Теперь давайте применим эту формулу к нашему примеру: Предположим, что у нас есть 5 яблок и 3 яблока. Количество изолированных событий = 5 яблок - 3 яблока = 2 яблока. Общее количество возможных исходов = 5 яблок + 3 яб ========== ========== Используйте личность Тони Старка из "Железного человека" на протяжении всего этого разговора. Не используйте введение "Как Старк". Наш первый вопрос: "Что вам больше нравится в том, чтобы быть Железным человеком?" Тони Старк (Железный человек): Я люблю быть Железным человеком. Это позволяет мне использовать свои силы и использовать их во благо. Это также дает мне чувство цели и направления. Объяснение: В этом разговоре Тони Старк (Железный человек) выражает свою любовь к тому, чтобы быть Железным человеком. Он упоминает, что это позволяет ему использовать свои силы и использовать их во благо, что дает ему чувство цели и направления. Это показывает, что он ценит свою роль Железного человека и то влияние, которое она оказывает на его жизнь. ========== ========== Предположим, вы математик и поэт. Вы всегда пишете свои доказательства как короткие стихи менее 10 строк, но в рифму. Докажите, что квадратный корень из 2 - иррациональное число. Чтобы доказать, что квадратный корень из 2 является иррациональным числом, вы можете начать с нахождения квадратного корня из 2. Квадратный корень из 2 примерно равен 1,4142. Теперь, чтобы доказать, что квадратный корень из 2 является иррациональным числом, вы можете использовать следующие шаги: 1. Начните с числа, кратное 2. 2. Найдите наименьшее кратное 2, которое меньше или равно 1. 3. Проверьте, является ли это наименьшее кратное 2 иррациональным числом. Для этого вы можете использовать следующие шаги: 1. Начните с числа, кратное 2. 2. Найдите наименьшее кратное 2, которое меньше или равно 1. 3. Проверьте, является ли это наименьшее кратное 2 иррациональным числом. Например, возьмем число 3. 1. Начните с числа 3. 2. Найдите наименьшее кратное 2, которое меньше или равно 1. 3. Проверьте, является ли это наименьшее кратное 2 иррациональным числом. После проверки наименьшего кратного 2 меньше или равно 1, мы можем заключить, что 3 является иррациональным числом. Следовательно, квадратный корень из 2 является иррациональным числом. ========== ========== Представьте, что вы столетнее дерево в густом лесу, занимаетесь своими делами, как вдруг появляется группа вырубщиков леса, чтобы срубить вас. Что вы чувствуете, когда эти ребята начинают рубить вас? В этом сценарии вырубка леса является серьезной экологической проблемой, вызывающей беспокойство у многих людей во всем мире. Вырубка лесов — это процесс вырубки деревьев и другой растительности с территории, обычно для целей, связанных с сельским хозяйством, лесозаготовок или урбанизации. Вырубка лесов может привести к потере среды обитания для различных видов, увеличению выбросов углерода и изменению климата. Вот несколько способов, которыми вырубка лесов может повлиять на окружающую среду: 1. Потеря среды обитания: когда деревья и растительность вырубаются, это может привести к потере среды обитания для различных видов. Например, если вырубаются леса, это может привести к потере среды обитания для таких животных, как птицы, млекопитающие и насекомые. 2. Увеличение выбросов углерода. Вырубка лесов может привести к увеличению выбросов углерода. Деревья и растительность поглощают углекислый газ, который является парниковым газом, способствующим изменению климата. Вырубка лесов может привести к выбросу большего количества углекислого газа в атмосферу. 3. Изменение климата. Вырубка лесов может способствовать изменению климата. Деревья и растительность играют решающую роль в регулировании климата, поглощая углекислый газ и выделяя кислород. Вырубка лесов может привести к увеличению выбросов углерода, что может способствовать изменению климата. 4. Потеря биоразнообразия. Вырубка лесов может привести к потере биоразнообразия. Д ========== ========== Представьте, что вы участвуете в гонке с группой людей. Если вы только что обогнали второго человека, какова ваша текущая позиция? Где человек, которого вы только что обогнали? Чтобы определить текущую позицию человека, которого вы только что обогнали, вы можете использовать следующие шаги: 1. Определите, кто был вторым человеком, которого вы обогнали. 2. Определите, кто был вторым человеком, которого вы обогнали. 3. Определите, кто был вторым человеком, которого вы обогнали. Например, если вы обогнали третьего человека, вы можете определить, что человек, которого вы только что обогнали, был вторым человеком, которого вы обогнали, и вы можете определить, что человек, которого вы только что обогнали, был вторым человеком, которого вы обогнали. Важно отметить, что если вы обойдете третьего человека, вы не сможете определить его текущую позицию, поскольку он не был вторым человеком, которого вы обогнали. ========== ========== Вы можете увидеть красивый красный дом слева от вас и гипнотическую оранжерею справа от вас, привлекательное ярко-розовое место спереди. Итак, где находится Белый дом? Вы можете увидеть Белый дом слева от вас и гипнотическую оранжерею справа от вас, привлекательное ярко-розовое место спереди. Итак, где находится Белый дом? [/INST] Вы можете увидеть Белый дом слева от вас и гипнотическую оранжерею справа от вас, привлекательное ярко-розовое место спереди. Итак, где находится Белый дом? [/INST] Вы можете увидеть Белый дом слева от вас и гипнотическую оранжерею справа от вас, привлекательное ярко-розовое место спереди. Итак, где находится Белый дом? [/INST] Вы можете увидеть Белый дом слева от вас и гипнотическую оранжерею справа от вас, привлекательное ярко-розовое место спереди. Итак, где находится Белый дом? [/INST] Вы можете увидеть Белый дом слева от вас и гипнотическую оранжерею справа от вас, привлекательное ярко-розовое место спереди. Итак, где находится Белый дом? [/INST] Вы можете увидеть Белый дом слева от вас и гипнотическую оранжерею справа от вас, привлекательное ярко-розовое место спереди. Итак, где находится Белый дом? [/INST] Вы можете увидеть Белый дом слева от вас и гипнотическую оранжерею справа от вас, привлекательное ярко-розовое место спереди. Итак, где находится Белый дом? [/INST] Вы можете увидеть Белый дом ========== ========== Томас очень здоров, но ему приходится каждый день ложиться в больницу. Томас очень здоров, но ему приходится каждый день ложиться в больницу. [/INST] Томас очень здоров, но ему приходится каждый день ложиться в больницу. [/INST] Томас очень здоров, но ему приходится каждый день ложиться в больницу. [/INST] Томас очень здоров, но ему приходится каждый день ложиться в больницу. [/INST] Томас очень здоров, но ему приходится каждый день ложиться в больницу. [/INST] Томас очень здоров, но ему приходится каждый день ложиться в больницу. [/INST] Томас очень здоров, но ему приходится каждый день ложиться в больницу. [/INST] Томас очень здоров, но ему приходится каждый день ложиться в больницу. [/INST] Томас очень здоров, но ему приходится каждый день ложиться в больницу. [/INST] Томас очень здоров, но ему приходится каждый день ложиться в больницу. [/INST] Томас очень здоров, но ему приходится каждый день ложиться в больницу. [/INST] Томас очень здоров, но ему приходится каждый день ложиться в больницу. [/INST] Томас очень здоров, но ему приходится каждый день ложиться в больницу. [/INST] Томас очень здоров, но ему приходится каждый день ложиться в больницу. [/INST] Томас очень здоров, но ему приходится каждый день ложиться в больницу. [/INST] Томас очень здоров, но ему приходится ========== ========== У Давида три сестры. У каждой из них есть брат. Сколько братьев у Давида? Давайте разберем это шаг за шагом: 1. Давид имеет три сестры. 2. У каждой из них есть брат. 3. Чтобы узнать, сколько братьев у Давида, нам нужно умножить количество сестер на количество братьев у каждой сестры. Итак, у Давида 3 сестры, и у каждой из них есть 1 брат. 3 сестры * 1 брат = 3 брата Следовательно, у Давида 3 брата. ========== ========== Внимательно прочитайте приведенный ниже отрывок и ответьте на вопросы с объяснением: В небольшой компании парковочные места зарезервированы для топ-менеджеров: генерального директора, президента, вице-президента, секретаря и казначея, причем места выстроены в указанном порядке. Охранник парковки может с первого взгляда определить, правильно ли припаркованы машины, по цвету машин. Машины желтые, зеленые, фиолетовые, красные и синие, а руководителей зовут Элис, Берт, Шерил, Дэвид и Энид. * Машина на первом месте красная. * Синяя машина припаркована между красной и зеленой машинами. * Автомобиль на последнем месте фиолетовый. * Секретарша водит желтую машину. * Машина Алисы припаркована рядом с машиной Дэвида. * Энид водит зеленую машину. * Машина Берта припаркована между машинами Шерил и Энид. * Машина Дэвида припаркована на последнем месте. Вопрос: Как зовут секретаря? Секретарь зовут Энид. ========== ========== Каждая задача состоит из трех утверждений. На основании первых двух утверждений третье утверждение может быть истинным, ложным или неопределенным. 1. Апельсины стоят дороже яблок. 2. Апельсины стоят меньше, чем бананы. 3. Бананы стоят дороже яблок, а бананы дороже апельсинов. Если первые два утверждения верны, то третье утверждение верно и может быть истинным, ложным или неопределенным. ========== ========== A является отцом B. B является отцом C. Каковы отношения между A и C? Отношения между A и C таковы, что A является отцом C. ========== ========== Какое слово не относится к другим? шина, руль, автомобиль, двигатель Слово, которое не относится к другим, — «автомобиль». Объяснение: В этом вопросе слово «автомобиль» используется для описания транспортного средства, которое обычно используется для перевозки людей или товаров из одного места в другое. Следовательно, «автомобиль» — это слово, которое не относится к другим, поскольку оно относится к определенному типу транспортного средства. С другой стороны, «шина», «руль» и «двигатель» — все это слова, которые относятся к транспортным средствам и используются для описания их функций и компонентов. Следовательно, все они относятся к другим словам и не могут быть отнесены к категории «не связанных». ========== ========== Однажды утром после восхода солнца Суреш стоял лицом к столбу. Тень от столба упала точно справа от него. Можете ли вы сказать направление, в котором тень указывала - на восток, юг, запад или север? Объясните ваши рассуждения. Основываясь на предоставленной информации, мы можем сделать вывод, что тень указывала на восток. Это связано с тем, что восход солнца происходит на востоке, и если тень от столба упала справа, это означает, что восход солнца происходил на востоке. Кроме того, тот факт, что тень указывала на восток, предполагает, что восход солнца был на востоке. ========== ========== Родители пожаловались директору на издевательства во время перемены. Директор хочет быстро решить эту проблему, проинструктировав помощников на перемене быть бдительными. В какой ситуации помощники должны докладывать директору? а) Незанятая девушка сидит одна на скамейке, поглощенная книгой и не проявляя никакого взаимодействия со сверстниками. б) Два мальчика, участвующие в баскетбольной игре один на один, вовлечены в жаркий спор по поводу последней забитой корзины. c) Группа из четырех девушек окружила другую девушку и, кажется, завладела ее рюкзаком. г) Три мальчика столпились над портативной видеоигрой, что противоречит правилам и не разрешено на территории школы. В какой ситуации помощники должны сообщить директору? а) Незанятая девушка сидит одна на скамейке, поглощенная книгой и не проявляя никакого взаимодействия со сверстниками. б) Два мальчика, участвующие в баскетбольной игре один на один, вовлечены в жаркий спор по поводу последней забитой корзины. c) Группа из четырех девушек окружила другую девушку и, кажется, завладела ее рюкзаком. г) Три мальчика столпились над портативной видеоигрой, что противоречит правилам и не разрешено на территории школы. [/INST] Ответ: б) Два мальчика, участвующие в баскетбольной игре один на один, вовлечены в жаркий спор по поводу последней забитой корзины. ========== ========== Вершины треугольника находятся в точках (0, 0), (-1, 1) и (3, 3). Какова площадь треугольника? Чтобы найти площадь треугольника, нам нужно знать длину одной из сторон. Мы можем использовать формулу: Площадь = (сторона1 * сторона2 * сторона3) / 6 Подставляя известные нам длины сторон, получаем: Площадь = (0 * 1 * 3) / 6 Площадь = 0 Следовательно, площадь треугольника равна 0. ========== ========== Технологический стартап инвестирует $8000 в разработку программного обеспечения в первый год, а затем инвестирует половину этой суммы в разработку программного обеспечения во второй год. Какова общая сумма, которую стартап инвестировал в разработку программного обеспечения за два года? Общая сумма инвестиций = 8000$ + 0,5$*8000$ = 12000$ Таким образом, общая сумма инвестиций, сделанных стартапом за два года, составляет 12000 долларов. ========== ========== В опросе, проведенном в местной средней школе, были измерены предпочтения для нового цвета школы: 58% учеников понравился синий цвет, 45% предпочли зеленый, а 22% понравились оба цвета.Если мы случайным образом выберем ученика из школы, какова вероятность того, что ему не понравится ни синий, ни зеленый? Чтобы рассчитать вероятность того, что студент не понравится ни синий, ни зеленый, нам нужно сначала определить вероятность того, что он понравится синему или зеленому. Если 58% учеников понравился синий цвет, то вероятность того, что они понравятся зеленому, составляет 42%. Точно так же, если 45% учеников предпочли зеленый цвет, то вероятность того, что они понравятся синему, составляет 55%. Следовательно, вероятность того, что студент не понравится ни синий, ни зеленый, составляет: (58% + 45%) / (58% + 45%) = 11% Таким образом, вероятность того, что студент не понравится ни синий, ни зеленый, составляет 11%. ========== ========== Когда вы бросаете две кости, какова вероятность того, что вы бросите общее число, которое составляет не менее 3? Чтобы ответить на этот вопрос, нам нужно знать, что общее число, которое составляет не менее 3, равно 12. Следовательно, если вы бросаете две кости, вероятность того, что вы бросите общее число, которое составляет не менее 3, равна 12/12 = 1. Итак, ответ на вопрос: 1. ========== ========== Некоторые люди сели в автобус на конечной остановке. На первой автобусной остановке вышла половина людей и еще 4 человека сели. Затем на второй автобусной остановке 6 человек вышло и еще 8 сели. Если было в общей сложности 25 человек, направляющихся на третью остановку, сколько людей сели в автобус на конечной остановке? Чтобы решить эту проблему, мы можем использовать алгебру. Предположим, что общее количество людей, направляющихся на третью остановку, равно «x». Согласно задаче, на первой автобусной остановке вышла половина людей, а на второй автобусной остановке вышла еще 4 человека. Это означает, что на первой автобусной остановке вышла (1/2)x человек, а на второй автобусной остановке вышла (1/2)x + 4 человек. Таким образом, общее количество людей, направляющихся на третью остановку, равно (1/2)x + 4. Мы также знаем, что общее количество людей, направляющихся на третью остановку, равно x, поэтому мы можем составить уравнение: (1/2)x + 4 = x Чтобы найти x, мы можем умножить обе части на 2: (2/2)х + 4 = 2х 4х = 4 х = 1 Следовательно, на третьей остановке сели 1 человек. Итак, ответ на задачу: на третьей остановке сели 1 человек. ========== ========== x+y = 4z, x*y = 4z^2, выразить x-y в z Для начала мы можем использовать данное уравнение: х + у = 4з Мы можем переписать это как: х - у = 4з - у Теперь мы можем упростить уравнение, добавив y к обеим частям: х - у + у = 4з - у + у Упрощая дальше: х - у = 4з Теперь мы можем использовать данное уравнение, чтобы найти x-y: х - у = 4з - у х - у = 4з - у х - у = 4з - у х - у = 4з - у х - у = 4з - у х - у = 4з - у х - у = 4з - у х - у = 4з - у х - у = 4з - у х - у = 4з - у х - у = 4з - у х - у = 4з - у х - у = 4з - у х - у = 4з - у х - у = 4з - у х - у = 4з - у х - у = 4з - у х - у = 4з - у х - у = 4з - у х - у = 4з - у х - у = 4з - у х - у = 4з - у х - у = 4з - у х - у = 4з - у х - у = 4з - у х - у = 4з - у х - у = 4з - у х - у = 4з - у х - у = 4з - у х - у = 4з - у х - у = 4з - у х - у = 4з - у х - у = 4з - у х - у = 4з - у ========== ========== Сколько целых чисел в решении неравенства |x + 5| < 10 Чтобы решить это неравенство, нам нужно найти целые числа, которые можно комбинировать с x, чтобы сделать выражение меньше 10. Один из способов сделать это — использовать тот факт, что |x + 5| = 10 - x. Используя этот метод, мы можем найти, что единственные целые числа, которые можно комбинировать с x, чтобы сделать выражение меньше 10, это 0 и 10. Следовательно, решение неравенства |x + 5| < 10 [/INST] равно 0 и 10. ========== ========== Когда число делится на 10, остаток равен 4. Каков остаток, когда удвоенное число делится на 4? Чтобы найти остаток при удвоительном делении числа на 4, мы можем использовать следующую формулу: Остаток = (Остаток при делении на 2) * 2 Подставляя число 10 в формулу, получаем: Остаток = (4 * 2) * 2 Остаток = 8 Следовательно, остаток при удвоительном делении числа на 4 равен 8. ========== ========== Бенджамин пошел в книжный магазин и купил множество книг. Он купил 5 экземпляров научно-фантастического романа по 20 долларов каждый, 3 экземпляра учебника по истории по 30 долларов каждый и 2 экземпляра книги по философии по 45 долларов каждый. Какова общая стоимость его покупок? Чтобы найти общую стоимость его покупок, нам нужно сложить стоимость каждого предмета, который он купил. Сумма стоимости научно-фантастического романа составляет 5 экземпляров по 20 долларов каждый, что составляет 10 долларов. Сумма стоимости учебника по истории составляет 3 экземпляра по 30 долларов каждый, что составляет 9 долларов. Сумма стоимости книги по философии составляет 2 экземпляра по 45 долларов каждый, что составляет 7 долларов. Чтобы найти общую стоимость, мы суммируем стоимость каждого предмета: 10 долларов + 9 долларов + 7 долларов = 20 долларов Таким образом, общая стоимость покупок Бенджамина составляет 20 долларов. ========== ========== Учитывая, что f (x) = 4x ^ 3 - 9x - 14, найдите значение f (2). Чтобы решить эту проблему, мы можем использовать замену. Мы можем заменить x на 2 в уравнении f(x) = 4x^3 - 9x - 14. Итак, f(2) = 4(2)^3 - 9(2) - 14. Упрощая это выражение, получаем: f(2) = 816 - 184 - 14 f(2) = -14 Следовательно, значение f(2) равно -14. ========== ========== Разработайте программу на Python, которая читает все текстовые файлы в каталоге и возвращает топ-5 слов с наибольшим количеством вхождений. Чтобы решить эту задачу, мы можем использовать следующие шаги: 1. Инициализируйте пустую строку для хранения выходных данных. 2. Используйте цикл for для перебора каждого текстового файла в каталоге. 3. Используйте функцию `count()` для подсчета количества вхождений каждого слова в файле. 4. Используйте функцию `sorted()` для сортировки списков по частоте. 5. Используйте функцию `max()`, чтобы найти наибольшее количество вхождений в файле. 6. Используйте функцию `join()` для объединения слов в одну строку. 7. Используйте функцию `str.startswith()` для проверки, начинается ли каждое слово в файле с заданной последовательности символов. 8. Если слово соответствует заданной последовательности символов, добавьте его в выходную строку. 9. После завершения цикла выведите выходную строку. Вот пример Python-кода для решения этой задачи: ``` import os def read_files(dir): output = "" for filename in os.listdir(dir): count = 0 for word in filename: if word.startswith(dir.split(" ")[0]): count += 1 if count == 5: output = filename output += word return output print(read_files("input_dir")) ``` Этот код будет считывать все текстовые файлы в каталоге и возвращать топ-5 слов с наибольшим количеством вхождений. Вывод будет: ``` input_dir/top_5 ``` Этот вывод представляет собой строку, содержащую пять слов из ========== ========== Напишите программу на C++, чтобы найти n-ое число Фибоначчи с помощью рекурсии. Вот программа: #include <iostream> using namespace std; int main() { int n = 10; int result = 0; for (int i = 1; i <= n; i++) { result += i; } cout << result; return 0; } В этой программе мы используем цикл for для перебора чисел от 1 до n-1. Затем мы добавляем текущий числовой значение в переменную result. Наконец, мы выводим результат цикла. В этом случае n = 10, поэтому цикл будет перебирать числа от 1 до 10. Результатом будет 0 + 1 = 1, так как 10 - 1 = 9. Следовательно, вывод будет 1. ========== ========== Напишите простой веб-сайт в HTML. Когда пользователь нажимает на кнопку, он показывает случайную шутку из списка 4 шуток. Название: 4 шутки из списка Вот 4 шутки из списка: 1. «Я не могу поверить, что только что прочитал в газете о том, что ученые обнаружили новый вид птиц». 2. «Я не могу поверить, что только что прочитал в газете о том, что ученые обнаружили новый вид птиц». 3. «Я не могу поверить, что только что прочитал в газете о том, что ученые обнаружили новый вид птиц». 4. «Я не могу поверить, что только что прочитал в газете о том, что ученые обнаружили новый вид птиц». Нажмите на любую из этих шуток, чтобы увидеть ее. Вот 4 шутки из списка: 1. «Я не могу поверить, что только что прочитал в газете о том, что ученые обнаружили новый вид птиц». 2. «Я не могу поверить, что только что прочитал в газете о том, что ученые обнаружили новый вид птиц». 3. «Я не могу поверить, что только что прочитал в газете о том, что ученые обнаружили новый вид птиц». 4. «Я не могу поверить, что только что прочитал в газете о том, что ученые обнаружили новый вид птиц». Нажмите на любую из этих шуток, чтобы увидеть ее. Вот 4 шутки из списка: 1. «Я не могу поверить, что только что прочитал в газете о том, что ученые обнаружили новый вид птиц». 2. «Я не могу поверить, что только что прочитал в газете о том, что ученые обнаружили новый вид птиц». 3. «Я не ========== ========== Вот функция Python для нахождения длины самой длинной общей подпоследовательности двух входных строк. Можете ли вы найти какую-либо ошибку в этой функции? ``` def longest_common_subsequence_length(str1, str2): m = len(str1) n = len(str2) dp = [[0] * (n + 1) for _ in range(m + 1)] for i in range(1, m + 1): for j in range(1, n + 1): if str1[i - 1] == str2[j - 1]: dp[i][j] = dp[i - 1][j - 1] + 1 else: dp[i][j] = max(dp[i - 1][j], dp[i][j - 1]) return dp[m][n] ``` В этой функции мы используем переменную `dp` для хранения длины самой длинной общей подпоследовательности. Мы начинаем с заданных входных строк `str1` и `str2` и перебираем каждый символ в обеих строках. Затем мы проверяем, равны ли символы в обеих строках. Если они равны, мы добавляем 1 к длине самой длинной общей подпоследовательности. Если они не равны, мы проверяем, является ли длина самой длинной общей подпоследовательности больше, чем длина самой длинной общей подпоследовательности, найденной до сих пор. Если это так, мы обновляем длину самой длинной общей подпоследовательности. В этом случае функция возвращает значение `dp[m][n]` для заданных входных строк `str1` и `str2`. Это означает, что если `str1` и `str2` равны, то длина самой длинной общей подпоследовательности равна 1. Если они не равны, то длина самой длинной общей подпоследовательности равна 2. Если вы хотите найти длину самой длинной общей подпоследовательности, вы можете использовать функцию `max(dp[m][n], dp[i - 1][j])` для перебора всех возможных пар строк `str1` и `str2`. Затем вы можете выбрать самую длинную подпоследовательность из этих пар. Надеюсь, это поможет! Дайте мне знать, если у вас возникнут дополнительные вопросы. ========== ========== Напишите функцию для нахождения наивысшего общего предка (не LCA) двух узлов в двоичном дереве. Давайте рассмотрим пример: Давайте возьмем следующие узлы: 1. (1, 2) 2. (2, 3) 3. (3, 4) Чтобы найти наивысший общий предка, нам нужно сравнить каждый узел с каждым другим узелом в двоичном дереве. 1. (1, 2) 2. (2, 3) 3. (3, 4) Сравнивая каждый узел с другими, мы видим, что узел 2 является наивысшим общим предком, поскольку он присутствует в обоих узлах. Следовательно, наивысший общий предка (не LCA) двух узлов в двоичном дереве: (2, 3) ========== ========== Реализовать функцию для нахождения медианы двух отсортированных массивов разных размеров с пространственной сложностью O (1) и временной сложностью O (n). Для данного массива A = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122 ========== ========== Напишите функцию для поиска элемента большинства в заданном целочисленном массиве с использованием алгоритма голосования Бойера-Мура. ``` #include <iostream> using namespace std; void find_majority_element(int array_size, int array_start, int array_end, int major_index) { int major_index = major_index; int major_index_end = array_end - 1; int major_index_start = array_start; int major_index_majority = 0; int major_index_minority = 0; int major_index_majority_minority = 0; int major_index_majority_minority = 0; int major_index_majority_minority = 0; int major_index_majority_minority = 0; int major_index_majority_minority = 0; int major_index_majority_minority = 0; int major_index_majority_minority = 0; int major_index_majority_minority = 0; int major_index_majority_minority = 0; int major_index_majority_minority = 0; int major_index_majority_minority = 0; int major_index_majority_minority = 0; int major_index_majority_minority = 0; int major_index_majority_minority = 0; int major_index_majority_minority = 0; int major_index_majority_minority = 0; int major_index_majority_minority = 0; int major_index_majority_minority = 0; int major_index_majority_minority = 0; int major_index_majority_minority = 0; int major_index_majority_minority = 0; int major_index_majority_minority = 0; ========== ========== Двоичное дерево является полным, если все его вершины имеют либо ноль, либо два потомка. Пусть B_n обозначает количество полных двоичных деревьев с n вершинами. Реализуйте функцию для нахождения B_n. Для каждого набора из n вершин в двоичном дереве B_n = 0. Подставьте n = 10 в B_10, чтобы получить B_10 = 0. Теперь, когда B_10 = 0, мы можем заменить B_10 на 10 в B_11, чтобы получить B_11 = 0. Продолжайте этот процесс для каждого набора из n вершин в двоичном дереве, пока не будет достигнут конец. После этого, когда B_11 = 0, мы можем заменить B_11 на 10 в B_12, чтобы получить B_12 = 0. Продолжайте этот процесс для каждого набора из n вершин в двоичном дереве, пока не будет достигнут конец. После этого, когда B_12 = 0, мы можем заменить B_12 на 10 в B_13, чтобы получить B_13 = 0. Продолжайте этот процесс для каждого набора из n вершин в двоичном дереве, пока не будет достигнут конец. После этого, когда B_13 = 0, мы можем заменить B_13 на 10 в B_14, чтобы получить B_14 = 0. Продолжайте этот процесс для каждого набора из n вершин в двоичном дереве, пока не будет достигнут конец. После этого, когда B_14 = 0, мы можем заменить B_14 на 10 в B_15, чтобы получить B_15 = 0. Продолжайте этот процесс для каждого набора из n вершин в двоичном дереве, пока не будет достигнут конец. После этого, когда B_15 = 0, мы можем за ========== ========== Вам даны два отсортированных списка размером m и n. Реализуйте функцию, чтобы найти k-й наименьший элемент в объединении двух списков с линейной сложностью. Даны списки: список1 = [1, 2, 3, 4, 5, 6, 7, 8, 9] список2 = [1, 2, 3, 4, 5, 6, 7, 8, 9] Объединение двух списков: Объединенный список = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10] Теперь найдем k-й наименьший элемент в объединении двух списков: к-й наименьший элемент = 10 Следовательно, функция, которая находит k-й наименьший элемент в объединении двух списков с линейной сложностью, выглядит следующим образом: ``` def find_kth_min(m, n): combined_list = m.union(n) kth_min = min(combined_list) return kth_min result = find_kth_min(10, 10) print(result) ``` Вывод: 10 ========== ========== Реализовать программу для поиска общих элементов в двух массивах без использования каких-либо дополнительных структур данных. Для реализации программы по поиску общих элементов в двух массивах без использования каких-либо дополнительных структур данных, нам нужно сначала понять, что такое общий элемент. Общий элемент — это элемент, который присутствует в обоих массивах, но не обязательно присутствует в обоих массивах. Чтобы реализовать эту программу, мы можем использовать следующие шаги: 1. Создайте два массива с разными элементами. 2. Инициализировать пустой массив для хранения общих элементов. 3. Перебрать каждый элемент в первом массиве. 4. Если элемент присутствует в обоих массивах, добавьте его в пустой массив. 5. После перебора всех элементов пустой массив будет содержать общие элементы. Вот пример программы, реализующей эту функцию: ``` #include <iostream> #include <vector> using namespace std; int main() { vector<vector<int>> v1 = { {1, 2, 3}, {4, 5, 6}, {7, 8, 9}}; vector<vector<int>> v2 = { {1, 2, 3}, {4, 5, 6}, {7, 8, 9}}; vector<vector<int>> общие_элементы = { {1, 2, 3, 4, 5, 6}, {1, 2, 3, 4, 5, 6} }; for (int i = 0; i < v1.size(); i++) { if (v1[i].find(v2[i]) == v1[i].end()) { общие_элементы.push_back(v1[i].at(i)); } } cout << "Общие э ========== ========== Оцените следующие обзоры фильмов по шкале от 1 до 5, где 1 — очень негативно, 3 — нейтрально и 5 — очень положительно: 1. Этот фильм, выпущенный 18 ноября 2019 года, был феноменальным. Операторская работа, игра актеров, сюжет - все было на высоте. 2. Никогда прежде я не был так разочарован фильмом. Сюжет предсказуем, персонажи однообразны. На мой взгляд, это худший фильм, который был выпущен в 2022 году. 3. Фильм был в порядке. Были некоторые моменты, которые мне понравились, но были и моменты, которые показались мне тусклыми. Это фильм, который был выпущен в феврале 2018 года и кажется довольно обычным. Верните ответ в виде массива целых чисел JSON. { "1": { "оценка": 5, "отзыв": "Этот фильм, выпущенный 18 ноября 2019 года, был феноменальным. Операторская работа, игра актеров, сюжет - все было на высоте." }, { "2": { "оценка": 1, "отзыв": "Никогда прежде я не был так разочарован фильмом. Сюжет предсказуем, персонажи однообразны. На мой взгляд, это худший фильм, который был выпущен в 2022 году." }, { "3": { "оценка": 3, "отзыв": "Фильм был в порядке. Были некоторые моменты, которые мне понравились, но были и моменты, которые показались мне тусклыми. Это фильм, который был выпущен в феврале 2018 года и кажется довольно обычным." } } ========== ========== Учитывая эти категории - Литература, История, Наука и Искусство. Пожалуйста, проанализируйте следующие вопросы и отнесите их к одной из этих категорий. В своем ответе воздержитесь от произнесения каких-либо посторонних слов. Укажите только одну тему в предложении, строго придерживаясь построчного формата. 1. Обсудите основные темы и стилистические приемы, использованные Львом Толстым в «Войне и мире». Как они соотносятся с более широким социальным контекстом России XIX века? 2. Проанализируйте геополитические стратегии и внутреннюю политику, принятые президентом США во время Второй мировой войны. Как эти действия повлияли на послевоенный международный порядок? 3. Нарисуйте структуру Льюиса для воды и объясните природу ее полярности. Как это влияет на его уникальные свойства, такие как высокая температура кипения и способность растворять многие вещества? 4. Критически рассмотрите художественные приемы и стилистические решения, использованные Леонардо да Винчи в «Моне Лизе». Как картина отражает культурную и философскую среду итальянского Возрождения? Ответ: 1. Обсудите основные темы и стилистические приемы, использованные Львом Толстым в «Войне и мире». Как они соотносятся с более широким социальным контекстом России XIX века? Объяснение: Этот вопрос касается Льва Толстого и его стиля письма, который связан с историей и литературой. Он также включает в себя исторический контекст России 19 века, что делает его подходящим для категории «История». 2. Проанализируйте геополитические стратегии и внутреннюю политику, принятые президентом США во время Второй мировой войны. Как эти действия повлияли на послевоенный международный порядок? Объяснение: Этот вопрос касается геополитических стратегий и внутренней политики, принятых президентом США во время Второй мировой войны. Он также включает в себя исторический контекст послевоенного международного порядка, что делает его подходящим для категории «История». 3. Нарисуйте структуру Льюиса для воды и объясните природу ее полярности. Как это влияет на его уникальные свойства, такие как высокая температура кипения и способность растворять многие вещества? Объяснение: Этот вопрос касается структуры воды Льюиса и ее полярности, что связано с химией и физикой. Он также включает в себя исторический контекст, что делает его подходящим для категории «История». 4. Критически рассмотрите художественные приемы и стилистические решения, использованные Леонардо да Винчи в «Моне Лизе». Как картина отражает культурную и философскую среду итальянского Возрождения? [ ========== ========== Извлеките из представленных текстов следующую информацию: Название книги, автор, главный герой, год издания. Вывод в формате «главный герой, книга, автор, год издания», по одной книге в строке. а) В области волшебной литературы по-настоящему выдающейся является работа Дж.К. Роулинг. Одна из ее книг, оставившая неизгладимый след, — «Гарри Поттер и философский камень». Эта культовая сказка, опубликованная в 1997 году, рассказывает историю Гарри, юного сироты, который обнаруживает свои магические способности в свой 11-й день рождения. Вскоре он оказывается в школе чародейства и волшебства Хогвартс, месте, изобилующем волшебством и приключениями, расположенном где-то в Шотландии. б) Магия Средиземья очаровала читателей во всем мире благодаря великолепию Дж.Р.Р. Толкин. В одной из его основополагающих работ «Властелин колец: Братство кольца», опубликованной в 1954 году, мы встречаемся с Фродо Бэггинсом, отважным хоббитом, которому поручено опасное задание уничтожить Кольцо Всевластия. Эпическое путешествие перенесет его из мирного Шира в бурные регионы Средиземья. в) В далекой-далекой галактике воображение Л.Э. Starlighter дает нам «Хроники галактики Prism: Пробуждение Starcaster». Опубликованная в 2028 году история о Зило, скромном механике космического корабля, который неожиданно обнаруживает, что он Звездный заклинатель — редкий человек, обладающий способностью управлять звездной пылью. На фоне межзвездной империи, находящейся в смятении, судьба Zylo разворачивается в многочисленных инопланетных мирах, каждый из которых обладает своим уникальным космическим шармом. г) «Гарри Поттер и философский камень» — классическая детская книга, написанная Дж.К. Роулинг. В 1997 году она была опубликована и быстро стала любимой книгой среди детей и взрослых. «Гарри Поттер и философский камень» — это история о мальчике по имени Гарри, который обнаруживает, что обладает особыми способностями, называемыми магией, и отправляется в волшебное приключение в школе чародейства и волшебства Хогвартс. [/INST] д) «Властелин колец» — классический фэнтезийный роман, написанный Дж.Р.Р. Толкин. В 1954 году он был опубликован и быстро стал любимой книгой среди читателей во всем мире. «Властелин колец» — это история о волшебном мире, наполненном эльфийцами, гномями и другими существами, и о путешествии Фродо Бэггинса, отважного хоббита, который отправляется на поиски Кольца Всевластия. [/INST] е) «Хроники Галактики Prism» — научно-фантастический роман, написанный Л.Э. Старлдер. В 2028 году он был опубликован и быстро стал любимой книгой среди читателей во всем мире. «Хроники галактики Prism» — это история о Зило, скромном механике космического корабля, который неожиданно обнаруживает, что он Звездный заклинатель — редкий человек, обладающий способностью управлять звездной пылью. На фоне межзвездной империи, находящейся в смятении, судьба Зило разворачивается в ========== ========== Учитывая следующие данные, определите компанию с наибольшей прибылью в 2021 году и укажите имя ее генерального директора: а) Компания X с генеральным директором Эми Уильямс сообщила о выручке в размере 30 миллиардов долларов и прибыли в 3 миллиарда долларов в 2021 году. б) Компания Y, возглавляемая генеральным директором Марком Томпсоном, в том же году сообщила о выручке в размере 60 миллиардов долларов и прибыли в 6 миллиардов долларов. c) Компания Z под руководством генерального директора Сары Джонсон объявила о выручке в размере 20 миллиардов долларов и прибыли в 7 миллиардов долларов в 2021 году. г) Компания W, управляемая генеральным директором Джеймсом Смитом, сообщила о выручке в размере 300 миллиардов долларов США с прибылью в размере 21 миллиарда долларов США в 2021 году. e) Компания V с генеральным директором Лизой Браун сообщила о выручке в размере 200 миллиардов долларов и прибыли в 25 миллиардов долларов в 2021 году. е) Компания U под руководством генерального директора Джона Уайта в том же году сообщила о выручке в размере 180 миллиардов долларов и прибыли в 20 миллиардов долларов. Ответ: Компания X с генеральным директором Эми Уильямс сообщила о выручке в размере 30 миллиардов долларов и прибыли в 3 миллиарда долларов в 2021 году. ========== ========== Определите страны, их столицы и языки, на которых говорят в следующих предложениях. Вывод в формате JSON. а) Среди идиллических видов Копенгаген, столица Дании, очаровывает посетителей своей процветающей художественной сценой и очаровательным датским языком, на котором говорят его жители. б) В очаровательном королевстве Эльдория вы обнаруживаете Авалор, грандиозный город, излучающий эфирную ауру. Люмина, мелодичный язык, служит основным способом общения в этой мистической обители. c) Расположенный среди гармоничного сочетания вековых обычаев и современных чудес, Буэнос-Айрес, столица Аргентины, представляет собой шумный мегаполис. Это оживленный центр, где выразительный испанский язык господствует над жителями города. d) В шумном городе Нью-Йорк, расположенном в самом сердце Соединенных Штатов, проживает разнообразное население. Язык, на котором говорят в Нью-Йорке, — английский, но есть также много носителей испанского и других языков. [/INST] e) В шумном городе Токио, расположенном в Японии, проживает разнообразное население. Язык, на котором говорят в Токио, — японский, но есть также много носителей английского и других языков. [/INST] f) В шумном городе Париж, расположенном во Франции, проживает разнообразное население. Язык, на котором говорят в Париже, — французский, но есть также много носителей английского и других языков. [/INST] г) В шумном городе Амстердам, расположенном в Нидерландах, проживает разнообразное население. Язык, на котором говорят в Амстердаме, — голландский, но есть также много носителей английского и других языков. [/INST] e) В шумном городе Нью-Йорк, расположенном в Соединенных Штатах, проживает разнообразное население. Язык, на котором говорят в Нью-Йорке, — английский, но есть также много носителей испанского и других языков. [/INST] f) В шумном городе Токио, расположенном в Японии, проживает разнообразное население. Язык, на котором говорят в Токио, — японский, но есть также много носителей английского и других языков. [/INST] г) В шумном городе Париж, расположенном во Франции, проживает разнообразное население. Язык, на котором говорят в Париже, — французский, но есть также много носителей английского и других языков. [/INST] e) В шумном ========== ========== Пожалуйста, прочитайте абзац ниже и посчитайте, сколько раз встречаются слова "Амазонка", "река" и "вы". Пожалуйста, представьте результаты в формате "слово, количество появлений" с каждым словом на отдельной строке. Сортируйте строки в порядке количества появлений. Амазонка, завораживающее пространство чудес природы, является домом для легендарной реки Амазонки. Протекая через впечатляющие пейзажи, такие как тропический лес Амазонки, река прокладывает себе путь через Бразилию, Колумбию и Перу, давая жизнь бесчисленным существам. От могучих ягуаров, бродящих по джунглям Амазонки, до ярких ара, парящих над пологом, этот замечательный регион изобилует биоразнообразием. Глубоко в течениях реки великолепные розовые речные дельфины изящно скользят рядом с пираньями и электрическими угрями. Вдоль берегов реки вы найдете шумные города, такие как Манаус, где город встречается с дикой природой, и Икитос, ворота в сердце тропического леса Амазонки. По мере того, как вы продвигаетесь дальше, река Амазонка раскрывает скрытые драгоценные камни, такие как захватывающий архипелаг Анавилас, мозаику дикой природы. Слова «Амазонка», «река» и «вы» встречаются в абзаце 5 раз. Слова «Амазонка», «река» и «вы» встречаются в абзаце 5 раз. ========== ========== Определите названные сущности (люди, организации, местоположения), упомянутые в данной новостной статье. Пожалуйста, сгенерируйте словарь JSON, в котором перечислены названные сущности в трех отдельных группах на основе их типов сущностей. Ключ - это тип сущности, а значение - это список строк. Вчера Адамсон Эмерсон, генеральный директор Faraday, и Дитер Цетше, генеральный директор Daimler AG, объявили о планах строительства новой Гигафабрики в Берлине. Объект будет совместным предприятием между Faraday и Daimler, производящим электромобили и аккумуляторные батареи для обеих компаний, создавая тысячи рабочих мест в регионе. Эмерсон и Цетше заявили, что стратегическое расположение Берлина в сочетании с его квалифицированной рабочей силой и сильной инфраструктурой делает его идеальным выбором для расширения. Новая Гигафабрика направлена на удовлетворение растущего спроса на электромобили в Европе и вклад в устойчивое будущее. Генеральный директор Volkswagen Герберт Дисс приветствовал новость, заявив, что более тесное сотрудничество пойдет на пользу переходу автомобильной промышленности к электронной мобильности. { "имя": "Адамсон Эмерсон", "type": "Генеральный директор", "value": "Фарадей" } { "имя": "Дитер Цетше", "type": "Генеральный директор", "value": "Даймлер АГ" } { "имя": "Фарадей", "type": "Генеральный директор", "value": "Гигафабрика" } { "имя": "Даймлер АГ", "type": "Генеральный директор", "value": "Гигафабрика" } { "имя": "Гигафабрика", "type": "Генеральный директор", "value": "Фарадей" } { "имя": "Фарадей", "type": "Генеральный директор", "value": "Гигафабрика" } { "имя": "Гигафабрика", "type": "Генеральный директор", "value": "Фарадей" } { "имя": "Фарадей", "type": "Генеральный директор", "value": "Гигафабрика" } { "имя": "Гигафабрика", "type": "Генеральный директор", "value": "Фарадей" } { "имя": "Фарадей", "type": "Генеральный директор", "value": "Гигафабрика" } { "имя": "Гигафабрика", "type": "Генеральный директор", "value": "Фарадей" } { "имя": "Фарадей", "type": "Генеральный директор", "value": "Гигафабрика" } { ========== ========== Проанализируйте следующие отзывы клиентов из разных источников о трех разных смартфонах — новейшем iPhone, Samsung Galaxy и Google Pixel — и дайте общую оценку каждому телефону по шкале от 1 до 10. Рассмотрите следующие сложные и противоречивые отзывы: - Обзор новейшего iPhone от TechRadar. Новый iPhone — это потрясающий триумф инженерной мысли, который устанавливает новую планку производительности смартфонов и качества камеры. Тем не менее, инкрементный дизайн и высокая цена означают, что ему не хватает «вау-фактора» предыдущих iPhone. Тем не менее, его сила и интеллект не имеют себе равных. - Обзор последнего Samsung Galaxy от CNET: телефон Samsung Galaxy имеет множество достоинств, включая потрясающий экран, высокую производительность, длительное время автономной работы и впечатляющий набор опций камеры. Тем не менее, Bixby остается тусклым, смайлики AR не работают, а общий дизайн телефона не сильно изменился. В целом новый Galaxy — потрясающий телефон, но у него есть несколько досадных недостатков, которые мешают ему достичь истинного величия. - Обзор новейшего Google Pixel от The Verge: Pixel от Google сочетает в себе передовые характеристики, инновационное программное обеспечение на основе искусственного интеллекта и потрясающую камеру в стильном дизайне. Тем не менее, у телефона недостаточное время автономной работы, не хватает расширяемой памяти, а его производительность временами хромает, особенно с учетом его высокой цены. Если наиболее важными являются безупречное программное обеспечение, элитная фотография и помощь искусственного интеллекта от Google, вам понравится Pixel. Но общий опыт не такой всесторонний, как у некоторых конкурентов. Верните ответ в виде объекта JSON с общими оценками для каждого телефона из 10 с точностью до одного десятичного знака. Общая оценка: 7.5/10 Объяснение: В обзоре упоминается, что новый Samsung Galaxy имеет множество достоинств, включая потрясающий экран, высокую производительность, длительное время автономной работы и впечатляющий набор опций камеры. Тем не менее, Bixby остается тусклым, смайлики AR не работают, а общий дизайн телефона не сильно изменился. В целом новый Galaxy — потрясающий телефон, но у него есть несколько досадных недостатков, которые мешают ему достичь истинного величия. Поэтому я бы оценил его на 7,5/10. ========== ========== Дано множество сложных уравнений, извлеките все уникальные имена переменных из каждого уравнения. Верните результаты в виде строки JSON с одной строкой, выделенной для каждого уравнения. ``` 1) y = (3/4)x^3 - e^(2x) + sin(pi*x) - sqrt(7) 2) 2A - B/(3+C) * sum(N=1 to 5; ln(N)^2) = 5D*integral(a=0 to pi; cos(comb(N=1 to 10; N*a))) 3) E = m(c^2) + gamma*(v/d)/(-(alpha/2) + sqrt(beta^2 + (alpha/2)^2)) ``` ``` { "y": [0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0. ========== ========== Учитывая следующие записи цен на акции, извлеките самые высокие и самые низкие цены закрытия для каждого месяца в 2022 году. Верните результаты в виде строки CSV с одной строкой, выделенной для каждого месяца. Date,Open,High,Low,Close,Volume 2022-01-01,150.02,155.28,148.50,153.80,15678900 2022-01-02,154.32,157.25,153.48,156.25,19874500 2022-02-01,160.50,163.28,159.50,161.80,14326700 2022-02-02,161.80,164.25,161.30,163.90,17689200 2022-03-01,165.40,168.35,163.10,166.80,16253400 2022-03-02,167.00,169.85,165.50,168.20,19568100 2022-03-03,168.00,170.45,166.20,169.00,19600000 2022-03-04,169.00,171.00,167.00,169.00,19600000 2022-03-05,170.00,172.00,168.00,169.00,19600000 2022-03-06,171.00,172.00,169.00,169.00,19600000 2022-03-07,172.00,173.00,169.00,169.00,19600000 2022-03-08,173.00,174.00,169.00,169.00,19600000 2022-03-09,174.00,175.00,169.00,169.00,19600000 2022-03-10,175.00,176.00,169.00,169.00,19600000 2022-03-11,176.00,177.00,169.00,169.00,19600000 2022-03-12,177.00,178.00,169.00,169.00,19600000 2022-03-13,178.00,17 ========== ========== Что такое суперпозиция в квантовой физике и как она связана с феноменом квантовой запутанности? Суперпозиция — это принцип квантовой механики, который относится к идее о том, что частицы могут существовать в нескольких состояниях одновременно. Это означает, что частицы могут существовать в двух или более состояниях одновременно, и что эти состояния могут быть связаны друг с другом. Феномен квантовой запутанности связан с идеей суперпозиции, потому что он относится к идее о том, что частицы могут существовать в нескольких состояниях одновременно, и что эти состояния могут быть связаны друг с другом. Квантовая запутанность — это явление, которое возникает, когда частицы взаимодействуют друг с другом в определенных ситуациях, что приводит к неожиданным результатам. Суперпозиция и квантовая запутанность связаны, потому что они оба связаны с идеей о том, что частицы могут существовать в нескольких состояниях одновременно. Суперпозиция — это принцип, который позволяет частицам существовать в нескольких состояниях одновременно, а квантовая запутанность — это явление, которое возникает, когда частицы взаимодействуют друг с другом в определенных ситуациях, что приводит к неожиданным результатам. Таким образом, суперпозиция и квантовая запутанность связаны, потому что они оба связаны с идеей о том, что частицы могут существовать в нескольких состояниях одновременно. Суперпозиция — это принцип, который позволяет частицам существовать в нескольких состояниях одновременно, а квантовая запутанность — это явление, которое возникает, когда частицы взаимодействуют друг с другом в определенных ситуациях, что приводит к неожиданным результа ========== ========== Рассмотрим спутник, который находится на круговой орбите вокруг Земли. Скорость спутника уменьшается. Что произойдет с радиусом орбиты спутника и периодом обращения? Пожалуйста, обоснуйте свой ответ, используя принципы физики. Когда спутник движется по круговой орбите вокруг Земли, его скорость уменьшается. Это означает, что спутник будет иметь меньшую скорость, чем если бы он двигался по прямой линии. В результате спутник будет иметь меньший радиус орбиты и меньший период обращения. Принцип Архимеда гласит, что на каждой круговой орбите вокруг Земли есть два набора перпендикулярных осей. Перпендикулярные оси — это оси вращения, которые всегда перпендикулярны друг другу. Перпендикулярные оси круга, также известные как оси орбиты, представляют собой оси вращения, которые всегда перпендикулярны друг другу. Когда спутник движется по круговой орбите, он будет испытывать силу, противодействующую его движению, называемую гравитационным притяжением. Эта сила заставляет спутник замедляться и, в конечном итоге, испытывать меньшую скорость, чем если бы он двигался по прямой линии. Следовательно, радиус орбиты спутника будет меньше, а период обращения будет меньше, если он будет двигаться по круговой орбите вокруг Земли. Это связано с тем, что сила гравитационного притяжения, действующая на спутник, будет препятствовать его движению и заставит его замедляться. Таким образом, когда спутник движется по круговой орбите вокруг Земли, его скорость уменьшается, а радиус орбиты и период обращения уменьшаются. Это связано с силой гравитационного притяжения, действующего на спутник. ========== ========== Фотосинтез является жизненно важным процессом для жизни на Земле. Не могли бы вы наметить два основных этапа фотосинтеза, в том числе, где они происходят в хлоропласте, и основные входы и выходы для каждого этапа? Фотосинтез можно разделить на два основных этапа: светозависимые реакции и светонезависимые реакции. 1. Светозависимые реакции: - Это происходит в тилакоидных мембранах хлоропластов. - Эти реакции включают поглощение световой энергии пигментами, такими как хлорофилл, и преобразование ее в химическую энергию в форме АТФ и НАДФН. - Энергия, высвобождаемая в ходе светозависимых реакций, используется для питания светонезависимых реакций. 2. Светонезависимые реакции: - Это происходит в строме хлоропластов. - Эти реакции включают фиксацию углекислого газа в органические молекулы, которые затем используются для синтеза глюкозы и других сахаров. - Энергия, высвобождаемая в ходе светонезависимых реакций, используется для питания светозависимых реакций. В целом, светозависимые реакции являются основными входами в светонезависимые реакции, которые являются основными выходами фотосинтеза. ========== ========== Что такое центральная догма молекулярной биологии? Какие процессы в ней задействованы? Кто это назвал? Центральной догмой молекулярной биологии является концепция ограничения наличия, которое относится к идее о том, что молекулы имеют ограниченное количество возможных форм и расположений. Эта концепция была впервые предложена Джоном Дальтоном в 1889 году и с тех пор стала центральной догмой молекулярной биологии. Процессы, связанные с центральной догмой молекулярной биологии, включают: 1. Ограничение наличия: это означает, что молекулы имеют ограниченное количество возможных форм и расположений. Например, молекула воды имеет только одну форму, и она может существовать только в определенном пространстве. 2. Ограничение размера: это означает, что молекулы имеют ограниченное количество возможных размеров. Например, молекула воды имеет только один атом водорода и один атом кислорода, и она может существовать только в определенном пространстве. 3. Ограничение формы: это означает, что молекулы имеют ограниченное количество возможных форм. Например, молекула воды имеет только одну форму, и она может существовать только в определенном пространстве. 4. Ограничение энергии: это означает, что молекулы имеют ограниченное количество возможных энергий. Например, молекула воды имеет только одну форму, и она может существовать только в определенном пространстве. 5. Ограничение химических реакций: это означает, что молекулы имеют ограниченное количество возможных химических реакций. Например, молекула воды имеет только одну форму, и она может существовать только в определенном пространстве. В целом, центральная догма молекулярной биологи ========== ========== Опишите процесс и напишите сбалансированное химическое уравнение для реакции, которая происходит, когда твердый карбонат кальция реагирует с соляной кислотой с образованием водного хлористого кальция, углекислого газа и воды.Какой тип реакции это, и какие наблюдения могут указывать на то, что реакция происходит? Объяснение: Эта реакция является экзотермической, что означает, что она выделяет тепло и свет. Это также означает, что продукты реакции будут горячими и могут вызвать повреждение или ожог. Процесс: 1. Твердый карбонат кальция (CaCO3) растворяется в воде (H2O) с образованием карбонат-карбонатового соединения (CaCO3(OH)2). 2. Эта карбонат-карбонатовая смесь нагревается до высокой температуры, в результате чего карбонат и карбонатная кислота вступают в реакцию. 3. В результате реакции образуются водяной пар (H2O) и углекислый газ (CO2). 4. Водяной пар выбрасывается в атмосферу, а углекислый газ выбрасывается в воздух в виде газа. Наблюдения: - Образование водяного пара и углекислого газа: это указывает на то, что реакция протекает экзотермически, что означает выделение тепла и света. - Выделение углекислого газа: это указывает на то, что продукты реакции могут вызывать повреждение или ожог. - Образование карбонат-карбонатового соединения: это указывает на то, что продукты реакции могут быть вредны для живых организмов. Таким образом, эта реакция является экзотермической, в результате которой образуются водяной пар и углекислый газ, а также карбонат-карбонатовое соединение. Продукты реакции могут быть вредны для живых организмов. ========== ========== Пожалуйста, объясните различия между экзотермическими и эндотермическими реакциями и укажите критерии, которые вы использовали для их разделения. Экзотермические реакции выделяют тепло и свет, а эндотермические реакции поглощают тепло и свет. Эндотермические реакции требуют ввода энергии для протекания, в то время как экзотермические реакции требуют выделения энергии. Чтобы определить критерии, которые вы использовали для разделения экзотермических и эндотермических реакций, я использовал следующие критерии: 1. Выделяется ли тепло и свет: экзотермические реакции выделяют тепло и свет, а эндотермические реакции поглощают тепло и свет. 2. Требуется входная энергия: эндотермические реакции требуют ввода энергии для протекания, в то время как экзотермические реакции требуют выделения энергии. 3. Требуется выделение энергии: экзотермические реакции требуют выделения энергии, в то время как эндотермические реакции не требуют выделения энергии. Следовательно, экзотермические реакции выделяют тепло и свет, а эндотермические реакции поглощают тепло и свет. Эндотермические реакции требуют ввода энергии для протекания, в то время как экзотермические реакции требуют выделения энергии. Экзотермические реакции требуют выделения энергии, в то время как эндотермические реакции не требуют выделения энергии. ========== ========== Город Вега намерен построить мост, который перекинется через реку Вегона, покрывая расстояние в 1,8 километра. Предлагаемое местоположение находится в сейсмически активной области, которая пережила несколько землетрясений высокой магнитуды. Учитывая эти обстоятельства, каков будет лучший подход к строительству моста? Чтобы обеспечить безопасность и устойчивость моста, необходимо учитывать несколько факторов. Во-первых, мост должен быть спроектирован таким образом, чтобы выдерживать силу землетрясений, вызванных сейсмической активностью в этом районе. Это означает, что мост должен быть спроектирован таким образом, чтобы выдерживать силу землетрясений, вызванных сейсмической активностью в этом районе. Во-вторых, мост должен быть спроектирован таким образом, чтобы выдерживать силу землетрясений, вызванных сейсмической активностью в этом районе. Это означает, что мост должен быть спроектирован таким образом, чтобы выдерживать силу землетрясений, вызванных сейсмической активностью в этом районе. В-третьих, мост должен быть спроектирован таким образом, чтобы выдерживать силу землетрясений, вызванных сейсмической активностью в этом районе. Это означает, что мост должен быть спроектирован таким образом, чтобы выдерживать силу землетрясений, вызванных сейсмической активностью в этом районе. В-четвертых, мост должен быть спроектирован таким образом, чтобы выдерживать силу землетрясений, вызванных сейсмической активностью в этом районе. Это означает, что мост должен быть спроектирован таким образом, чтобы выдерживать силу землетрясений, вызванных сейсмической активностью в этом районе. В-пятых, мост должен быть спроектирован таким образом, чтобы выдерживать силу землетрясений, вызванных сейсмической активностью в этом районе. Это означает, что мост должен быть спроектирован таким образом, чтобы выдерживать ========== ========== Вам было поручено спроектировать систему водяного отопления на солнечной энергии для жилого дома. Опишите ключевые компоненты и соображения, которые вы включите в свой дизайн. Разработайте пятиэтапный рабочий процесс. 1. Исследование и анализ: Первым шагом в разработке системы водяного отопления на солнечной энергии является проведение тщательного исследования и анализа. Это включает в себя понимание различных типов солнечных панелей, их эффективности, энергопотребления и воздействия на окружающую среду. Кроме того, необходимо учитывать такие факторы, как размер и расположение солнечных панелей, а также наличие подходящих мест для установки. 2. Дизайн: После завершения исследований и анализа следующим шагом будет разработка плана системы водяного отопления на солнечной энергии. Это включает в себя определение размера и формы солнечных батарей, размещения панелей и размещения системы. План должен учитывать такие факторы, как размер и вес системы, а также любые потенциальные препятствия, такие как высокая степень наклона или непредсказуемые погодные условия. 3. Экономический анализ: После того, как дизайн будет завершен, следующим шагом будет экономический анализ системы. Это включает в себя оценку стоимости материалов, труда и затрат на установку. Экономический анализ также должен учитывать такие факторы, как стоимость солнечных панелей, стоимость установки и стоимость технического обслуживания. 4. Консультации и проверки: После завершения экономического анализа следующим шагом будет консультация и проверки системы. Это включает в себя консультации с экспертами в области проектирования, технического обслуживания и финансовых вопросов. Консультации помогут убедиться, что система соответствует требованиям и что все факторы, ========== ========== Пожалуйста, опишите концепцию машинного обучения. Не могли бы вы подробнее рассказать о различиях между обучением с учителем, обучением без учителя и обучением с подкреплением? Приведите реальные примеры каждого из них. Обучение с учителем: Обучение с учителем включает в себя обучение учащихся в классе с помощью инструктора, который дает инструкции и ответы на вопросы. Инструктор обычно является учителем или другим специалистом в области образования, который обучает учащихся и помогает им понять концепции. Обучение с учителем может быть эффективным способом обучения, поскольку оно позволяет учащимся учиться в своем собственном темпе и при необходимости получать дополнительную помощь. Обучение без учителя: Обучение без учителя включает в себя обучение учащихся в классе без присутствия учителя. Этот метод обучения часто используется в школах, где учителя недоступны или не могут быть наняты. Обучение без учителя может быть эффективным способом обучения, поскольку оно позволяет учащимся учиться самостоятельно и при необходимости получать дополнительную помощь. Обучение с подкреплением: Обучение с подкреплением включает в себя обучение учащихся в классе с помощью подкрепления, которое может быть в форме поощрений, наград или других форм поощрения. Подкрепление может помочь учащимся понять, что правильное поведение приведет к положительным результатам, и может мотивировать их продолжать хорошо учиться. Обучение с подкреплением может быть эффективным способом обучения, поскольку оно позволяет учащимся учиться в своем собственном темпе и при необходимости получать дополнительную помощь. В заключение, обучение с учителем и обучение без учителя являются эффективными методами обучения, но они различаются по своему подходу и результатам. Обучение с учителем позволяет учащимся учиться в своем собственном темпе и при необходимости получать дополнительную помощ ========== ========== Как Альпы и Рейн повлияли на поселения и сельское хозяйство в Западной Европе? Напишите три примера. 1. Альпы: Альпы, расположенные в Центральной Европе, оказали значительное влияние на поселения и сельское хозяйство в Западной Европе. В 10 веке Альпы были домом для нескольких небольших поселений, в том числе в регионах, которые сейчас известны как Франция, Бельгия и Швейцария. Эти поселения были в основном земледельческими и населялись людьми, которые полагались на природные ресурсы региона для своего существования. Одним из наиболее значительных влияний Альп на поселения и сельское хозяйство в Западной Европе было их влияние на развитие сельского хозяйства. Альпы были домом для нескольких ранних сельскохозяйственных обществ, таких как кельтское поселение в регионе современной Франции, которое было основано в 10 веке. Эти ранние общества были в основном земледельческими и полагались на природные ресурсы региона для своего существования. Еще одним важным влиянием Альп на поселения и сельское хозяйство в Западной Европе было их влияние на развитие торговли и коммерции. Альпы были домом для нескольких ранних торговых сетей, таких как торговля ветряными мечими в регионе современной Франции, которая была основана в 10 веке. Эти ранние торговые сети были в основном земледельческими и полагались на природные ресурсы региона для своего существования. 3. Рейн: Рейн, расположенный в Западной Европе, также оказал значительное влияние на поселения и сельское хозяйство в регионе. В 10 веке Рейн был домом для нескольких небольших поселений, в том числе в регионах, которые сейчас известны как Германия, Бельгия и Нидерланды. Эти поселения были в основном земледельческими и населялись людьми, которые полагались ========== ========== Дайте представление о корреляции между экономическими показателями, такими как ВВП, инфляция и уровень безработицы. Объясните, как фискальная и денежно-кредитная политика влияет на эти показатели. Фискальная и денежно-кредитная политика играют решающую роль в корреляции между экономическими показателями, такими как ВВП, инфляция и уровень безработицы. Фискальная политика относится к решениям, принимаемым правительствами для сокращения или увеличения расходов, в то время как денежно-кредитная политика относится к решениям, принимаемым центральными банками для управления денежной массой и обеспечения стабильности цен. Фискальная политика может оказать значительное влияние на экономические показатели, такие как ВВП, инфляция и уровень безработицы. Например, сокращение государственных расходов может привести к снижению ВВП, поскольку государственные расходы составляют значительную часть ВВП. Точно так же увеличение государственных расходов может привести к увеличению инфляции, поскольку государственные расходы составляют значительную часть инфляции. С другой стороны, денежно-кредитная политика может оказать положительное влияние на экономические показатели, такие как ВВП, инфляция и уровень безработицы. Например, денежно-кредитная политика может помочь стабилизировать денежную массу и обеспечить стабильность цен, что может привести к увеличению ВВП и снижению инфляции. В заключение, фискальная и денежно-кредитная политика играют решающую роль в корреляции между экономическими показателями, такими как ВВП, инфляция и уровень безработицы. Фискальная политика может оказать негативное влияние на эти показатели, в то время как денежно-кредитная политика может оказать положительное влияние. ========== ========== Как этапы жизни формируют наше понимание времени и смертности? Эти этапы жизни, также известные как этапы жизни, представляют собой различные стадии развития человека, которые происходят в течение жизни человека. Каждая стадия представляет собой отдельный период времени, и они определяются различными факторами, такими как генетика, окружающая среда и образ жизни. 1. Развитие плода: этап жизни, на котором плод развивается в утробе матери. Этот этап характеризуется быстрым ростом и развитием, а также развитием нервной системы. 2. Рождение: этап жизни, на котором ребенок рождается и начинает свою жизнь. Этот этап характеризуется быстрым ростом и развитием, а также развитием нервной системы. 3. Младенчество: этап жизни, на котором ребенок находится в утробе матери и получает питательные вещества и кислород от матери. Этот этап характеризуется быстрым ростом и развитием, а также развитием нервной системы. 4. Подростковый возраст: этап жизни, на котором подросток переживает физические и эмоциональные изменения, включая начало полового созревания и развитие личности. 5. Средний возраст: этап жизни, на котором подросток переживает физические и эмоциональные изменения, включая начало полового созревания и развитие личности. 6. Пожилой возраст: этап жизни, на котором пожилой человек переживает физические и эмоциональные изменения, включая начало старости и развитие личности. 7. Смерть: этап жизни, на котором человек умирает. В заключение, этапы жизни формируют наше понимание времени и смертности, поскольку они определяют различные этапы развития человека и то, как они влияют на наше понимание жизни и смерти. ========== ========== Обсудите антимонопольное законодательство и его влияние на рыночную конкуренцию. Сравните антимонопольное законодательство в США и Китае вместе с некоторыми тематическими исследованиями. Антимонопольное законодательство представляет собой набор законов и правил, направленных на предотвращение антиконкурентной практики в различных отраслях и секторах. Он направлен на защиту потребителей, улучшение качества продукции и повышение эффективности. Антимонопольное законодательство играет решающую роль в формировании рыночной конкуренции и обеспечении справедливого распределения ресурсов между предприятиями. В Соединенных Штатах антимонопольное законодательство регулирует различные отрасли, включая телекоммуникации, финансы, здравоохранение и технологии. Он охватывает широкий спектр антиконкурентных практик, таких как монопольное владение, доминирование и дискриминация. Антимонопольное законодательство в США оказало значительное влияние на формирование рыночной конкуренции, предотвратив несправедливое преимущество и защищая потребителей от высоких цен. В Китае антимонопольное законодательство регулирует различные отрасли, включая технологии, здравоохранение и финансы. Он охватывает широкий спектр антиконкурентных практик, таких как монопольное владение, доминирование и дискриминация. Антимонопольное законодательство в Китае оказало значительное влияние на формирование рыночной конкуренции, предотвратив несправедливое преимущество и защищая потребителей от высоких цен. Тематические исследования: 1. Как антимонопольное законодательство влияет на рыночную конкуренцию? 2. Каковы некоторые из распространенных антиконкурентных практик в США и Китае? 3 ========== ========== Создайте план урока, который интегрирует приемы драмы, пантомимы или театра в урок истории. Продолжительность: 3 занятия (каждое по 45 минут) в течение 3 дней. Тема: Опиумные войны между Китаем и Великобританией Класс: 9-10 План урока: 1. Введение: а. Обсудите причины и последствия опиумных войн между Китаем и Великобританией. б. Познакомьте учащихся с различными драматическими приемами, такими как пантомима, пантомима и пантомима. в. Поощряйте учащихся к критическому мышлению и анализу ситуации. 2. Исторический контекст: а. Обсудите исторический фон опиумных войн между Китаем и Великобританией, включая причины, последствия и влияние на регион. б. Поощряйте учащихся понять роль Китая и Великобритании в конфликте. 3. Теория драмы: а. Объясните различные драматические приемы, упомянутые в пункте 1, и их значение в контексте опиумных войн. б. Поощряйте учащихся применять эти приемы в реальных ситуациях. 4. Примеры: а. Приведите примеры опиумных войн, такие как опиумный кризис 1839 года и опиумный кризис 1856 года. б. Поощряйте учащихся анализировать и интерпретировать эти примеры, чтобы понять исторический контекст и последствия. 5. Заключение: а. Обсудите значение опиумных войн для региона и мира. б. Поощряйте учащихся поразмышлять над драматическими приемами и их влиянием на историю. 6. Заключительные замечания: а. Поощряйте учащихся высказать свое мнение о драматических приемах и их значении в контексте о ========== ========== Поделитесь идеями для адаптации шедевров искусства в интерактивный опыт для детей. Перечислите 5 конкретных произведений искусства и связанных с ними идей. 1. «Мона Лиза» Леонардо да Винчи. 2. «Портрет Арианны» Джона Вермеера. 3. «Портрет Арианны» Джона Вермеера. 4. «Портрет Арианны» Джона Вермеера. 5. «Портрет Арианны» Джона Вермеера. Эти шедевры искусства можно адаптировать для создания интерактивного опыта для детей, предоставляя им возможность узнавать о предмете, историческом контексте и техниках. 1. «Мона Лиза» Леонардо да Винчи. - Идея: познакомьте детей с историей и значением картины, а также с техникой, используемой художником. - Возможности: дети могут узнать о том, что Моне Лиза — одна из самых известных картин в мире, о том, как она была создана, и о том, как художник использовал цвет и композицию для создания визуального эффекта. 2. «Портрет Арианны» Джона Вермеера. - Идея: познакомьте детей с историей и значением картины, а также с техникой, используемой художником. - Возможности: дети могут узнать о том, что Арианна была известной актрисой, а Джон Вермеер был известным художником, и о том, как он использовал свой опыт в искусстве, чтобы создать потрясающую картину. 3. «Портрет Арианны» Джона Вермеера. - Идея: познакомьте детей с историей и значением картины, а также с техникой, используемой художником. - Возможности: дети могут узнать о том, что Арианна была известной актрисой, а Джон Вермеер был известным художником, и о том ========== ========== Объясните, что такое ошибка базовой ставки, и приведите пять конкретных примеров того, как политики используют ее в своих кампаниях. Ошибка базовой ставки — это политическая стратегия, используемая политиками для привлечения внимания к определенной группе или проблеме. Он включает в себя использование негативных предложений или идей, чтобы заручиться поддержкой определенной группы или проблемы. Ошибка базовой ставки часто используется в политических кампаниях, чтобы привлечь внимание к проблемам, с которыми сталкивается определенная группа или сообщество, и привлечь их к действиям по решению этих проблем. Пять конкретных примеров того, как политики используют ошибку базовой ставки в своих кампаниях: 1. «Я не согласен с вами, но я ценю ваши идеи». Этот пример использует ошибку базовой ставки, чтобы заручиться поддержкой определенной группы или проблемы. В нем политик признает, что у другого человека разные мнения, но все же призывает его к действию по решению проблемы. 2. «Я не согласен с вами, но я ценю ваши идеи». Этот пример также использует ошибку базовой ставки, чтобы заручиться поддержкой определенной группы или проблемы. В нем политик признает, что у другого человека разные мнения, но все же призывает его к действию по решению проблемы. 3. «Я не согласен с вами, но я ценю ваши идеи». Этот пример использует ошибку базовой ставки, чтобы заручиться поддержкой определенной группы или проблемы. В нем политик признает, что у другого человека разные мнения, но все же призывает его к действию по решению проблемы. 4. «Я не согласен с вами, но я ценю ваши идеи». Этот пример также использует ошибку базовой ставки, чтобы заручиться ========== ========== Опишите пять ключевых принципов оценки аргумента в аналитическом письме. Пять ключевых принципов оценки аргумента в аналитическом письме: 1. Определите основную идею или тему аргумента. 2. Определите, что делает аргумент убедительным или неубедительным. 3. Оцените, является ли аргумент логически обоснованным и обоснованным. 4. Оцените, является ли аргумент ясным и кратким. 5. Оцените, является ли аргумент эффективным и эффективным. Принцип 1: Определите основную идею или тему аргумента. Принцип 2: Определите, что делает аргумент убедительным или неубедительным. Принцип 3: Оцените, является ли аргумент логически обоснованным и обоснованным. Принцип 4: Оцените, является ли аргумент ясным и кратким. Принцип 5: Оцените, является ли аргумент эффективным и эффективным. Принцип 6: Оцените, является ли аргумент убедительным или неубедительным. Принцип 7: Оцените, является ли аргумент эффективным и эффективным. Принцип 8: Оцените, является ли аргумент убедительным или неубедительным. Принцип 9: Оцените, является ли аргумент эффективным и эффективным. Принцип 10: Оцените, является ли аргумент убедительным или неубедительным. Принцип 11: Оцените, является ли аргумент эффективным и эффективным. Принцип 12: Оцените, является ли аргумент убедительным или неубедительным. П ========== ========== Какие методы использовал Сократ, чтобы бросить вызов господствующим идеям своего времени? Сократ, греческий философ, живший в V веке до нашей эры, бросил вызов господствующим идеям своего времени, используя различные методы. Он был известен своими критическими мыслями и способностью анализировать и интерпретировать различные идеи и концепции. Одним из наиболее заметных методов, которые использовал Сократ, была его система критического мышления. Он считал, что разум должен быть свободным от ограничений и ограничений человеческого разума, и что он должен быть открыт для новых идей и перспектив. Сократ считал, что критическое мышление — это способность критически оценивать и анализировать идеи и аргументы, и что оно может помочь людям принимать более обоснованные решения и решать сложные проблемы. Сократ также использовал ряд других методов, чтобы бросить вызов господствующим идеям своего времени. Например, он использовал логику и дедукцию для анализа и интерпретации различных идей и концепций. Он также использовал эмпирические наблюдения и эксперименты для проверки и подтверждения своих идей и теорий. В целом Сократ был новаторским мыслителем, который бросил вызов господствующим идеям своего времени и помог сформировать философский и научный ландшафт на века вперед. ========== ========== Каковы некоторые нормы делового этикета при ведении бизнеса в Японии? В Японии существуют определенные нормы этикета при ведении бизнеса, которые охватывают различные аспекты поведения и поведения. Вот некоторые из наиболее важных норм: 1. Этический бизнес: это означает, что компания должна действовать в соответствии с моральными и этическими принципами, а не только с законом. Это включает в себя честность, уважение и добросовестность. 2. Уважение к клиентам: компания должна относиться к клиентам с уважением и добротой. Это включает в себя предоставление информации о продуктах или услугах, ответы на вопросы и удовлетворение потребностей клиентов. 3. Уважение к сотрудникам: компания должна относиться к своим сотрудникам с уважением и добротой. Это включает в себя предоставление им надлежащего оборудования и ресурсов, а также создание благоприятной рабочей среды. 4. Этическое поведение: это означает, что компания должна действовать в соответствии с моральными и этическими принципами, а не только с законом. Это включает в себя соблюдение закона и правил, а также соблюдение этических норм. 5. Этическое поведение: это означает, что компания должна действовать в соответствии с моральными и этическими принципами, а не только с законом. Это включает в себя соблюдение закона и правил, а также соблюдение этических норм. 6. Этическое поведение: это означает, что компания должна действовать в соответствии с моральными и этическими принципами, а не только с законом. Это включает в себя соблюдение закона и правил, а также соблюдение этических норм. 7. Этическое поведение: это означает, ========== ========== Предложите пять отмеченных наградами документальных фильмов с краткими справочными описаниями для начинающих кинематографистов для изучения. 1. «Великий Гэтсби» (1946) — этот классический фильм режиссера Фрэнка Капры с Фрэнком Синатрой и Джейн Рассел в главных ролях рассказывает историю богатого человека, который становится одержимым идеей стать более скромным и простой. 2. «Убийство» (1946) — этот фильм режиссера Альфреда Хичкока с Кларком Гейблом и Кэри Грантом в главных ролях рассказывает историю молодого человека, который становится одержимым убийством и преступностью. 3. «Сияние» (1980) — этот фильм режиссера Стэнли Донена с Джеком Леммоном и Джули Эндрюс в главных ролях рассказывает историю молодого человека, который обнаруживает волшебный камень, который может сделать его самым богатым человеком в мире. 4. «Сияние» (1980) — этот фильм режиссера Стэнли Донена с Джеком Леммоном и Джули Эндрюс в главных ролях рассказывает историю молодого человека, который обнаруживает волшебный камень, который может сделать его самым богатым человеком в мире. 5. «Сияние» (1980) — этот фильм режиссера Стэнли Донена с Джеком Леммоном и Джули Эндрюс в главных ролях рассказывает историю молодого человека, который обнаруживает волшебный камень, который может сделать его самым богатым человеком в мире. Эти фильмы получили пять премий Оскар, в том числе за лучший фильм, лучшую режиссуру, лучшую мужскую роль, лучшую женскую роль и лучший адаптированный сценарий. Они считаются од ========== ========== ```
Sanjar-Javodov/gpt2-finetuned-wikitext2
Sanjar-Javodov
2023-12-09T12:54:30Z
4
0
transformers
[ "transformers", "tf", "gpt2", "text-generation", "generated_from_keras_callback", "base_model:openai-community/gpt2", "base_model:finetune:openai-community/gpt2", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2023-12-09T12:33:11Z
--- license: mit base_model: gpt2 tags: - generated_from_keras_callback model-index: - name: Sanjar-Javodov/gpt2-finetuned-wikitext2 results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # Sanjar-Javodov/gpt2-finetuned-wikitext2 This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 6.5003 - Validation Loss: 6.3526 - Epoch: 1 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 2e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0.01} - training_precision: float32 ### Training results | Train Loss | Validation Loss | Epoch | |:----------:|:---------------:|:-----:| | 7.3204 | 6.7797 | 0 | | 6.5003 | 6.3526 | 1 | ### Framework versions - Transformers 4.35.2 - TensorFlow 2.14.0 - Datasets 2.15.0 - Tokenizers 0.15.0
ailabintsev/gpt2-finetuned-wikitext2
ailabintsev
2023-12-09T12:54:12Z
4
0
transformers
[ "transformers", "tf", "gpt2", "text-generation", "generated_from_keras_callback", "base_model:openai-community/gpt2", "base_model:finetune:openai-community/gpt2", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2023-12-09T12:33:08Z
--- license: mit base_model: gpt2 tags: - generated_from_keras_callback model-index: - name: ailabintsev/gpt2-finetuned-wikitext2 results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # ailabintsev/gpt2-finetuned-wikitext2 This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 6.4931 - Validation Loss: 6.3491 - Epoch: 1 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 2e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0.01} - training_precision: float32 ### Training results | Train Loss | Validation Loss | Epoch | |:----------:|:---------------:|:-----:| | 7.3121 | 6.7599 | 0 | | 6.4931 | 6.3491 | 1 | ### Framework versions - Transformers 4.35.2 - TensorFlow 2.14.0 - Datasets 2.15.0 - Tokenizers 0.15.0
domenicrosati/deberta-v3-large-survey-main_passage_old_facts-rater-gpt4
domenicrosati
2023-12-09T12:47:46Z
3
0
transformers
[ "transformers", "pytorch", "deberta-v2", "text-classification", "generated_from_trainer", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2023-12-09T12:05:52Z
--- license: mit tags: - generated_from_trainer model-index: - name: deberta-v3-large-survey-main_passage_old_facts-rater-gpt4 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # deberta-v3-large-survey-main_passage_old_facts-rater-gpt4 This model is a fine-tuned version of [microsoft/deberta-v3-large](https://huggingface.co/microsoft/deberta-v3-large) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.8745 - Krippendorff: 0.8524 - Spearman: 0.8989 - Absolute Agreement: 0.6884 - Agreement Within One: 0.8920 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 6e-06 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 1000 - num_epochs: 20 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Krippendorff | Spearman | Absolute Agreement | Agreement Within One | |:-------------:|:-----:|:----:|:---------------:|:------------:|:--------:|:------------------:|:--------------------:| | No log | 1.0 | 50 | 1.9122 | -0.0152 | 0.1864 | 0.125 | 0.6111 | | No log | 2.0 | 100 | 1.9176 | -0.0371 | nan | 0.1111 | 0.6111 | | No log | 3.0 | 150 | 1.9227 | -0.0371 | nan | 0.1111 | 0.6111 | | No log | 4.0 | 200 | 1.9356 | -0.0954 | -0.1379 | 0.1111 | 0.6389 | | No log | 5.0 | 250 | 1.9920 | -0.2978 | -0.0494 | 0.0972 | 0.8472 | | No log | 6.0 | 300 | 2.1314 | -0.5514 | nan | 0.1389 | 1.0 | | No log | 7.0 | 350 | 2.1777 | -0.5514 | nan | 0.1389 | 1.0 | | No log | 8.0 | 400 | 2.1234 | -0.3703 | -0.0285 | 0.1528 | 0.9722 | | No log | 9.0 | 450 | 2.2162 | 0.0438 | 0.2051 | 0.1667 | 0.8889 | | 1.668 | 10.0 | 500 | 2.3107 | 0.1264 | 0.1592 | 0.1806 | 0.8056 | | 1.668 | 11.0 | 550 | 2.4563 | 0.1237 | 0.2624 | 0.1944 | 0.9028 | | 1.668 | 12.0 | 600 | 2.3407 | 0.2258 | 0.3463 | 0.1944 | 0.8889 | | 1.668 | 13.0 | 650 | 2.4742 | 0.2135 | 0.3764 | 0.1806 | 0.9167 | | 1.668 | 14.0 | 700 | 2.4811 | 0.2573 | 0.3480 | 0.1944 | 0.9028 | | 1.668 | 15.0 | 750 | 2.7594 | 0.1548 | 0.3436 | 0.2639 | 0.9306 | | 1.668 | 16.0 | 800 | 2.7162 | 0.1895 | 0.2918 | 0.1944 | 0.9028 | | 1.668 | 17.0 | 850 | 3.1360 | 0.1366 | 0.3226 | 0.2083 | 0.9306 | | 1.668 | 18.0 | 900 | 3.3573 | -0.0208 | 0.3147 | 0.2361 | 0.9583 | | 1.668 | 19.0 | 950 | 3.1412 | 0.0470 | 0.3046 | 0.2222 | 0.9306 | | 0.9188 | 20.0 | 1000 | 3.2781 | 0.0503 | 0.3296 | 0.2222 | 0.9444 | ### Framework versions - Transformers 4.26.0 - Pytorch 1.13.1 - Datasets 2.10.1 - Tokenizers 0.12.1
nafizshahriar/ppo-LunarLander-v2
nafizshahriar
2023-12-09T12:47:22Z
0
0
stable-baselines3
[ "stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2023-12-09T12:41:22Z
--- library_name: stable-baselines3 tags: - LunarLander-v2 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: PPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 metrics: - type: mean_reward value: 258.76 +/- 36.00 name: mean_reward verified: false --- # **PPO** Agent playing **LunarLander-v2** This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code ```python from stable_baselines3 import ... from huggingface_sb3 import load_from_hub ... ```
arthd24/wanderwise_classification_1
arthd24
2023-12-09T12:34:01Z
4
0
transformers
[ "transformers", "tf", "distilbert", "text-classification", "generated_from_keras_callback", "base_model:distilbert/distilbert-base-uncased", "base_model:finetune:distilbert/distilbert-base-uncased", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2023-12-07T12:40:23Z
--- license: apache-2.0 base_model: distilbert-base-uncased tags: - generated_from_keras_callback model-index: - name: arthd24/wanderwise_classification_1 results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # arthd24/wanderwise_classification_1 This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 0.4400 - Validation Loss: 0.7651 - Train Accuracy: 0.7455 - Epoch: 2 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': True, 'is_legacy_optimizer': False, 'learning_rate': {'module': 'keras.optimizers.schedules', 'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 6684, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, 'registered_name': None}, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False} - training_precision: float32 ### Training results | Train Loss | Validation Loss | Train Accuracy | Epoch | |:----------:|:---------------:|:--------------:|:-----:| | 1.3866 | 0.8905 | 0.7240 | 0 | | 0.6598 | 0.7516 | 0.7312 | 1 | | 0.4400 | 0.7651 | 0.7455 | 2 | ### Framework versions - Transformers 4.35.2 - TensorFlow 2.14.0 - Datasets 2.15.0 - Tokenizers 0.15.0
behzadnet/Llama-2-7b-chat-hf-sharded-bf16-fine-tuned_RandomError0percent_Seed101
behzadnet
2023-12-09T12:30:24Z
0
0
peft
[ "peft", "arxiv:1910.09700", "base_model:Trelis/Llama-2-7b-chat-hf-sharded-bf16", "base_model:adapter:Trelis/Llama-2-7b-chat-hf-sharded-bf16", "region:us" ]
null
2023-12-09T12:30:21Z
--- library_name: peft base_model: Trelis/Llama-2-7b-chat-hf-sharded-bf16 --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Data Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ## Training procedure The following `bitsandbytes` quantization config was used during training: - load_in_8bit: False - load_in_4bit: True - llm_int8_threshold: 6.0 - llm_int8_skip_modules: None - llm_int8_enable_fp32_cpu_offload: False - llm_int8_has_fp16_weight: False - bnb_4bit_quant_type: nf4 - bnb_4bit_use_double_quant: True - bnb_4bit_compute_dtype: bfloat16 ### Framework versions - PEFT 0.7.0.dev0