Dataset schema:

| Column | Type | Observed range |
|---|---|---|
| modelId | string | length 5 to 139 |
| author | string | length 2 to 42 |
| last_modified | timestamp[us, tz=UTC] | 2020-02-15 11:33:14 to 2025-09-13 06:30:42 |
| downloads | int64 | 0 to 223M |
| likes | int64 | 0 to 11.7k |
| library_name | string | 556 distinct values |
| tags | list | length 1 to 4.05k |
| pipeline_tag | string | 55 distinct values |
| createdAt | timestamp[us, tz=UTC] | 2022-03-02 23:29:04 to 2025-09-13 06:27:56 |
| card | string | length 11 to 1.01M |
Benphil/CoT-multiDomain-Summ
|
Benphil
| 2024-06-10T16:08:30Z | 6 | 0 |
transformers
|
[
"transformers",
"safetensors",
"pegasus",
"text2text-generation",
"generated_from_trainer",
"base_model:google/pegasus-cnn_dailymail",
"base_model:finetune:google/pegasus-cnn_dailymail",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2024-06-10T09:43:33Z |
---
base_model: google/pegasus-cnn_dailymail
tags:
- generated_from_trainer
model-index:
- name: CoT-multiDomain-Summ
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# CoT-multiDomain-Summ
This model is a fine-tuned version of [google/pegasus-cnn_dailymail](https://huggingface.co/google/pegasus-cnn_dailymail) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1456
## Model description
More information needed
## Intended uses & limitations
More information needed
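In the meantime, a minimal usage sketch (assuming the checkpoint works with the standard `transformers` summarization pipeline, as the `pegasus` and `text2text-generation` tags suggest):
```python
from transformers import pipeline

# Hedged sketch: assumes the standard summarization pipeline applies.
summarizer = pipeline("summarization", model="Benphil/CoT-multiDomain-Summ")

article = "Long input document to be summarized goes here..."
print(summarizer(article, max_length=128, min_length=32)[0]["summary_text"])
```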
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 438 | 1.2374 |
| 4.18 | 2.0 | 876 | 1.1642 |
| 1.1654 | 3.0 | 1314 | 1.1482 |
| 1.0725 | 4.0 | 1752 | 1.1456 |
### Framework versions
- Transformers 4.41.1
- Pytorch 2.3.0+cu118
- Datasets 2.19.1
- Tokenizers 0.19.1
|
matthieulel/vit-base-patch16-384-finetuned-galaxy10-decals
|
matthieulel
| 2024-06-10T16:06:56Z | 276 | 0 |
transformers
|
[
"transformers",
"safetensors",
"vit",
"image-classification",
"vision",
"generated_from_trainer",
"base_model:google/vit-base-patch16-384",
"base_model:finetune:google/vit-base-patch16-384",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2024-06-10T13:24:41Z |
---
license: apache-2.0
base_model: google/vit-base-patch16-384
tags:
- image-classification
- vision
- generated_from_trainer
metrics:
- accuracy
- precision
- recall
- f1
model-index:
- name: vit-base-patch16-384-finetuned-galaxy10-decals
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vit-base-patch16-384-finetuned-galaxy10-decals
This model is a fine-tuned version of [google/vit-base-patch16-384](https://huggingface.co/google/vit-base-patch16-384) on the matthieulel/galaxy10_decals dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5422
- Accuracy: 0.8613
- Precision: 0.8600
- Recall: 0.8613
- F1: 0.8596
## Model description
More information needed
## Intended uses & limitations
More information needed
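In the meantime, a minimal inference sketch (assuming the standard image-classification pipeline applies, per the `vit` and `image-classification` tags; the image path is a placeholder):
```python
from transformers import pipeline

classifier = pipeline(
    "image-classification",
    model="matthieulel/vit-base-patch16-384-finetuned-galaxy10-decals",
)
# Accepts a local path, URL, or PIL image of a galaxy cutout.
for prediction in classifier("galaxy.jpg"):
    print(f"{prediction['label']}: {prediction['score']:.3f}")
```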
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 512
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 30
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:---------:|:------:|:------:|
| 1.5894 | 0.99 | 31 | 1.2732 | 0.5744 | 0.5409 | 0.5744 | 0.5481 |
| 0.8001 | 1.98 | 62 | 0.6184 | 0.7976 | 0.7934 | 0.7976 | 0.7880 |
| 0.6895 | 2.98 | 93 | 0.5823 | 0.8067 | 0.7991 | 0.8067 | 0.7955 |
| 0.6259 | 4.0 | 125 | 0.4910 | 0.8433 | 0.8427 | 0.8433 | 0.8368 |
| 0.556 | 4.99 | 156 | 0.4874 | 0.8467 | 0.8465 | 0.8467 | 0.8465 |
| 0.5116 | 5.98 | 187 | 0.4734 | 0.8546 | 0.8569 | 0.8546 | 0.8518 |
| 0.4877 | 6.98 | 218 | 0.4539 | 0.8461 | 0.8429 | 0.8461 | 0.8428 |
| 0.4383 | 8.0 | 250 | 0.4716 | 0.8377 | 0.8399 | 0.8377 | 0.8345 |
| 0.4267 | 8.99 | 281 | 0.4355 | 0.8602 | 0.8576 | 0.8602 | 0.8559 |
| 0.4022 | 9.98 | 312 | 0.4758 | 0.8377 | 0.8377 | 0.8377 | 0.8356 |
| 0.3811 | 10.98 | 343 | 0.4538 | 0.8495 | 0.8471 | 0.8495 | 0.8474 |
| 0.3612 | 12.0 | 375 | 0.4808 | 0.8439 | 0.8412 | 0.8439 | 0.8399 |
| 0.363 | 12.99 | 406 | 0.4751 | 0.8467 | 0.8502 | 0.8467 | 0.8458 |
| 0.3198 | 13.98 | 437 | 0.4800 | 0.8489 | 0.8497 | 0.8489 | 0.8450 |
| 0.3192 | 14.98 | 468 | 0.4834 | 0.8574 | 0.8580 | 0.8574 | 0.8570 |
| 0.3041 | 16.0 | 500 | 0.4879 | 0.8495 | 0.8500 | 0.8495 | 0.8443 |
| 0.2607 | 16.99 | 531 | 0.4958 | 0.8540 | 0.8529 | 0.8540 | 0.8523 |
| 0.2649 | 17.98 | 562 | 0.4927 | 0.8579 | 0.8570 | 0.8579 | 0.8562 |
| 0.2553 | 18.98 | 593 | 0.5095 | 0.8495 | 0.8473 | 0.8495 | 0.8474 |
| 0.2453 | 20.0 | 625 | 0.5162 | 0.8495 | 0.8467 | 0.8495 | 0.8467 |
| 0.2417 | 20.99 | 656 | 0.5375 | 0.8579 | 0.8573 | 0.8579 | 0.8543 |
| 0.241 | 21.98 | 687 | 0.5129 | 0.8568 | 0.8546 | 0.8568 | 0.8547 |
| 0.2257 | 22.98 | 718 | 0.5316 | 0.8596 | 0.8584 | 0.8596 | 0.8571 |
| 0.2087 | 24.0 | 750 | 0.5530 | 0.8512 | 0.8497 | 0.8512 | 0.8489 |
| 0.2196 | 24.99 | 781 | 0.5422 | 0.8613 | 0.8600 | 0.8613 | 0.8596 |
| 0.1975 | 25.98 | 812 | 0.5672 | 0.8529 | 0.8534 | 0.8529 | 0.8508 |
| 0.2135 | 26.98 | 843 | 0.5697 | 0.8523 | 0.8513 | 0.8523 | 0.8509 |
| 0.1946 | 28.0 | 875 | 0.5598 | 0.8557 | 0.8542 | 0.8557 | 0.8536 |
| 0.2006 | 28.99 | 906 | 0.5582 | 0.8591 | 0.8566 | 0.8591 | 0.8560 |
| 0.1968 | 29.76 | 930 | 0.5571 | 0.8591 | 0.8571 | 0.8591 | 0.8564 |
### Framework versions
- Transformers 4.37.2
- Pytorch 2.3.0
- Datasets 2.19.1
- Tokenizers 0.15.1
|
UdS-LSV/mcse-flickr-bert-base-uncased
|
UdS-LSV
| 2024-06-10T16:05:44Z | 106 | 0 |
transformers
|
[
"transformers",
"safetensors",
"bert",
"feature-extraction",
"en",
"license:mit",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
feature-extraction
| 2024-06-10T15:22:23Z |
---
library_name: transformers
license: mit
language:
- en
metrics:
- spearmanr
---
# MCSE: Multimodal Contrastive Learning of Sentence Embeddings (NAACL 2022)
Paper link: https://aclanthology.org/2022.naacl-main.436/
Github: https://github.com/uds-lsv/MCSE
Author list: Miaoran Zhang, Marius Mosbach, David Adelani, Michael Hedderich, Dietrich Klakow
## Model Details
- base model: [bert-base-uncased](https://huggingface.co/google-bert/bert-base-uncased)
- training data: Wiki1M + Flickr30k
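## Usage
A minimal sketch for extracting sentence embeddings (this assumes SimCSE-style `[CLS]` pooling, which is the convention in the MCSE codebase; see the GitHub link above for the exact evaluation setup):
```python
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("UdS-LSV/mcse-flickr-bert-base-uncased")
model = AutoModel.from_pretrained("UdS-LSV/mcse-flickr-bert-base-uncased")

sentences = ["A man is playing a guitar.", "Someone plays an instrument."]
batch = tokenizer(sentences, padding=True, truncation=True, return_tensors="pt")
with torch.no_grad():
    # Assumption: take the [CLS] token representation, as in the SimCSE family.
    embeddings = model(**batch).last_hidden_state[:, 0]

print(torch.nn.functional.cosine_similarity(embeddings[0], embeddings[1], dim=0).item())
```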
## Evaluation Results
| STS12 | STS13 | STS14 | STS15 | STS16 | STSBenchmark | SICKRelatedness | Avg. |
|:------:|:------:|:------:|:------:|:------:|:------------:|:---------------:|:------:|
| 71.63 | 82.13 | 75.94 | 84.63 | 77.50 | 79.96 | 72.12 | 77.70 |
|
KuanP/baseline_2024-06-10_11-43-48_fold_2
|
KuanP
| 2024-06-10T16:04:18Z | 34 | 0 |
transformers
|
[
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-06-10T16:04:12Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Gregorig/bert-base-uncased-finetuned
|
Gregorig
| 2024-06-10T16:03:39Z | 108 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:google-bert/bert-base-uncased",
"base_model:finetune:google-bert/bert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-06-05T21:43:08Z |
---
license: apache-2.0
base_model: bert-base-uncased
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: bert-base-uncased-finetuned
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-uncased-finetuned
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0943
- Accuracy: 0.51
- F1: 0.4933
## Model description
More information needed
## Intended uses & limitations
More information needed
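In the meantime, a minimal sketch (assuming the standard text-classification pipeline applies; the label set is not documented, so expect generic `LABEL_i` ids unless the config maps them to names):
```python
from transformers import pipeline

classifier = pipeline("text-classification", model="Gregorig/bert-base-uncased-finetuned")
print(classifier("Example sentence to classify."))
```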
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 1.2987 | 1.0 | 51 | 1.2275 | 0.455 | 0.4242 |
| 1.1568 | 2.0 | 102 | 1.1175 | 0.515 | 0.4885 |
| 1.0445 | 3.0 | 153 | 1.0943 | 0.51 | 0.4933 |
### Framework versions
- Transformers 4.41.2
- Pytorch 2.3.0+cu121
- Tokenizers 0.19.1
|
NikolayKozloff/Llama-3-Steerpike-v1-OAS-8B-Q4_0-GGUF
|
NikolayKozloff
| 2024-06-10T16:01:27Z | 8 | 1 |
transformers
|
[
"transformers",
"gguf",
"mergekit",
"merge",
"llama-cpp",
"gguf-my-repo",
"base_model:grimjim/Llama-3-Steerpike-v1-OAS-8B",
"base_model:quantized:grimjim/Llama-3-Steerpike-v1-OAS-8B",
"license:llama3",
"endpoints_compatible",
"region:us"
] | null | 2024-06-10T16:01:12Z |
---
license: llama3
library_name: transformers
tags:
- mergekit
- merge
- llama-cpp
- gguf-my-repo
base_model: grimjim/Llama-3-Steerpike-v1-OAS-8B
license_link: LICENSE
---
# NikolayKozloff/Llama-3-Steerpike-v1-OAS-8B-Q4_0-GGUF
This model was converted to GGUF format from [`grimjim/Llama-3-Steerpike-v1-OAS-8B`](https://huggingface.co/grimjim/Llama-3-Steerpike-v1-OAS-8B) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/grimjim/Llama-3-Steerpike-v1-OAS-8B) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on macOS and Linux):
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama --hf-repo NikolayKozloff/Llama-3-Steerpike-v1-OAS-8B-Q4_0-GGUF --hf-file llama-3-steerpike-v1-oas-8b-q4_0.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo NikolayKozloff/Llama-3-Steerpike-v1-OAS-8B-Q4_0-GGUF --hf-file llama-3-steerpike-v1-oas-8b-q4_0.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```bash
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag, along with any other hardware-specific flags (e.g., `LLAMA_CUDA=1` for NVIDIA GPUs on Linux).
```bash
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```bash
./main --hf-repo NikolayKozloff/Llama-3-Steerpike-v1-OAS-8B-Q4_0-GGUF --hf-file llama-3-steerpike-v1-oas-8b-q4_0.gguf -p "The meaning to life and the universe is"
```
or
```bash
./server --hf-repo NikolayKozloff/Llama-3-Steerpike-v1-OAS-8B-Q4_0-GGUF --hf-file llama-3-steerpike-v1-oas-8b-q4_0.gguf -c 2048
```
|
chreh/style_editor_gguf
|
chreh
| 2024-06-10T16:01:06Z | 6 | 0 |
transformers
|
[
"transformers",
"gguf",
"llama",
"text-generation-inference",
"unsloth",
"en",
"base_model:unsloth/llama-3-8b-Instruct-bnb-4bit",
"base_model:quantized:unsloth/llama-3-8b-Instruct-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-06-10T15:58:58Z |
---
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- gguf
base_model: unsloth/llama-3-8b-Instruct-bnb-4bit
---
# Uploaded model
- **Developed by:** chreh
- **License:** apache-2.0
- **Finetuned from model:** unsloth/llama-3-8b-Instruct-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
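No usage snippet is included, so here is a minimal sketch with `llama-cpp-python`; the `.gguf` filename below is a placeholder, so check the repository's file list for the actual name:
```python
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

# Placeholder filename: replace with the actual .gguf file in this repo.
model_path = hf_hub_download("chreh/style_editor_gguf", filename="model.gguf")

llm = Llama(model_path=model_path, n_ctx=2048)
out = llm("Rewrite the following sentence in a formal style: hey, what's up?", max_tokens=128)
print(out["choices"][0]["text"])
```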
|
badrex/xls-r-300-cv17-czech-adap-pl
|
badrex
| 2024-06-10T15:57:13Z | 6 | 0 |
transformers
|
[
"transformers",
"safetensors",
"wav2vec2",
"automatic-speech-recognition",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2024-06-10T09:50:16Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
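Pending details from the authors, a minimal sketch under the assumption that the `wav2vec2` and `automatic-speech-recognition` tags describe a standard CTC checkpoint usable with the `transformers` pipeline:
```python
from transformers import pipeline

# Assumption: standard wav2vec2 CTC checkpoint; audio should be 16 kHz mono.
asr = pipeline("automatic-speech-recognition", model="badrex/xls-r-300-cv17-czech-adap-pl")
print(asr("speech_sample.wav")["text"])  # placeholder path to an audio file
```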
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
itspxsh/distilbert-base-uncased-finetuned-ner
|
itspxsh
| 2024-06-10T15:56:54Z | 106 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"distilbert",
"token-classification",
"generated_from_trainer",
"dataset:conll2003",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2024-06-10T15:44:41Z |
---
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
datasets:
- conll2003
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: distilbert-base-uncased-finetuned-ner
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: conll2003
type: conll2003
config: conll2003
split: validation
args: conll2003
metrics:
- name: Precision
type: precision
value: 0.9285872453498671
- name: Recall
type: recall
value: 0.9382481261886118
- name: F1
type: f1
value: 0.93339268821991
- name: Accuracy
type: accuracy
value: 0.9837800054013695
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-ner
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the conll2003 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0598
- Precision: 0.9286
- Recall: 0.9382
- F1: 0.9334
- Accuracy: 0.9838
## Model description
More information needed
## Intended uses & limitations
More information needed
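In the meantime, a minimal sketch (assuming the standard token-classification pipeline applies; the conll2003 label set covers PER, ORG, LOC, and MISC entities):
```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="itspxsh/distilbert-base-uncased-finetuned-ner",
    aggregation_strategy="simple",  # merge sub-word pieces into whole entities
)
print(ner("Hugging Face is based in New York City."))
```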
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.2492 | 1.0 | 878 | 0.0727 | 0.8991 | 0.9164 | 0.9077 | 0.9791 |
| 0.052 | 2.0 | 1756 | 0.0595 | 0.9232 | 0.9332 | 0.9282 | 0.9829 |
| 0.0313 | 3.0 | 2634 | 0.0598 | 0.9286 | 0.9382 | 0.9334 | 0.9838 |
### Framework versions
- Transformers 4.41.2
- Pytorch 2.3.0+cu121
- Datasets 2.19.2
- Tokenizers 0.19.1
|
yatharth97/T5-base-news-summarization
|
yatharth97
| 2024-06-10T15:46:56Z | 110 | 0 |
transformers
|
[
"transformers",
"safetensors",
"t5",
"text2text-generation",
"generated_from_trainer",
"summarization",
"finance-news",
"base_model:google-t5/t5-base",
"base_model:finetune:google-t5/t5-base",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
summarization
| 2024-04-17T14:06:05Z |
---
license: apache-2.0
base_model: t5-base
tags:
- generated_from_trainer
- summarization
- finance-news
model-index:
- name: t5-base-finance-news-summarization
results: []
---
# t5-base-finance-news-summarization
This model is a fine-tuned version of [t5-base](https://huggingface.co/t5-base), trained to summarize finance-related news articles.
## Model description
T5-Base Finance News Summarization is optimized for transforming lengthy financial news into concise summaries. This tool aids stakeholders in quickly understanding market dynamics and financial updates without reading full articles.
## Intended uses & limitations
The model is intended for use in financial sectors by analysts, economists, and journalists needing quick summaries of finance news. It may not perform well with general news or in highly technical or academic finance contexts.
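A minimal inference sketch (assuming the conventional T5 `summarize:` prefix, which the card does not confirm was used during fine-tuning):
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_id = "yatharth97/T5-base-news-summarization"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

# Assumption: the conventional T5 task prefix; adjust if training used another.
inputs = tokenizer("summarize: " + "Full finance news article text...",
                   return_tensors="pt", truncation=True)
summary_ids = model.generate(**inputs, max_new_tokens=96, num_beams=4)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```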
## Training and evaluation data
Trained on a diverse collection of finance news articles from reputable sources, each paired with a reference summary.
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9, 0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Framework versions
- Transformers 4.38.2
- Pytorch 2.2.1+cu121
- Tokenizers 0.15.2
|
SidXXD/blend_factor_157
|
SidXXD
| 2024-06-10T15:45:16Z | 1 | 0 |
diffusers
|
[
"diffusers",
"tensorboard",
"safetensors",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"custom-diffusion",
"base_model:stabilityai/stable-diffusion-2-1-base",
"base_model:adapter:stabilityai/stable-diffusion-2-1-base",
"license:creativeml-openrail-m",
"region:us"
] |
text-to-image
| 2024-06-07T09:05:01Z |
---
license: creativeml-openrail-m
base_model: stabilityai/stable-diffusion-2-1-base
instance_prompt: photo of a <v1*> cat
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- custom-diffusion
inference: true
---
# Custom Diffusion - SidXXD/blend_factor_157
These are Custom Diffusion adaptation weights for stabilityai/stable-diffusion-2-1-base. The weights were trained on the prompt `photo of a <v1*> cat` using [Custom Diffusion](https://www.cs.cmu.edu/~custom-diffusion). You can find some example images below.
For more details on the training, please follow [this link](https://github.com/huggingface/diffusers/blob/main/examples/custom_diffusion).
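A hedged inference sketch based on the diffusers Custom Diffusion example; the weight filenames below follow that example script and are not confirmed by this repository's file list:
```python
import torch
from diffusers import DiffusionPipeline

pipe = DiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-2-1-base", torch_dtype=torch.float16
).to("cuda")

# Assumed filenames from the diffusers Custom Diffusion example script.
pipe.unet.load_attn_procs("SidXXD/blend_factor_157",
                          weight_name="pytorch_custom_diffusion_weights.bin")
pipe.load_textual_inversion("SidXXD/blend_factor_157", weight_name="<v1*>.bin")

image = pipe("photo of a <v1*> cat", num_inference_steps=50, guidance_scale=7.5).images[0]
image.save("cat.png")
```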
|
oumaymaMb/Roberta_Text_Classification_v6
|
oumaymaMb
| 2024-06-10T15:44:36Z | 6 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"roberta",
"text-classification",
"generated_from_trainer",
"base_model:FacebookAI/roberta-base",
"base_model:finetune:FacebookAI/roberta-base",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-06-10T14:49:25Z |
---
license: mit
base_model: roberta-base
tags:
- generated_from_trainer
model-index:
- name: Roberta_Text_Classification_v6
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Roberta_Text_Classification_v6
This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0230
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 30
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.023 | 1.0 | 279 | 0.2139 |
| 0.0967 | 2.0 | 558 | 0.0710 |
| 0.0004 | 3.0 | 837 | 0.0981 |
| 0.1198 | 4.0 | 1116 | 0.0474 |
| 0.1475 | 5.0 | 1395 | 0.1094 |
| 0.0008 | 6.0 | 1674 | 0.0379 |
| 0.2435 | 7.0 | 1953 | 0.0536 |
| 0.0001 | 8.0 | 2232 | 0.0765 |
| 0.0002 | 9.0 | 2511 | 0.0483 |
| 0.0002 | 10.0 | 2790 | 0.0406 |
| 0.0001 | 11.0 | 3069 | 0.0430 |
| 0.0001 | 12.0 | 3348 | 0.0399 |
| 0.0002 | 13.0 | 3627 | 0.0230 |
| 0.0002 | 14.0 | 3906 | 0.0353 |
| 0.0671 | 15.0 | 4185 | 0.0724 |
| 0.0154 | 16.0 | 4464 | 0.1768 |
| 0.0002 | 17.0 | 4743 | 0.0470 |
| 0.0001 | 18.0 | 5022 | 0.0451 |
| 0.2172 | 19.0 | 5301 | 0.0504 |
| 0.0128 | 20.0 | 5580 | 0.0676 |
| 0.0001 | 21.0 | 5859 | 0.1007 |
| 0.0001 | 22.0 | 6138 | 0.0799 |
| 0.0001 | 23.0 | 6417 | 0.0616 |
| 0.0 | 24.0 | 6696 | 0.0621 |
| 0.0 | 25.0 | 6975 | 0.0625 |
| 0.0 | 26.0 | 7254 | 0.0628 |
| 0.0 | 27.0 | 7533 | 0.0631 |
| 0.0 | 28.0 | 7812 | 0.0633 |
| 0.0 | 29.0 | 8091 | 0.0637 |
| 0.0 | 30.0 | 8370 | 0.0638 |
### Framework versions
- Transformers 4.41.2
- Pytorch 2.3.0+cu121
- Datasets 2.19.2
- Tokenizers 0.19.1
|
Mourad/thinqai_awesome_mnist_model
|
Mourad
| 2024-06-10T15:44:05Z | 197 | 0 |
transformers
|
[
"transformers",
"safetensors",
"vit",
"image-classification",
"dataset:ylecun/mnist",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2024-06-10T14:38:01Z |
---
datasets:
- ylecun/mnist
metrics:
- accuracy
pipeline_tag: image-classification
---
|
miguelpezo/terceraprueba
|
miguelpezo
| 2024-06-10T15:43:41Z | 160 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"xlm-roberta",
"text-classification",
"generated_from_trainer",
"base_model:FacebookAI/xlm-roberta-base",
"base_model:finetune:FacebookAI/xlm-roberta-base",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-06-10T15:30:24Z |
---
license: mit
base_model: xlm-roberta-base
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: terceraprueba
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# terceraprueba
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0397
- Accuracy: 0.54
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:------:|:----:|:---------------:|:--------:|
| 1.6369 | 0.16 | 20 | 1.6191 | 0.25 |
| 1.6334 | 0.32 | 40 | 1.5859 | 0.24 |
| 1.6161 | 0.48 | 60 | 1.6090 | 0.26 |
| 1.6157 | 0.64 | 80 | 1.6140 | 0.155 |
| 1.62 | 0.8 | 100 | 1.6157 | 0.22 |
| 1.6311 | 0.96 | 120 | 1.6024 | 0.24 |
| 1.629 | 1.12 | 140 | 1.6102 | 0.165 |
| 1.6211 | 1.28 | 160 | 1.6152 | 0.22 |
| 1.6273 | 1.44 | 180 | 1.6083 | 0.22 |
| 1.6293 | 1.6 | 200 | 1.6101 | 0.22 |
| 1.6283 | 1.76 | 220 | 1.6101 | 0.25 |
| 1.6239 | 1.92 | 240 | 1.6072 | 0.245 |
| 1.6095 | 2.08 | 260 | 1.5872 | 0.27 |
| 1.6145 | 2.24 | 280 | 1.6071 | 0.22 |
| 1.6052 | 2.4 | 300 | 1.5858 | 0.235 |
| 1.5458 | 2.56 | 320 | 1.3771 | 0.4 |
| 1.4475 | 2.72 | 340 | 1.4233 | 0.375 |
| 1.4089 | 2.88 | 360 | 1.4001 | 0.4 |
| 1.5145 | 3.04 | 380 | 1.4117 | 0.385 |
| 1.3812 | 3.2 | 400 | 1.4008 | 0.375 |
| 1.427 | 3.36 | 420 | 1.3099 | 0.42 |
| 1.378 | 3.52 | 440 | 1.2759 | 0.455 |
| 1.386 | 3.68 | 460 | 1.3181 | 0.4 |
| 1.2394 | 3.84 | 480 | 1.1192 | 0.47 |
| 1.1776 | 4.0 | 500 | 1.0973 | 0.47 |
| 1.0816 | 4.16 | 520 | 1.1384 | 0.42 |
| 1.1577 | 4.32 | 540 | 1.0555 | 0.51 |
| 1.0716 | 4.48 | 560 | 1.0644 | 0.51 |
| 1.0234 | 4.64 | 580 | 1.0456 | 0.535 |
| 0.9915 | 4.8 | 600 | 1.0481 | 0.5 |
| 0.9612 | 4.96 | 620 | 1.0397 | 0.54 |
### Framework versions
- Transformers 4.41.2
- Pytorch 2.3.0+cu121
- Datasets 2.19.2
- Tokenizers 0.19.1
|
Hichemhadhri/nlpeople-mcq
|
Hichemhadhri
| 2024-06-10T15:41:19Z | 7 | 0 |
transformers
|
[
"transformers",
"safetensors",
"phi3",
"text-generation",
"conversational",
"custom_code",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-06-10T13:08:43Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
iloncka/exp_5_old_bg-subs_1_v_5_eva02_tiny_patch14_224.mim_in22k_ep_60
|
iloncka
| 2024-06-10T15:40:58Z | 0 | 0 |
fastai
|
[
"fastai",
"region:us"
] | null | 2024-06-03T14:51:27Z |
---
tags:
- fastai
---
# Amazing!
🥳 Congratulations on hosting your fastai model on the Hugging Face Hub!
# Some next steps
1. Fill out this model card with more information (see the template below and the [documentation here](https://huggingface.co/docs/hub/model-repos))!
2. Create a demo in Gradio or Streamlit using 🤗 Spaces ([documentation here](https://huggingface.co/docs/hub/spaces)).
3. Join the fastai community on the [Fastai Discord](https://discord.com/invite/YKrxeNn)!
Greetings fellow fastlearner 🤝! Don't forget to delete this content from your model card.
---
# Model card
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
|
yatharth97/llama2-7b
|
yatharth97
| 2024-06-10T15:40:25Z | 8 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"chatbot",
"task-oriented",
"multi-turn-qa",
"English",
"fine-tuned",
"meta-llama2-7b",
"financial-reports",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-04-30T00:08:27Z |
---
library_name: transformers
tags: [chatbot, task-oriented, multi-turn-qa, English, fine-tuned, meta-llama2-7b, financial-reports]
---
# Model Card for Meta LLAMA2-7B Custom Task-Oriented Chatbot
This model is a fine-tuned version of Meta's LLAMA2-7B model, adapted to function as a task-oriented chatbot that processes and answers questions related to financial 10K reports.
## Model Details
### Model Description
Developed by Yatharth Mahesh Sant, this model is a causal language model fine-tuned from Meta's LLAMA2-7B to specifically handle queries and tasks associated with 10K financial reports. It is designed to assist financial analysts and stakeholders by providing detailed, accurate answers to inquiries about company performances, financial standings, and other key metrics contained within 10K reports.
- **Developed by:** Yatharth Mahesh Sant
- **Model type:** Causal LM
- **Language(s) (NLP):** English
- **Finetuned from:** meta/llama2-7b
- **Repository:** [Meta LLAMA2-7B](https://huggingface.co/meta-llama/Llama-2-7b)
## Uses
### Intended Use
This model is meant to be used as a task-oriented bot to interact with users querying about details in 10K financial reports, enhancing the efficiency of financial analysis and decision-making processes.
### Direct Use
The model can directly answer questions from financial reports, serving as an automated assistant to financial analysts, investors, and regulatory authorities who require quick, reliable interpretations of financial data.
### Downstream Use
The model can be integrated into financial analysis software, used to power internal data review tools in corporations, or serve as a support system in investor relations departments to automate responses to common shareholder inquiries.
### Out-of-Scope Use
This model is not designed for non-financial texts or languages other than English. It may not perform well in informal conversational settings or handle off-topic inquiries effectively.
## Bias, Risks, and Limitations
The model's performance and responses are based on the data it was trained on, which primarily includes structured financial texts. As such, it may inherit biases from this data or fail to comprehend nuanced questions not directly related to financial reporting.
### Recommendations
It is recommended that responses generated by this model be reviewed by a qualified financial analyst to confirm their accuracy before being used in critical decision-making processes.
## How to Get Started with the Model
To get started with this model, please refer to the specific deployment guides and API documentation provided in the repository linked above.
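As a stopgap, a minimal sketch assuming standard causal-LM loading (the repository guides referenced above remain the authoritative instructions):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline

model_id = "yatharth97/llama2-7b"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

chat = pipeline("text-generation", model=model, tokenizer=tokenizer)
prompt = "According to the 10K report, what was the company's total revenue last year?"
print(chat(prompt, max_new_tokens=200)[0]["generated_text"])
```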
## Training Details
### Training Data
The model was fine-tuned on a comprehensive dataset comprising several years' worth of 10K reports from companies across various industries, annotated for key financial metrics and queries.
### Training Procedure
#### Preprocessing
Training data was preprocessed to normalize financial terminology and remove any non-relevant sections of the reports, focusing on the sections most pertinent to common queries.
#### Training Hyperparameters
The model was trained using a learning rate of 5e-5 with a batch size of 32 over 4 epochs, employing a transformer-based architecture optimized for natural language understanding tasks.
## Evaluation
### Testing Data
The model was evaluated on a separate validation set consisting of annotated 10K reports not seen during training to ensure it can generalize across different texts and query types.
### Metrics
Evaluation metrics included accuracy, F1 score, and a custom metric for response relevance to financial queries.
## Technical Specifications
### Model Architecture
The model employs a transformer-based architecture, leveraging attention mechanisms to focus on relevant parts of the text when generating responses.
### Compute Infrastructure
Training was conducted on cloud-based GPUs with support for high-throughput training sessions.
|
oneonlee/Llama2-7b-alpaca-Q3-ep2
|
oneonlee
| 2024-06-10T15:39:30Z | 5 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-06-10T15:34:52Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
MattBoraske/llama-2-13b-chat-reddit-AITA-benign-consenting
|
MattBoraske
| 2024-06-10T15:37:40Z | 5 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-06-10T15:28:48Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
kokodio/my_awesome_opus_books_model
|
kokodio
| 2024-06-10T15:35:28Z | 110 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"t5",
"text2text-generation",
"generated_from_trainer",
"base_model:google-t5/t5-small",
"base_model:finetune:google-t5/t5-small",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2024-06-10T08:03:55Z |
---
license: apache-2.0
base_model: google-t5/t5-small
tags:
- generated_from_trainer
metrics:
- bleu
model-index:
- name: my_awesome_opus_books_model
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# my_awesome_opus_books_model
This model is a fine-tuned version of [google-t5/t5-small](https://huggingface.co/google-t5/t5-small) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0592
- Bleu: 10.0783
- Gen Len: 16.4672
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training; a configuration sketch follows the list:
- learning_rate: 0.001
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
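For reference, the list above maps onto `Seq2SeqTrainingArguments` roughly as follows. This is a hedged reconstruction, not the exact script that produced the run; the `output_dir` is hypothetical.

```python
# Hedged reconstruction of the configuration above; output_dir is hypothetical.
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="my_awesome_opus_books_model",
    learning_rate=1e-3,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    gradient_accumulation_steps=8,  # 16 * 8 = total train batch size of 128
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=1,
)
```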
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len |
|:-------------:|:------:|:----:|:---------------:|:------:|:-------:|
| 1.3998 | 0.0131 | 100 | 1.1909 | 9.0438 | 16.6128 |
| 1.3745 | 0.0262 | 200 | 1.1892 | 8.9685 | 16.7342 |
| 1.3584 | 0.0393 | 300 | 1.1884 | 8.9498 | 16.6153 |
| 1.3615 | 0.0525 | 400 | 1.1913 | 8.94 | 16.5859 |
| 1.3417 | 0.0656 | 500 | 1.1818 | 8.8506 | 16.6169 |
| 1.3459 | 0.0787 | 600 | 1.1812 | 9.1565 | 16.6316 |
| 1.35 | 0.0918 | 700 | 1.1819 | 8.9922 | 16.5945 |
| 1.3244 | 0.1049 | 800 | 1.1749 | 8.9409 | 16.6778 |
| 1.3282 | 0.1180 | 900 | 1.1690 | 8.9618 | 16.5828 |
| 1.3198 | 0.1311 | 1000 | 1.1703 | 9.1664 | 16.6026 |
| 1.3359 | 0.1443 | 1100 | 1.1685 | 8.978 | 16.6677 |
| 1.3177 | 0.1574 | 1200 | 1.1654 | 8.9768 | 16.6347 |
| 1.3334 | 0.1705 | 1300 | 1.1615 | 8.9667 | 16.6148 |
| 1.3192 | 0.1836 | 1400 | 1.1635 | 9.1455 | 16.5879 |
| 1.315 | 0.1967 | 1500 | 1.1618 | 8.97 | 16.5452 |
| 1.309 | 0.2098 | 1600 | 1.1606 | 9.1667 | 16.6367 |
| 1.3052 | 0.2229 | 1700 | 1.1613 | 8.962 | 16.6047 |
| 1.3006 | 0.2361 | 1800 | 1.1535 | 9.042 | 16.6484 |
| 1.2999 | 0.2492 | 1900 | 1.1560 | 8.977 | 16.5513 |
| 1.2939 | 0.2623 | 2000 | 1.1553 | 9.0578 | 16.5996 |
| 1.3079 | 0.2754 | 2100 | 1.1505 | 9.1548 | 16.6438 |
| 1.3086 | 0.2885 | 2200 | 1.1521 | 8.9797 | 16.5493 |
| 1.2993 | 0.3016 | 2300 | 1.1498 | 9.1659 | 16.5727 |
| 1.2963 | 0.3147 | 2400 | 1.1454 | 9.1355 | 16.532 |
| 1.2894 | 0.3279 | 2500 | 1.1423 | 9.2378 | 16.5803 |
| 1.2914 | 0.3410 | 2600 | 1.1425 | 9.3786 | 16.6011 |
| 1.2898 | 0.3541 | 2700 | 1.1447 | 9.2694 | 16.5112 |
| 1.2883 | 0.3672 | 2800 | 1.1446 | 9.2671 | 16.561 |
| 1.2796 | 0.3803 | 2900 | 1.1407 | 9.3267 | 16.5528 |
| 1.2854 | 0.3934 | 3000 | 1.1403 | 9.1921 | 16.5838 |
| 1.2657 | 0.4065 | 3100 | 1.1375 | 9.1904 | 16.5727 |
| 1.2729 | 0.4197 | 3200 | 1.1396 | 9.1816 | 16.596 |
| 1.2782 | 0.4328 | 3300 | 1.1382 | 9.3068 | 16.5503 |
| 1.2784 | 0.4459 | 3400 | 1.1345 | 9.2616 | 16.5168 |
| 1.2687 | 0.4590 | 3500 | 1.1333 | 9.2731 | 16.5569 |
| 1.2802 | 0.4721 | 3600 | 1.1285 | 9.2272 | 16.5772 |
| 1.2693 | 0.4852 | 3700 | 1.1304 | 9.3535 | 16.5645 |
| 1.279 | 0.4983 | 3800 | 1.1343 | 9.3037 | 16.565 |
| 1.2678 | 0.5115 | 3900 | 1.1306 | 9.3029 | 16.6118 |
| 1.2579 | 0.5246 | 4000 | 1.1318 | 9.3173 | 16.6448 |
| 1.262 | 0.5377 | 4100 | 1.1282 | 9.3084 | 16.6199 |
| 1.2778 | 0.5508 | 4200 | 1.1258 | 9.4782 | 16.6032 |
| 1.2567 | 0.5639 | 4300 | 1.1246 | 9.3401 | 16.5965 |
| 1.2425 | 0.5770 | 4400 | 1.1293 | 9.4245 | 16.5671 |
| 1.2593 | 0.5901 | 4500 | 1.1228 | 9.2466 | 16.6037 |
| 1.2591 | 0.6033 | 4600 | 1.1220 | 9.3294 | 16.5925 |
| 1.2661 | 0.6164 | 4700 | 1.1255 | 9.333 | 16.5361 |
| 1.2446 | 0.6295 | 4800 | 1.1235 | 9.3146 | 16.5676 |
| 1.2563 | 0.6426 | 4900 | 1.1205 | 9.3765 | 16.5661 |
| 1.2416 | 0.6557 | 5000 | 1.1188 | 9.3549 | 16.5849 |
| 1.2605 | 0.6688 | 5100 | 1.1187 | 9.313 | 16.5767 |
| 1.253 | 0.6819 | 5200 | 1.1191 | 9.24 | 16.5407 |
| 1.2429 | 0.6951 | 5300 | 1.1178 | 9.1666 | 16.5549 |
| 1.2587 | 0.7082 | 5400 | 1.1167 | 9.26 | 16.5513 |
| 1.2432 | 0.7213 | 5500 | 1.1135 | 9.2584 | 16.5381 |
| 1.2422 | 0.7344 | 5600 | 1.1137 | 9.3422 | 16.5752 |
| 1.2581 | 0.7475 | 5700 | 1.1146 | 9.3159 | 16.5767 |
| 1.2451 | 0.7606 | 5800 | 1.1142 | 9.278 | 16.534 |
| 1.25 | 0.7737 | 5900 | 1.1140 | 9.3551 | 16.596 |
| 1.2435 | 0.7869 | 6000 | 1.1117 | 9.3174 | 16.561 |
| 1.2452 | 0.8000 | 6100 | 1.1112 | 9.3823 | 16.5706 |
| 1.2344 | 0.8131 | 6200 | 1.1120 | 9.3922 | 16.5508 |
| 1.2231 | 0.8262 | 6300 | 1.1092 | 9.3544 | 16.532 |
| 1.2449 | 0.8393 | 6400 | 1.1071 | 9.3757 | 16.5534 |
| 1.2154 | 0.8524 | 6500 | 1.1087 | 9.3746 | 16.5366 |
| 1.236 | 0.8655 | 6600 | 1.1083 | 9.3719 | 16.5554 |
| 1.2355 | 0.8787 | 6700 | 1.1088 | 9.4179 | 16.5701 |
| 1.2403 | 0.8918 | 6800 | 1.1079 | 9.3163 | 16.5407 |
| 1.2213 | 0.9049 | 6900 | 1.1062 | 9.3422 | 16.5605 |
| 1.2315 | 0.9180 | 7000 | 1.1067 | 9.4145 | 16.5615 |
| 1.2217 | 0.9311 | 7100 | 1.1062 | 9.4026 | 16.5452 |
| 1.2418 | 0.9442 | 7200 | 1.1053 | 9.3595 | 16.5564 |
| 1.2181 | 0.9573 | 7300 | 1.1058 | 9.3921 | 16.5737 |
| 1.214 | 0.9705 | 7400 | 1.1051 | 9.4053 | 16.5671 |
| 1.2135 | 0.9836 | 7500 | 1.1054 | 9.377 | 16.5615 |
| 1.2327 | 0.9967 | 7600 | 1.1051 | 9.3944 | 16.5625 |
### Framework versions
- Transformers 4.41.2
- Pytorch 2.3.1+cu121
- Datasets 2.19.2
- Tokenizers 0.19.1
|
medicalai/radfound
|
medicalai
| 2024-06-10T15:34:19Z | 0 | 0 | null |
[
"license:apache-2.0",
"region:us"
] | null | 2024-06-10T08:09:44Z |
---
license: apache-2.0
---
|
Reihaneh/wav2vec2_fy_common_voice_36
|
Reihaneh
| 2024-06-10T15:34:09Z | 0 | 0 |
transformers
|
[
"transformers",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-06-10T15:34:08Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
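Pending the official snippet, here is a minimal sketch under the assumption (inferred from the repository name only) that this is a wav2vec2 CTC checkpoint for Frisian speech recognition; the audio file name is hypothetical.

```python
# Hedged sketch: assumes a wav2vec2 CTC checkpoint (inferred from the repo name).
import torch
import soundfile as sf
from transformers import AutoModelForCTC, AutoProcessor

repo_id = "Reihaneh/wav2vec2_fy_common_voice_36"
processor = AutoProcessor.from_pretrained(repo_id)
model = AutoModelForCTC.from_pretrained(repo_id)

speech, sampling_rate = sf.read("sample.wav")  # hypothetical 16 kHz mono clip
inputs = processor(speech, sampling_rate=sampling_rate, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
print(processor.batch_decode(torch.argmax(logits, dim=-1))[0])
```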
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
DBangshu/New_GPT2_6
|
DBangshu
| 2024-06-10T15:27:04Z | 130 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gpt2",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-06-09T21:50:34Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
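Pending the official snippet, a minimal sketch using the `text-generation` pipeline (the tags identify this as a GPT-2 causal LM):

```python
from transformers import pipeline

generator = pipeline("text-generation", model="DBangshu/New_GPT2_6")
print(generator("Once upon a time", max_new_tokens=40)[0]["generated_text"])
```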
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
DBangshu/Base_New_GPT2_6
|
DBangshu
| 2024-06-10T15:26:58Z | 200 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gpt2",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-06-10T15:26:39Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
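Pending the official snippet, the checkpoint can be loaded like any GPT-2 causal LM; the prompt below is purely illustrative.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "DBangshu/Base_New_GPT2_6"
tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(repo_id)

inputs = tokenizer("The quick brown fox", return_tensors="pt")
output_ids = model.generate(**inputs, max_new_tokens=40, do_sample=True)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```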
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
mnlp-2024/mcqa-gemma-lora
|
mnlp-2024
| 2024-06-10T15:26:39Z | 104 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gemma",
"text-generation",
"trl",
"sft",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-06-10T15:20:30Z |
---
library_name: transformers
tags:
- trl
- sft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
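Pending the official snippet, a hedged sketch: the tags mark this as a conversational Gemma model fine-tuned with SFT, presumably for multiple-choice QA, so the example routes a hypothetical question through the tokenizer's chat template. The exact prompt format used in training is not documented.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "mnlp-2024/mcqa-gemma-lora"
tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(repo_id, device_map="auto")

# Hypothetical multiple-choice question; the training prompt format is undocumented.
question = (
    "What is the time complexity of binary search?\n"
    "A. O(n)  B. O(log n)  C. O(n log n)  D. O(1)\n"
    "Answer:"
)
prompt = tokenizer.apply_chat_template(
    [{"role": "user", "content": question}],
    tokenize=False,
    add_generation_prompt=True,
)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output_ids = model.generate(**inputs, max_new_tokens=8)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```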
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
dan-kwiat/gguf-test
|
dan-kwiat
| 2024-06-10T15:26:00Z | 4 | 0 |
transformers
|
[
"transformers",
"gguf",
"llama",
"text-generation-inference",
"unsloth",
"en",
"base_model:unsloth/llama-3-8b-Instruct-bnb-4bit",
"base_model:quantized:unsloth/llama-3-8b-Instruct-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-06-10T15:20:02Z |
---
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- gguf
base_model: unsloth/llama-3-8b-Instruct-bnb-4bit
---
# Uploaded model
- **Developed by:** dan-kwiat
- **License:** apache-2.0
- **Finetuned from model:** unsloth/llama-3-8b-Instruct-bnb-4bit
This Llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
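Since the repo ships GGUF weights, one way to run it locally is `llama-cpp-python`. This is a hedged sketch: the glob filename must match an actual `.gguf` file in the repo's Files & versions tab.

```python
# Hedged sketch; requires `pip install llama-cpp-python`.
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="dan-kwiat/gguf-test",
    filename="*.gguf",  # glob resolved against the repo; adjust to the real filename
    n_ctx=4096,
)
out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Say hello in one sentence."}],
    max_tokens=32,
)
print(out["choices"][0]["message"]["content"])
```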
|
YongjieNiu/lora-adl-cat-100-1-500
|
YongjieNiu
| 2024-06-10T15:25:43Z | 1 | 0 |
diffusers
|
[
"diffusers",
"text-to-image",
"diffusers-training",
"lora",
"template:sd-lora",
"stable-diffusion-xl",
"stable-diffusion-xl-diffusers",
"license:openrail++",
"region:us"
] |
text-to-image
| 2024-06-10T07:40:56Z |
---
license: openrail++
library_name: diffusers
tags:
- text-to-image
- diffusers-training
- diffusers
- lora
- template:sd-lora
- stable-diffusion-xl
- stable-diffusion-xl-diffusers
base_model: SDXL_model
instance_prompt: a photo of adl cat
widget:
- text: a photo of adl cat by the sea
output:
url: image_0.png
- text: a photo of adl cat by the sea
output:
url: image_1.png
- text: a photo of adl cat by the sea
output:
url: image_2.png
- text: a photo of adl cat by the sea
output:
url: image_3.png
---
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# SDXL LoRA DreamBooth - YongjieNiu/lora-adl-cat-100-1-500
<Gallery />
## Model description
These are YongjieNiu/lora-adl-cat-100-1-500 LoRA adaptation weights for SDXL_model.
The weights were trained using [DreamBooth](https://dreambooth.github.io/).
LoRA for the text encoder was enabled: False.
Special VAE used for training: VAE.
## Trigger words
You should use `a photo of adl cat` to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](https://huggingface.co/YongjieNiu/lora-adl-cat-100-1-500/tree/main) them in the Files & versions tab.
## Intended uses & limitations
#### How to use
```python
# TODO: add an example code snippet for running this diffusion pipeline
```
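Until the TODO above is filled in by the author, the following hedged sketch shows the usual diffusers recipe. Note that the card lists the base only as "SDXL_model", so `stabilityai/stable-diffusion-xl-base-1.0` is an assumption.

```python
import torch
from diffusers import StableDiffusionXLPipeline

# Assumption: the standard SDXL base checkpoint; the card only says "SDXL_model".
pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")
pipe.load_lora_weights("YongjieNiu/lora-adl-cat-100-1-500")

image = pipe("a photo of adl cat by the sea", num_inference_steps=30).images[0]
image.save("adl_cat.png")
```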
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training details
[TODO: describe the data used to train the model]
|
kunalksutar/flan-t5-large-multipurpose
|
kunalksutar
| 2024-06-10T15:25:36Z | 106 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"t5",
"text2text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2024-06-08T12:09:01Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
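Pending the official snippet, a minimal sketch with the `text2text-generation` pipeline; the FLAN-style `summarize:` task prefix is an assumption based on the base model family.

```python
from transformers import pipeline

pipe = pipeline("text2text-generation", model="kunalksutar/flan-t5-large-multipurpose")
print(pipe("summarize: FLAN-T5 is an instruction-tuned variant of T5 ...")[0]["generated_text"])
```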
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
MohammadKarami/ernieMtask2
|
MohammadKarami
| 2024-06-10T15:23:36Z | 93 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"ernie_m",
"text-classification",
"generated_from_trainer",
"base_model:susnato/ernie-m-base_pytorch",
"base_model:finetune:susnato/ernie-m-base_pytorch",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-06-10T15:22:50Z |
---
license: apache-2.0
base_model: susnato/ernie-m-base_pytorch
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: Model
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Model
This model is a fine-tuned version of [susnato/ernie-m-base_pytorch](https://huggingface.co/susnato/ernie-m-base_pytorch) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.3594
- F1: 0.8030
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 8
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 0.818 | 1.0 | 1469 | 0.7237 | 0.7282 |
| 0.5339 | 2.0 | 2938 | 0.6703 | 0.7587 |
| 0.3487 | 3.0 | 4407 | 0.7401 | 0.7683 |
| 0.2386 | 4.0 | 5876 | 0.7384 | 0.7901 |
| 0.143 | 5.0 | 7345 | 1.0072 | 0.7736 |
| 0.0819 | 6.0 | 8814 | 1.0898 | 0.7979 |
| 0.0438 | 7.0 | 10283 | 1.3199 | 0.7948 |
| 0.0166 | 8.0 | 11752 | 1.3594 | 0.8030 |
### Framework versions
- Transformers 4.41.2
- Pytorch 2.3.0+cu121
- Datasets 2.19.2
- Tokenizers 0.19.1
|
miguelpezo/segundaprueba
|
miguelpezo
| 2024-06-10T15:23:11Z | 162 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"distilbert",
"text-classification",
"generated_from_trainer",
"base_model:distilbert/distilbert-base-multilingual-cased",
"base_model:finetune:distilbert/distilbert-base-multilingual-cased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-06-10T15:01:06Z |
---
license: apache-2.0
base_model: distilbert-base-multilingual-cased
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: segundaprueba
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# segundaprueba
This model is a fine-tuned version of [distilbert-base-multilingual-cased](https://huggingface.co/distilbert-base-multilingual-cased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.7005
- Accuracy: 0.455
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:------:|:----:|:---------------:|:--------:|
| 1.183 | 0.16 | 20 | 1.4843 | 0.35 |
| 1.34 | 0.32 | 40 | 1.4466 | 0.35 |
| 1.265 | 0.48 | 60 | 1.3036 | 0.39 |
| 1.3107 | 0.64 | 80 | 1.2656 | 0.415 |
| 1.0969 | 0.8 | 100 | 1.2302 | 0.455 |
| 1.1958 | 0.96 | 120 | 1.2676 | 0.465 |
| 1.0183 | 1.12 | 140 | 1.2523 | 0.45 |
| 0.8682 | 1.28 | 160 | 1.2620 | 0.495 |
| 0.9 | 1.44 | 180 | 1.2779 | 0.485 |
| 0.9613 | 1.6 | 200 | 1.5713 | 0.36 |
| 0.9667 | 1.76 | 220 | 1.3170 | 0.465 |
| 1.0284 | 1.92 | 240 | 1.2343 | 0.5 |
| 0.8681 | 2.08 | 260 | 1.2968 | 0.49 |
| 0.7134 | 2.24 | 280 | 1.4032 | 0.44 |
| 0.7311 | 2.4 | 300 | 1.3624 | 0.46 |
| 0.61 | 2.56 | 320 | 1.4416 | 0.44 |
| 0.7033 | 2.72 | 340 | 1.5110 | 0.46 |
| 0.5881 | 2.88 | 360 | 1.3926 | 0.475 |
| 0.6666 | 3.04 | 380 | 1.3896 | 0.49 |
| 0.4299 | 3.2 | 400 | 1.4787 | 0.47 |
| 0.3613 | 3.36 | 420 | 1.5145 | 0.48 |
| 0.4714 | 3.52 | 440 | 1.5547 | 0.46 |
| 0.4698 | 3.68 | 460 | 1.5584 | 0.44 |
| 0.3191 | 3.84 | 480 | 1.5748 | 0.475 |
| 0.4044 | 4.0 | 500 | 1.7353 | 0.44 |
| 0.2275 | 4.16 | 520 | 1.6115 | 0.46 |
| 0.3171 | 4.32 | 540 | 1.6326 | 0.46 |
| 0.2005 | 4.48 | 560 | 1.6569 | 0.47 |
| 0.3165 | 4.64 | 580 | 1.6955 | 0.46 |
| 0.2543 | 4.8 | 600 | 1.6887 | 0.465 |
| 0.186 | 4.96 | 620 | 1.7005 | 0.455 |
### Framework versions
- Transformers 4.41.2
- Pytorch 2.3.0+cu121
- Datasets 2.19.2
- Tokenizers 0.19.1
|
tsavage68/UTI2_L3_50steps_1e6rate_05beta_CSFTDPO
|
tsavage68
| 2024-06-10T15:22:34Z | 4 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"trl",
"dpo",
"generated_from_trainer",
"conversational",
"base_model:tsavage68/UTI_L3_1000steps_1e5rate_SFT",
"base_model:finetune:tsavage68/UTI_L3_1000steps_1e5rate_SFT",
"license:llama3",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-06-10T15:17:27Z |
---
license: llama3
base_model: tsavage68/UTI_L3_1000steps_1e5rate_SFT
tags:
- trl
- dpo
- generated_from_trainer
model-index:
- name: UTI2_L3_50steps_1e6rate_05beta_CSFTDPO
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# UTI2_L3_50steps_1e6rate_05beta_CSFTDPO
This model is a fine-tuned version of [tsavage68/UTI_L3_1000steps_1e5rate_SFT](https://huggingface.co/tsavage68/UTI_L3_1000steps_1e5rate_SFT) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0261
- Rewards/chosen: 2.3344
- Rewards/rejected: -5.4705
- Rewards/accuracies: 0.9800
- Rewards/margins: 7.8050
- Logps/rejected: -54.2106
- Logps/chosen: -24.5560
- Logits/rejected: -1.1516
- Logits/chosen: -1.1414
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (a `DPOConfig` sketch follows the list):
- learning_rate: 1e-06
- train_batch_size: 2
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 4
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 100
- training_steps: 50
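As a reading aid, the list above corresponds roughly to the following TRL `DPOConfig`. This is a hedged reconstruction; `beta=0.5` is inferred from the model name ("05beta"), not stated in the card.

```python
# Hedged reconstruction (TRL >= 0.9); beta is inferred from the model name.
from trl import DPOConfig

dpo_config = DPOConfig(
    output_dir="UTI2_L3_50steps_1e6rate_05beta_CSFTDPO",
    learning_rate=1e-6,
    per_device_train_batch_size=2,
    per_device_eval_batch_size=1,
    gradient_accumulation_steps=2,  # total train batch size of 4
    lr_scheduler_type="cosine",
    warmup_steps=100,
    max_steps=50,
    beta=0.5,
    seed=42,
)
```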
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rewards/chosen | Rewards/rejected | Rewards/accuracies | Rewards/margins | Logps/rejected | Logps/chosen | Logits/rejected | Logits/chosen |
|:-------------:|:------:|:----:|:---------------:|:--------------:|:----------------:|:------------------:|:---------------:|:--------------:|:------------:|:---------------:|:-------------:|
| 0.5446 | 0.3333 | 25 | 0.2409 | 0.7934 | -0.6030 | 0.9800 | 1.3964 | -44.4754 | -27.6381 | -1.1424 | -1.1365 |
| 0.0009 | 0.6667 | 50 | 0.0261 | 2.3344 | -5.4705 | 0.9800 | 7.8050 | -54.2106 | -24.5560 | -1.1516 | -1.1414 |
### Framework versions
- Transformers 4.41.2
- Pytorch 2.0.0+cu117
- Datasets 2.19.2
- Tokenizers 0.19.1
|
oneonlee/Llama2-7b-alpaca-Q2-ep1
|
oneonlee
| 2024-06-10T15:18:40Z | 4 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-06-10T15:14:20Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
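Pending the official snippet, a hedged sketch; the Alpaca-style instruction prompt is inferred from the repository name and may not match the exact training format.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "oneonlee/Llama2-7b-alpaca-Q2-ep1"
tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(repo_id, device_map="auto")

# Assumption: Alpaca-style prompting, inferred from the repo name.
prompt = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\nList three uses of a paperclip.\n\n### Response:\n"
)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output_ids = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```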
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
ka05ar/DeepSeekMath-7B-Ins-Bn_Math_v2
|
ka05ar
| 2024-06-10T15:17:23Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"unsloth",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-06-10T15:14:18Z |
---
library_name: transformers
tags:
- unsloth
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
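Pending the official snippet, a hedged sketch via Unsloth (the repo is tagged `unsloth`). Whether this repo holds merged weights or LoRA adapters is not documented, so treat this only as a starting point.

```python
# Hedged sketch; requires `pip install unsloth` and a CUDA GPU.
from unsloth import FastLanguageModel

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="ka05ar/DeepSeekMath-7B-Ins-Bn_Math_v2",
    max_seq_length=2048,
    load_in_4bit=True,
)
FastLanguageModel.for_inference(model)  # switch to fast generation mode

inputs = tokenizer("Solve step by step: 12 * 8 = ?", return_tensors="pt").to("cuda")
output_ids = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```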
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
MarPla/SocialMainSectionsPegasusLargeModel
|
MarPla
| 2024-06-10T15:16:43Z | 83 | 0 |
transformers
|
[
"transformers",
"safetensors",
"pegasus",
"text2text-generation",
"generated_from_trainer",
"base_model:google/pegasus-large",
"base_model:finetune:google/pegasus-large",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2024-06-10T15:15:27Z |
---
base_model: google/pegasus-large
tags:
- generated_from_trainer
metrics:
- rouge
- bleu
model-index:
- name: SocialMainSectionsPegasusLargeModel
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# SocialMainSectionsPegasusLargeModel
This model is a fine-tuned version of [google/pegasus-large](https://huggingface.co/google/pegasus-large) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 5.6492
- Rouge1: 44.7384
- Rouge2: 14.7302
- Rougel: 30.3839
- Rougelsum: 40.5448
- Bertscore Precision: 77.1616
- Bertscore Recall: 81.7496
- Bertscore F1: 79.3809
- Bleu: 0.1156
- Gen Len: 190.8850
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Bertscore Precision | Bertscore Recall | Bertscore F1 | Bleu | Gen Len |
|:-------------:|:------:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------------------:|:----------------:|:------------:|:------:|:--------:|
| 5.8481 | 0.6661 | 500 | 5.6492 | 44.7384 | 14.7302 | 30.3839 | 40.5448 | 77.1616 | 81.7496 | 79.3809 | 0.1156 | 190.8850 |
### Framework versions
- Transformers 4.41.2
- Pytorch 2.3.1+cu121
- Datasets 2.19.2
- Tokenizers 0.19.1
|
JeanM45/0f554762-8b
|
JeanM45
| 2024-06-10T15:16:39Z | 77 | 0 |
transformers
|
[
"transformers",
"safetensors",
"phi",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"8-bit",
"gptq",
"region:us"
] |
text-generation
| 2024-06-10T15:15:03Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
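Pending the author's own snippet, a minimal sketch assuming the GPTQ checkpoint loads through the standard auto classes (this requires the `optimum`/`auto-gptq` integration to be installed; the prompt is a placeholder):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("JeanM45/0f554762-8b")
# assumption: the 8-bit GPTQ weights are picked up automatically from the repo config
model = AutoModelForCausalLM.from_pretrained("JeanM45/0f554762-8b", device_map="auto")

inputs = tokenizer("Hello,", return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=32)[0], skip_special_tokens=True))
```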
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
M2LabOrg/whisper-small-de
|
M2LabOrg
| 2024-06-10T15:15:07Z | 92 | 1 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"whisper",
"automatic-speech-recognition",
"generated_from_trainer",
"de",
"dataset:mozilla-foundation/common_voice_11_0",
"base_model:openai/whisper-small",
"base_model:finetune:openai/whisper-small",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2024-06-10T07:08:28Z |
---
language:
- de
license: apache-2.0
base_model: openai/whisper-small
tags:
- generated_from_trainer
datasets:
- mozilla-foundation/common_voice_11_0
metrics:
- wer
model-index:
- name: Whisper small de - Michel Mesquita
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice 11.0
type: mozilla-foundation/common_voice_11_0
config: de
split: test
args: 'config: de, split: test'
metrics:
- name: Wer
type: wer
value: 13.91364694035842
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Whisper small de - Michel Mesquita
This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the Common Voice 11.0 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2354
- Wer: 13.9136
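A minimal usage sketch, assuming the standard 🤗 `pipeline` API (the audio path is a placeholder):

```python
from transformers import pipeline

# load the fine-tuned checkpoint for German speech recognition
asr = pipeline("automatic-speech-recognition", model="M2LabOrg/whisper-small-de")

print(asr("sample_de.wav")["text"])  # "sample_de.wav" is a placeholder audio file
```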
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 4000
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:-------:|
| 0.2149 | 0.25 | 1000 | 0.3022 | 17.1577 |
| 0.1874 | 0.5 | 2000 | 0.3181 | 18.8021 |
| 0.1776 | 0.75 | 3000 | 0.2460 | 14.4770 |
| 0.1926 | 1.0 | 4000 | 0.2354 | 13.9136 |
### Framework versions
- Transformers 4.41.2
- Pytorch 2.3.0+cu121
- Datasets 2.19.2
- Tokenizers 0.19.1
|
iloncka/exp_5_old_bg-subs_1_v_5_vit_tiny_r_s16_p8_224.augreg_in21k_ft_in1k_ep_60
|
iloncka
| 2024-06-10T15:14:29Z | 0 | 0 |
fastai
|
[
"fastai",
"region:us"
] | null | 2024-06-04T13:55:04Z |
---
tags:
- fastai
---
# Amazing!
🥳 Congratulations on hosting your fastai model on the Hugging Face Hub!
# Some next steps
1. Fill out this model card with more information (see the template below and the [documentation here](https://huggingface.co/docs/hub/model-repos))!
2. Create a demo in Gradio or Streamlit using 🤗 Spaces ([documentation here](https://huggingface.co/docs/hub/spaces)).
3. Join the fastai community on the [Fastai Discord](https://discord.com/invite/YKrxeNn)!
Greetings fellow fastlearner 🤝! Don't forget to delete this content from your model card.
---
# Model card
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
|
oneonlee/Llama2-7b-alpaca-Q1-ep3
|
oneonlee
| 2024-06-10T15:13:44Z | 6 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-06-10T15:09:27Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
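Pending the author's own snippet, a minimal sketch using the standard causal-LM auto classes (the Alpaca-style prompt format below is an assumption based on the repo name):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("oneonlee/Llama2-7b-alpaca-Q1-ep3")
model = AutoModelForCausalLM.from_pretrained("oneonlee/Llama2-7b-alpaca-Q1-ep3", device_map="auto")

# assumption: the model expects Alpaca-style instruction prompts
prompt = "### Instruction:\nExplain overfitting in one sentence.\n\n### Response:\n"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=64)[0], skip_special_tokens=True))
```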
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
baf2b252097d46299a/loss_testing_727ce689a1594b9081c02678b92d46d6
|
baf2b252097d46299a
| 2024-06-10T15:11:01Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-06-10T15:10:07Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
daniilxcode/ppo-SnowballTarget
|
daniilxcode
| 2024-06-10T15:10:07Z | 18 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"SnowballTarget",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-SnowballTarget",
"region:us"
] |
reinforcement-learning
| 2024-06-10T15:10:00Z |
---
library_name: ml-agents
tags:
- SnowballTarget
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SnowballTarget
---
# **ppo** Agent playing **SnowballTarget**
This is a trained model of a **ppo** agent playing **SnowballTarget**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: daniilxcode/ppo-SnowballTarget
3. Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
oneonlee/Llama2-7b-alpaca-Q1-ep2
|
oneonlee
| 2024-06-10T15:08:56Z | 7 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-06-10T15:04:22Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
oneonlee/Llama2-7b-alpaca-Q1-ep1
|
oneonlee
| 2024-06-10T15:03:48Z | 6 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-06-10T14:59:27Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
sharifMunna/munna_bhai_mbbs_model_08_12_2
|
sharifMunna
| 2024-06-10T15:02:35Z | 6 | 0 |
transformers
|
[
"transformers",
"safetensors",
"t5",
"text2text-generation",
"generated_from_trainer",
"base_model:csebuetnlp/banglat5",
"base_model:finetune:csebuetnlp/banglat5",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2024-06-10T10:33:37Z |
---
base_model: csebuetnlp/banglat5
tags:
- generated_from_trainer
model-index:
- name: munna_bhai_mbbs_model_08_12_2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# munna_bhai_mbbs_model_08_12_2
This model is a fine-tuned version of [csebuetnlp/banglat5](https://huggingface.co/csebuetnlp/banglat5) on the None dataset.
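A minimal inference sketch, assuming the standard 🤗 `pipeline` text2text API (the Bangla input is a placeholder):

```python
from transformers import pipeline

t2t = pipeline("text2text-generation", model="sharifMunna/munna_bhai_mbbs_model_08_12_2")
print(t2t("এখানে ইনপুট টেক্সট দিন")[0]["generated_text"])  # placeholder Bangla input
```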
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 12
### Training results
### Framework versions
- Transformers 4.39.3
- Pytorch 2.1.2
- Datasets 2.19.2
- Tokenizers 0.15.2
|
Negus/ppo-LunarLander-v2
|
Negus
| 2024-06-10T15:00:48Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2024-06-10T15:00:28Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 227.25 +/- 46.49
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal sketch (the checkpoint filename is an assumption, following the usual `<algo>-<env>.zip` convention):

```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# assumption: the checkpoint was pushed under the default filename
checkpoint = load_from_hub(repo_id="Negus/ppo-LunarLander-v2", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
kaylaisya/absa
|
kaylaisya
| 2024-06-10T15:00:07Z | 0 | 0 | null |
[
"license:apache-2.0",
"region:us"
] | null | 2024-06-10T15:00:07Z |
---
license: apache-2.0
---
|
baf2b252097d46299a/loss_testing_29c64edb08fc49cda6045b42216eb909
|
baf2b252097d46299a
| 2024-06-10T14:59:43Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-06-10T14:59:18Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
DatPySci/mistral7b_rlcd
|
DatPySci
| 2024-06-10T14:59:02Z | 6 | 0 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-06-10T14:35:47Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
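Pending the author's own snippet, a minimal sketch (assuming the tokenizer ships a chat template, per the `conversational` tag):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("DatPySci/mistral7b_rlcd")
model = AutoModelForCausalLM.from_pretrained("DatPySci/mistral7b_rlcd", device_map="auto")

# assumption: a chat template is defined for this checkpoint
messages = [{"role": "user", "content": "Give one tip for writing clear model cards."}]
inputs = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(inputs, max_new_tokens=64)[0], skip_special_tokens=True))
```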
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Sreedev11/olympics_prediction_model
|
Sreedev11
| 2024-06-10T14:57:56Z | 0 | 0 | null |
[
"region:us"
] | null | 2024-06-10T14:48:07Z |
# %%
import numpy as np
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt
from sklearn import preprocessing
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import classification_report,accuracy_score
from sklearn.model_selection import TimeSeriesSplit,train_test_split
from sklearn.cluster import KMeans
import matplotlib
from sklearn.tree import DecisionTreeClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn import metrics
from sklearn.svm import LinearSVC
import pylab as pl
from sklearn.ensemble import RandomForestClassifier
import warnings
warnings.filterwarnings('ignore')
df=pd.read_csv("athlete_events.csv")
# %%
df
# %%
df.head()
# %%
df.info()
# %%
df.describe()
# %%
df.dtypes
# %%
df.ndim
# %%
df.shape
# %%
df.isna().sum()
# %%
# DNW: Did Not Win; missing values in the Medal column are filled with "DNW"
df['Medal'].fillna("DNW",inplace=True)
# %%
df_noc=pd.read_csv("noc_regions.csv")
# %%
df_noc
# %%
df_noc=df_noc.drop("notes",axis=1)
# %%
df_noc
# %%
df_noc.rename(columns={"region":"country"},inplace=True)
# %%
df_noc
# %%
df.sample(4)
# %%
# join both datasets on the NOC column
olympics_merge=df.merge(df_noc,left_on='NOC',right_on='NOC',how='left')
# %%
olympics_merge.sample()
# %%
print(olympics_merge.loc[olympics_merge['country'].isnull(),['NOC', 'Team']].drop_duplicates())
# %%
# Replace the missing Team values for these NOCs:
#   1. SGP - Singapore
#   2. ROT - Refugee Olympic Athletes
#   3. UNK - Unknown
#   4. TUV - Tuvalu
# %%
olympics_merge.loc[olympics_merge['country'].isnull(), ['country']] = olympics_merge['Team']
# %%
olympics_merge
# %%
print(olympics_merge.loc[olympics_merge['country'].isnull(),['NOC', 'Team']].drop_duplicates())
# %%
olympics_merge['country'] = np.where(olympics_merge['NOC']=='SGP', 'Singapore', olympics_merge['country'])
olympics_merge['country'] = np.where(olympics_merge['NOC']=='ROT', 'Refugee Olympic Athletes', olympics_merge['country'])
olympics_merge['country'] = np.where(olympics_merge['NOC']=='UNK', 'Unknown', olympics_merge['country'])
olympics_merge['country'] = np.where(olympics_merge['NOC']=='TUV', 'Tuvalu', olympics_merge['country'])
# %%
olympics_merge
# %%
olympics_merge.drop("Team",axis=1,inplace=True)
# %%
olympics_merge.sample()
# %%
olympics_merge.rename(columns={'country':'Team'},inplace=True)
# %%
olympics_merge.head(2)
# %%
print(olympics_merge.loc[olympics_merge['Team'].isnull(),['NOC', 'Team']].drop_duplicates())
# %%
olympics_merge.isnull().sum()
# %%
for i in ["Age", "Height", "Weight"]:
    sns.histplot(olympics_merge[i], kde=True)
    plt.show()
# %%
for i in ["Age", "Height", "Weight"]:
    olympics_merge[i] = olympics_merge[i].fillna(olympics_merge[i].mean())
# %%
olympics_merge.isnull().sum()
# %%
olympics_merge.info()
# %%
olympics_merge['Sex']=np.where(olympics_merge['Sex']=='M',1,0)
# %%
olympics_merge.sample(2)
# %%
olympics_merge["Medal"].unique()
# %%
olympics_merge['Event'].unique()
# %%
olympics_merge['Sport'].unique()
# %%
olympics_merge1=olympics_merge
# %%
olympics_merge1
# %%
from sklearn.preprocessing import LabelEncoder
le=LabelEncoder()
# %%
olympics_merge1['Medal']=le.fit_transform(olympics_merge1['Medal'])
# %%
olympics_merge1
# %%
olympics_merge1['Medal'].unique()
# %%
summer=olympics_merge1.loc[(olympics_merge1['Year']>1960)&(olympics_merge1['Season']=="Summer"), :]
summer.head(5)
# %%
summer=summer.reset_index()
summer.head(10)
# %%
summer.sample()
# %%
#extracting unique events in a new list
# %%
summerlistunique=summer.Event.unique()
len(summerlistunique)
# %%
summerlistunique
# %%
summer.drop(['Season', 'NOC', 'Games', 'City', 'Year', 'Sport', 'ID', 'Name', 'index'], axis=1, inplace=True)
# %%
summer
# %%
# create numeric label-encoded columns for Team and Event
summer['Team_encode']=le.fit_transform(summer['Team'])
summer['Event_encode']=le.fit_transform(summer['Event'])
# %%
# store each unique team name with its encoded value in a new csv file
TeamKeys=summer[['Team','Team_encode']].copy()
TeamKeys.drop_duplicates(subset="Team",inplace=True)
TeamKeys.to_csv("keystoteam.csv")
# %%
TeamKeys.head(4)
# %%
# store each unique event name with its encoded value in a new csv file
EventKeys=summer[['Event','Event_encode']].copy()
EventKeys.drop_duplicates(subset="Event",inplace=True)
EventKeys.to_csv("keystoevent.csv")
# %%
EventKeys.head(4)
# %%
summer
# %%
summer.drop(['Event'],axis=1,inplace=True)
summer.drop(['Team'],axis=1,inplace=True)
# %%
summer
# %%
y=summer['Medal']
# %%
y
# %%
x=summer.drop("Medal",axis=1)
# %%
x
# %%
X_train, X_test, Y_train, Y_test = train_test_split(x,y,test_size=0.30, random_state=99)
# %%
x
# %%
y
# %%
X_test
# %%
Y_test
# %%
#ALGORITHM 1 LOGISTIC REGRESSION
# %%
lr=LogisticRegression()
lr.fit(X_train,Y_train)
Y_pred=lr.predict(X_test)
sk_report=classification_report(digits=6,y_true=Y_test,y_pred=Y_pred)
print("Accuracy",round(accuracy_score(Y_pred,Y_test)*100,2))
print(sk_report)
print(pd.crosstab(Y_test,Y_pred,rownames=['Actual'],colnames=['Predicted'],margins=True))
# %%
#ALGORITHM 2 DECISION TREE
# %%
decision_tree = DecisionTreeClassifier()
decision_tree.fit(X_train, Y_train)
Y_pred = decision_tree.predict(X_test)
acc_decision_tree1 = round(decision_tree.score(X_test, Y_test) * 100, 2)
sk_report = classification_report(digits=6, y_true=Y_test, y_pred=Y_pred)
print("Accuracy", acc_decision_tree1)
print(sk_report)
### Confusion Matrix
print(pd.crosstab(Y_test, Y_pred,rownames=['Actual'],colnames=['Predicted'],margins=True))
# %%
#ALGORITHM 3 RANDOM FOREST
# %%
random_forest = RandomForestClassifier(n_estimators=200)
random_forest.fit(X_train,Y_train)
Y_pred = random_forest.predict(X_test)
random_forest.score(X_test, Y_test)
acc_random_forest1=round(random_forest.score(X_test, Y_test)*100,2)
sk_report = classification_report(
    digits=6,
    y_true=Y_test,
    y_pred=Y_pred)
print("Accuracy", acc_random_forest1)
print(sk_report)
pd.crosstab(Y_test, Y_pred,rownames=['Actual'],colnames=['Predicted'],margins=True)
# %%
x.sample(5)
# %%
y.sample(5)
# %%
summer.sample(4)
# %%
random_forest.predict([[1,19.0,173.0,70.0,87,163]])
# %%
import pickle
from joblib import dump,load
dump(random_forest,'olympics_model.pkl')
with open(r"Projects\Olympics\olympics_model1.pkl", "wb") as model_file:
    pickle.dump(random_forest, model_file)
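# %%
# Usage sketch: reload the persisted model and score one example row
# (the values below are illustrative) with the feature order used above:
# [Sex, Age, Height, Weight, Team_encode, Event_encode].
loaded_model = load('olympics_model.pkl')
print(loaded_model.predict([[0, 24.0, 165.0, 58.0, 87, 163]]))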
|
sounana/openai-whisper-large-v2-Lastversion
|
sounana
| 2024-06-10T14:56:45Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-06-10T14:56:42Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
cs552-mlp/phi3-mcq
|
cs552-mlp
| 2024-06-10T14:56:30Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:unsloth/Phi-3-mini-4k-instruct-bnb-4bit",
"base_model:adapter:unsloth/Phi-3-mini-4k-instruct-bnb-4bit",
"region:us"
] | null | 2024-06-10T14:55:51Z |
---
library_name: peft
base_model: unsloth/Phi-3-mini-4k-instruct-bnb-4bit
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
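Until a snippet is provided, a minimal, untested sketch of loading this PEFT adapter on top of its pre-quantized base model might look as follows (assumes `peft`, `bitsandbytes`, and a `transformers` release with Phi-3 support; the prompt is a made-up example):

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "unsloth/Phi-3-mini-4k-instruct-bnb-4bit"

# Load the 4-bit base model, then attach this adapter on top of it
base = AutoModelForCausalLM.from_pretrained(base_id, device_map="auto")
model = PeftModel.from_pretrained(base, "cs552-mlp/phi3-mcq")
tokenizer = AutoTokenizer.from_pretrained(base_id)

prompt = "Which planet is known as the Red Planet?"  # made-up MCQ-style prompt
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=32)[0], skip_special_tokens=True))
```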
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.11.1
|
chandrasekhar319/QA_Finetune_BioMistral
|
chandrasekhar319
| 2024-06-10T14:56:25Z | 1 | 0 |
peft
|
[
"peft",
"safetensors",
"trl",
"sft",
"generated_from_trainer",
"base_model:BioMistral/BioMistral-7B",
"base_model:adapter:BioMistral/BioMistral-7B",
"license:apache-2.0",
"region:us"
] | null | 2024-06-10T14:56:12Z |
---
license: apache-2.0
library_name: peft
tags:
- trl
- sft
- generated_from_trainer
base_model: BioMistral/BioMistral-7B
model-index:
- name: QA_Finetune_BioMistral
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# QA_Finetune_BioMistral
This model is a fine-tuned version of [BioMistral/BioMistral-7B](https://huggingface.co/BioMistral/BioMistral-7B) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8514
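A minimal, untested sketch of running the adapter on top of its base model for inference (assumes `peft` and enough memory for the 7B base; the question is a made-up example):

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "BioMistral/BioMistral-7B"

# Load the base model and apply the fine-tuned QA adapter
base = AutoModelForCausalLM.from_pretrained(base_id, device_map="auto", torch_dtype="auto")
model = PeftModel.from_pretrained(base, "chandrasekhar319/QA_Finetune_BioMistral")
tokenizer = AutoTokenizer.from_pretrained(base_id)

question = "What are the common symptoms of iron-deficiency anemia?"  # made-up example
inputs = tokenizer(question, return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=64)[0], skip_special_tokens=True))
```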
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.00025
- train_batch_size: 2
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.03
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.9391 | 0.0619 | 200 | 0.9290 |
| 0.9056 | 0.1238 | 400 | 0.9171 |
| 0.8717 | 0.1858 | 600 | 0.9138 |
| 0.822 | 0.2477 | 800 | 0.9174 |
| 0.8539 | 0.3096 | 1000 | 0.9075 |
| 0.8902 | 0.3715 | 1200 | 0.9061 |
| 0.936 | 0.4334 | 1400 | 0.9088 |
| 0.8572 | 0.4954 | 1600 | 0.8990 |
| 0.8669 | 0.5573 | 1800 | 0.8933 |
| 0.875 | 0.6192 | 2000 | 0.8868 |
| 0.8369 | 0.6811 | 2200 | 0.8801 |
| 0.8445 | 0.7430 | 2400 | 0.8772 |
| 0.8316 | 0.8050 | 2600 | 0.8692 |
| 0.8573 | 0.8669 | 2800 | 0.8614 |
| 0.8104 | 0.9288 | 3000 | 0.8542 |
| 0.8182 | 0.9907 | 3200 | 0.8488 |
| 0.5912 | 1.0526 | 3400 | 0.8659 |
| 0.5579 | 1.1146 | 3600 | 0.8557 |
| 0.5834 | 1.1765 | 3800 | 0.8608 |
| 0.547 | 1.2384 | 4000 | 0.8514 |
### Framework versions
- PEFT 0.11.1
- Transformers 4.41.2
- Pytorch 2.1.2
- Datasets 2.19.2
- Tokenizers 0.19.1
|
dododo1234/squid_model_part_3
|
dododo1234
| 2024-06-10T14:54:34Z | 219 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-06-09T16:05:15Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
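In the meantime, a generic text-generation sketch should work for this Llama-architecture checkpoint (assumes `accelerate` for `device_map="auto"`; the prompt is a made-up example):

```python
from transformers import pipeline

# Plain text-generation with the uploaded checkpoint
generator = pipeline("text-generation", model="dododo1234/squid_model_part_3", device_map="auto")
print(generator("Once upon a time", max_new_tokens=32)[0]["generated_text"])
```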
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
iloncka/exp_5_old_bg-subs_1_v_5_convnext_nano.in12k_ft_in1k_ep_60
|
iloncka
| 2024-06-10T14:52:00Z | 0 | 0 |
fastai
|
[
"fastai",
"region:us"
] | null | 2024-06-10T14:50:09Z |
---
tags:
- fastai
---
# Amazing!
🥳 Congratulations on hosting your fastai model on the Hugging Face Hub!
# Some next steps
1. Fill out this model card with more information (see the template below and the [documentation here](https://huggingface.co/docs/hub/model-repos))!
2. Create a demo in Gradio or Streamlit using 🤗 Spaces ([documentation here](https://huggingface.co/docs/hub/spaces)).
3. Join the fastai community on the [Fastai Discord](https://discord.com/invite/YKrxeNn)!
Greetings fellow fastlearner 🤝! Don't forget to delete this content from your model card.
---
# Model card
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
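Pending a fuller card, a minimal loading sketch with `huggingface_hub` (the repo name suggests a ConvNeXt-based image classifier, but this is undocumented, and `mosquito.jpg` is a placeholder input file):

```python
from huggingface_hub import from_pretrained_fastai

# Pull the exported fastai Learner straight from the Hub
learner = from_pretrained_fastai("iloncka/exp_5_old_bg-subs_1_v_5_convnext_nano.in12k_ft_in1k_ep_60")

# Run a prediction on a placeholder image file
pred, pred_idx, probs = learner.predict("mosquito.jpg")
print(pred, probs[pred_idx])
```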
|
yemen2016/memo3_ND
|
yemen2016
| 2024-06-10T14:50:27Z | 4 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"xlm-roberta",
"text-classification",
"generated_from_trainer",
"base_model:MiMe-MeMo/MeMo-BERT-03",
"base_model:finetune:MiMe-MeMo/MeMo-BERT-03",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-06-10T09:38:24Z |
---
base_model: MiMe-MeMo/MeMo-BERT-03
tags:
- generated_from_trainer
model-index:
- name: memo3_ND
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# memo3_ND
This model is a fine-tuned version of [MiMe-MeMo/MeMo-BERT-03](https://huggingface.co/MiMe-MeMo/MeMo-BERT-03) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5131
- F1-score: 0.8634
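A minimal inference sketch (the label set is not documented in this card, and the Danish input sentence is a made-up example):

```python
from transformers import pipeline

# Text-classification pipeline on the fine-tuned checkpoint
classifier = pipeline("text-classification", model="yemen2016/memo3_ND")
print(classifier("Dette er en prøvesætning."))  # made-up example sentence
```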
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1-score |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.1986 | 1.0 | 1177 | 0.4906 | 0.8009 |
| 0.187 | 2.0 | 2354 | 0.5147 | 0.7915 |
| 0.172 | 3.0 | 3531 | 0.5131 | 0.8634 |
| 0.1868 | 4.0 | 4708 | 0.5832 | 0.7950 |
| 0.1653 | 5.0 | 5885 | 0.5538 | 0.8530 |
| 0.1585 | 6.0 | 7062 | 0.4072 | 0.8625 |
| 0.1637 | 7.0 | 8239 | 0.5193 | 0.8417 |
| 0.1514 | 8.0 | 9416 | 0.5005 | 0.8508 |
| 0.151 | 9.0 | 10593 | 0.5406 | 0.8521 |
| 0.159 | 10.0 | 11770 | 0.5156 | 0.8533 |
### Framework versions
- Transformers 4.41.2
- Pytorch 2.3.0+cu121
- Datasets 2.19.2
- Tokenizers 0.19.1
|
Disty0/sotediffusion-wuerstchen3-decoder
|
Disty0
| 2024-06-10T14:46:56Z | 497 | 0 |
diffusers
|
[
"diffusers",
"safetensors",
"text-to-image",
"license:other",
"diffusers:StableCascadeDecoderPipeline",
"region:us"
] |
text-to-image
| 2024-06-10T13:46:16Z |
---
pipeline_tag: text-to-image
license: other
license_name: faipl-1.0-sd
license_link: LICENSE
prior:
- Disty0/sotediffusion-wuerstchen3
---
# SoteDiffusion Wuerstchen3
Anime finetune of Würstchen V3.
# Usage
Please refer to the main model: https://huggingface.co/Disty0/sotediffusion-wuerstchen3
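As a rough sketch (the main model card is authoritative), the prior and this decoder can be chained with `diffusers` along these lines; the prompt and sampler settings below are illustrative, not the authors' recommended values:

```python
import torch
from diffusers import StableCascadePriorPipeline, StableCascadeDecoderPipeline

# Stage C (prior) produces image embeddings; Stage B (this repo) decodes them
prior = StableCascadePriorPipeline.from_pretrained(
    "Disty0/sotediffusion-wuerstchen3", torch_dtype=torch.bfloat16
).to("cuda")
decoder = StableCascadeDecoderPipeline.from_pretrained(
    "Disty0/sotediffusion-wuerstchen3-decoder", torch_dtype=torch.bfloat16
).to("cuda")

prompt = "1girl, cherry blossoms, detailed background"  # made-up example prompt
prior_output = prior(prompt=prompt, num_inference_steps=20, guidance_scale=4.0)
image = decoder(
    image_embeddings=prior_output.image_embeddings,
    prompt=prompt,
    num_inference_steps=10,
    guidance_scale=0.0,
).images[0]
image.save("output.png")
```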
## Dataset
Trained with 512K images.
## Training
**GPU used for training**: 1x AMD RX 7900 XTX 24GB
**GPU Hours**: 100
**Software used**: https://github.com/2kpr/StableCascade
### Config:
```
experiment_id: sotediffusion-wr3_3b-stage_b-alpha3
model_version: 3B
dtype: bfloat16
use_fsdp: False
batch_size: 16
grad_accum_steps: 16
updates: 102400
backup_every: 2048
save_every: 1024
warmup_updates: 128
lr: 1.0e-5
optimizer_type: Adafactor
adaptive_loss_weight: False
stochastic_rounding: True
image_size: 768
multi_aspect_ratio: [1/1, 1/2, 1/3, 2/3, 3/4, 1/5, 2/5, 3/5, 4/5, 1/6, 5/6, 9/16]
shift: 4
checkpoint_path: /mnt/DataSSD/AI/SoteDiffusion/Wuerstchen3/
output_path: /mnt/DataSSD/AI/SoteDiffusion/Wuerstchen3/
webdataset_path: file:/mnt/DataSSD/AI/anime_image_dataset/best/newest_best.tar
effnet_checkpoint_path: /mnt/DataSSD/AI/models/wuerstchen3/effnet_encoder.safetensors
stage_a_checkpoint_path: /mnt/DataSSD/AI/models/wuerstchen3/stage_a.safetensors
generator_checkpoint_path: /mnt/DataSSD/AI/SoteDiffusion/Wuerstchen3/generator_4k-016384.safetensors
```
## Limitations and Bias
### Bias
- This model is intended for anime illustrations; its realistic capabilities have not been tested at all.
### Limitations
- Eyes in far shots can come out badly.
## License
SoteDiffusion models fall under the [Fair AI Public License 1.0-SD](https://freedevproject.org/faipl-1.0-sd/) license, which is compatible with the Stable Diffusion models’ license. Key points:
1. **Modification Sharing:** If you modify SoteDiffusion models, you must share both your changes and the original license.
2. **Source Code Accessibility:** If your modified version is network-accessible, provide a way (like a download link) for others to get the source code. This applies to derived models too.
3. **Distribution Terms:** Any distribution must be under this license or another with similar rules.
4. **Compliance:** Non-compliance must be fixed within 30 days to avoid license termination, emphasizing transparency and adherence to open-source values.
**Notes**: Anything not covered by the Fair AI license is inherited from the Stability AI Non-Commercial license, which is included as LICENSE_INHERIT.
|
AdamRTomkins/test_upload
|
AdamRTomkins
| 2024-06-10T14:44:00Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"phi",
"axolotl",
"generated_from_trainer",
"base_model:microsoft/phi-1_5",
"base_model:adapter:microsoft/phi-1_5",
"license:mit",
"4-bit",
"bitsandbytes",
"region:us"
] | null | 2024-06-10T11:37:51Z |
---
license: mit
library_name: peft
tags:
- axolotl
- generated_from_trainer
base_model: microsoft/phi-1_5
model-index:
- name: test_upload
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/OpenAccess-AI-Collective/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adam_beta2: 0.95
adam_epsilon: 1.0e-05
adapter: qlora
base_model: microsoft/phi-1_5
dataset_prepared_path: null
datasets:
- path: garage-bAInd/Open-Platypus
type: alpaca
debug: null
deepspeed: null
early_stopping_patience: null
evals_per_epoch: 1
flash_attention: true
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 1
gradient_checkpointing: true
gradient_checkpointing_kwargs:
use_reentrant: true
hub_model_id: AdamRTomkins/test_upload
hub_strategy: end
learning_rate: 3.0e-06
load_in_4bit: true
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 32
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 64
lora_target_linear: true
lr_scheduler: cosine
max_grad_norm: 1.0
max_steps: 2
micro_batch_size: 1
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_torch
output_dir: ./outputs/phi-sft-out
pad_to_sequence_len: true
resize_token_embeddings_to_32x: true
resume_from_checkpoint: null
sample_packing: true
saves_per_epoch: 1
sequence_len: 1024
special_tokens:
pad_token: <|endoftext|>
strict: false
tokenizer_type: AutoTokenizer
val_set_size: 0.05
wandb_entity: null
wandb_log_model: null
wandb_name: null
wandb_project: null
wandb_watch: null
warmup_steps: 100
weight_decay: 0.1
xformers_attention: null
```
</details><br>
# test_upload
This model is a fine-tuned version of [microsoft/phi-1_5](https://huggingface.co/microsoft/phi-1_5) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.3469
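A minimal, untested sketch of using the adapter (assumes `peft`; the Alpaca-style prompt mirrors the Open-Platypus training format named in the config above, but the exact template is an assumption):

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

# Attach the QLoRA adapter to the phi-1_5 base model
base = AutoModelForCausalLM.from_pretrained("microsoft/phi-1_5", device_map="auto", torch_dtype="auto")
model = PeftModel.from_pretrained(base, "AdamRTomkins/test_upload")
tokenizer = AutoTokenizer.from_pretrained("microsoft/phi-1_5")

prompt = "### Instruction:\nList three prime numbers.\n\n### Response:\n"  # assumed Alpaca-style format
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=32)[0], skip_special_tokens=True))
```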
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-06
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- optimizer: Adam with betas=(0.9,0.95) and epsilon=1e-05
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 100
- training_steps: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 1.6676 | 0.0002 | 2 | 1.3469 |
### Framework versions
- PEFT 0.11.1
- Transformers 4.41.1
- Pytorch 2.1.2+cu118
- Datasets 2.19.1
- Tokenizers 0.19.1
|
harveybro/molt5-augmented-default-200-large-caption2smiles
|
harveybro
| 2024-06-10T14:43:22Z | 106 | 0 |
transformers
|
[
"transformers",
"safetensors",
"t5",
"text2text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2024-06-10T14:41:39Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
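Judging from the repo name, this appears to be a MolT5-style caption-to-SMILES model; a minimal, untested sketch (the caption is a made-up example):

```python
from transformers import AutoTokenizer, T5ForConditionalGeneration

model_id = "harveybro/molt5-augmented-default-200-large-caption2smiles"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = T5ForConditionalGeneration.from_pretrained(model_id)

# Generate a SMILES string from a natural-language molecule description
caption = "The molecule is a colorless volatile liquid used as a solvent."
inputs = tokenizer(caption, return_tensors="pt")
outputs = model.generate(**inputs, num_beams=5, max_length=256)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```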
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
cgihlstorf/finetuned_pythia70M_nondeduped_cp_14300016_1_0.0001_sequential
|
cgihlstorf
| 2024-06-10T14:42:55Z | 0 | 0 |
peft
|
[
"peft",
"arxiv:1910.09700",
"base_model:EleutherAI/pythia-70m",
"base_model:adapter:EleutherAI/pythia-70m",
"region:us"
] | null | 2024-06-10T14:42:24Z |
---
library_name: peft
base_model: EleutherAI/pythia-70m
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
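A minimal, untested sketch (this assumes a LoRA-style adapter, since `merge_and_unload` only applies to such adapters; the prompt is a made-up example):

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the pythia-70m base and attach the fine-tuned adapter
base = AutoModelForCausalLM.from_pretrained("EleutherAI/pythia-70m")
model = PeftModel.from_pretrained(
    base, "cgihlstorf/finetuned_pythia70M_nondeduped_cp_14300016_1_0.0001_sequential"
)
model = model.merge_and_unload()  # optionally bake the adapter weights into the base
tokenizer = AutoTokenizer.from_pretrained("EleutherAI/pythia-70m")

inputs = tokenizer("The quick brown fox", return_tensors="pt")
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=20)[0]))
```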
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.10.0
|
ammarnasr/MedTS-4-base
|
ammarnasr
| 2024-06-10T14:42:00Z | 188 | 0 |
transformers
|
[
"transformers",
"safetensors",
"MedTS",
"feature-extraction",
"custom_code",
"arxiv:1910.09700",
"region:us"
] |
feature-extraction
| 2024-06-10T14:23:40Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
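The repo ships custom modeling code (`custom_code` tag), so little can be said beyond how to load it; the expected inputs and outputs are undocumented:

```python
from transformers import AutoModel

# Custom architecture: remote code must be explicitly trusted
model = AutoModel.from_pretrained("ammarnasr/MedTS-4-base", trust_remote_code=True)
print(model.config)  # inspect the (undocumented) architecture
```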
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
rrricharrrd/dqn-SpaceInvadersNoFrameskip-v4
|
rrricharrrd
| 2024-06-10T14:40:19Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"SpaceInvadersNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2024-06-10T14:39:42Z |
---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
metrics:
- type: mean_reward
value: 535.50 +/- 166.96
name: mean_reward
verified: false
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga rrricharrrd -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga rrricharrrd -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga rrricharrrd
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 1000000.0),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
# Environment Arguments
```python
{'render_mode': 'rgb_array'}
```
|
Amber/spider-large-pretrain-2019
|
Amber
| 2024-06-10T14:34:58Z | 34 | 0 |
transformers
|
[
"transformers",
"safetensors",
"dpr",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-06-10T14:18:55Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
QingchuanMa/CartPole-v1
|
QingchuanMa
| 2024-06-10T14:34:26Z | 0 | 0 | null |
[
"CartPole-v1",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] |
reinforcement-learning
| 2024-06-10T06:57:09Z |
---
tags:
- CartPole-v1
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: CartPole-v1
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
metrics:
- type: mean_reward
value: 964.40 +/- 63.11
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **CartPole-v1**
This is a trained model of a **Reinforce** agent playing **CartPole-v1** .
To learn to use this model and train yours, check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
baf2b252097d46299a/loss_testing_256af9bec4934ab8ba893fc6c4eb68a3
|
baf2b252097d46299a
| 2024-06-10T14:33:02Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-06-10T14:32:36Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
DBangshu/Base_New_GPT2_5
|
DBangshu
| 2024-06-10T14:33:01Z | 200 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gpt2",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-06-10T14:32:42Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
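A generic GPT-2-style generation sketch (the prompt is a made-up example):

```python
from transformers import pipeline

# Standard causal-LM generation with the uploaded GPT-2 checkpoint
generator = pipeline("text-generation", model="DBangshu/Base_New_GPT2_5")
print(generator("The meaning of life is", max_new_tokens=30)[0]["generated_text"])
```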
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
saransh03sharma/mintrec-llama-3-8b-150-5-shot
|
saransh03sharma
| 2024-06-10T14:31:13Z | 5 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-06-10T14:22:39Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
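Until an official snippet is provided, a minimal sketch: the model id comes from this repo's metadata and the task from its `text-generation` pipeline tag:

```python
from transformers import pipeline

generator = pipeline("text-generation", model="saransh03sharma/mintrec-llama-3-8b-150-5-shot")
print(generator("Hello, how can I", max_new_tokens=32)[0]["generated_text"])
```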
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
saransh03sharma/mintrec-llama-3-8b-150-10-shot
|
saransh03sharma
| 2024-06-10T14:30:40Z | 5 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-06-10T14:22:38Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
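Until an official snippet is provided, a minimal sketch loading the weights directly (dtype and device placement are assumptions; `device_map="auto"` requires `accelerate` to be installed):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "saransh03sharma/mintrec-llama-3-8b-150-10-shot"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16, device_map="auto")

inputs = tokenizer("Hello, how can I", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```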
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
harveybro/molt5-augmented-default-1300-base-caption2smiles
|
harveybro
| 2024-06-10T14:30:16Z | 106 | 0 |
transformers
|
[
"transformers",
"safetensors",
"t5",
"text2text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2024-06-10T14:29:40Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
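Until an official snippet is provided, a minimal sketch; the caption-to-SMILES direction is inferred from the repo name (`caption2smiles`), and the example caption is illustrative only:

```python
from transformers import AutoTokenizer, T5ForConditionalGeneration

model_id = "harveybro/molt5-augmented-default-1300-base-caption2smiles"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = T5ForConditionalGeneration.from_pretrained(model_id)

# Natural-language molecule description in, SMILES string out (assumed MolT5 convention).
inputs = tokenizer("The molecule is a simple two-carbon alcohol used as a solvent.", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```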
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
datek/Qwen-Qwen1.5-1.8B-1718029671
|
datek
| 2024-06-10T14:30:15Z | 131 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-06-10T14:28:21Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
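Until an official snippet is provided, a minimal sketch; whether this fine-tune kept the base Qwen1.5 chat template is an assumption (the `conversational` tag suggests it did):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "datek/Qwen-Qwen1.5-1.8B-1718029671"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

messages = [{"role": "user", "content": "Summarize yourself in one line."}]
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
# Decode only the newly generated tokens.
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True))
```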
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
baf2b252097d46299a/loss-testing_6c35ae7cde2544729eee0e79e1327e09
|
baf2b252097d46299a
| 2024-06-10T14:28:55Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-06-10T14:28:22Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
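This repo carries no pipeline tag, so all that can be sketched is inspecting the config and loading the raw weights; whether `AutoModel` matches the declared architecture is an assumption:

```python
from transformers import AutoConfig, AutoModel

repo_id = "baf2b252097d46299a/loss-testing_6c35ae7cde2544729eee0e79e1327e09"
config = AutoConfig.from_pretrained(repo_id)
print(config.architectures)  # reveals the concrete model class to use
model = AutoModel.from_pretrained(repo_id)
```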
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
awlassche/results
|
awlassche
| 2024-06-10T14:24:42Z | 185 | 0 |
transformers
|
[
"transformers",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:emanjavacas/GysBERT",
"base_model:finetune:emanjavacas/GysBERT",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-05-07T13:58:30Z |
---
license: mit
base_model: emanjavacas/GysBERT
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: results
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# results
This model is a fine-tuned version of [emanjavacas/GysBERT](https://huggingface.co/emanjavacas/GysBERT) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6432
- Accuracy: 0.6773
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (a `TrainingArguments` sketch follows the list):
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
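For reference, these map onto 🤗 `TrainingArguments` roughly as follows (a sketch; `output_dir` is an assumption, and the Adam betas/epsilon shown are the Trainer defaults made explicit):

```python
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="results",  # assumed from the model name
    learning_rate=5e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=42,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    num_train_epochs=3,
)
```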
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:------:|:----:|:---------------:|:--------:|
| 0.7039 | 0.9091 | 100 | 0.6635 | 0.5977 |
| 0.6269 | 1.8182 | 200 | 0.6418 | 0.65 |
| 0.5328 | 2.7273 | 300 | 0.6432 | 0.6773 |
### Framework versions
- Transformers 4.41.2
- Pytorch 2.3.0+cu121
- Tokenizers 0.19.1
|
Hevagog/tqc-PandaPickAndPlace-v3
|
Hevagog
| 2024-06-10T14:23:30Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"PandaPickAndPlace-v3",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2024-06-06T20:44:09Z |
---
library_name: stable-baselines3
tags:
- PandaPickAndPlace-v3
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: TQC
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: PandaPickAndPlace-v3
type: PandaPickAndPlace-v3
metrics:
- type: mean_reward
value: -24.60 +/- 21.05
name: mean_reward
verified: false
---
# **TQC** Agent playing **PandaPickAndPlace-v3**
This is a trained model of a **TQC** agent playing **PandaPickAndPlace-v3**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch; the checkpoint filename is an assumption following the usual `huggingface_sb3` naming convention:
```python
from sb3_contrib import TQC  # TQC lives in sb3_contrib, not stable_baselines3
from huggingface_sb3 import load_from_hub

checkpoint = load_from_hub(
    repo_id="Hevagog/tqc-PandaPickAndPlace-v3",
    filename="tqc-PandaPickAndPlace-v3.zip",  # assumed filename
)
model = TQC.load(checkpoint)
```
|
phamkinhquoc2002/bge-base-financial-matryoshka_test
|
phamkinhquoc2002
| 2024-06-10T14:23:21Z | 7 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"safetensors",
"bert",
"sentence-similarity",
"feature-extraction",
"generated_from_trainer",
"dataset_size:29132",
"loss:MatryoshkaLoss",
"loss:MultipleNegativesRankingLoss",
"dataset_size:100",
"arxiv:1908.10084",
"arxiv:2205.13147",
"arxiv:1705.00652",
"base_model:BAAI/bge-base-en-v1.5",
"base_model:finetune:BAAI/bge-base-en-v1.5",
"model-index",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2024-06-10T14:22:39Z |
---
language: []
library_name: sentence-transformers
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- generated_from_trainer
- dataset_size:29132
- loss:MatryoshkaLoss
- loss:MultipleNegativesRankingLoss
- dataset_size:100
base_model: BAAI/bge-base-en-v1.5
datasets: []
metrics:
- cosine_accuracy@1
- cosine_accuracy@3
- cosine_accuracy@5
- cosine_accuracy@10
- cosine_precision@1
- cosine_precision@3
- cosine_precision@5
- cosine_precision@10
- cosine_recall@1
- cosine_recall@3
- cosine_recall@5
- cosine_recall@10
- cosine_ndcg@10
- cosine_mrr@10
- cosine_map@100
widget:
- source_sentence: "[Click Here]( https://www.sec.gov/oiea/Complaint.html ) to file\
\ a complaint with the SEC.\n\n[Click Here]( https://www.finra.org/investors/have-problem/file-complaint/complaint-center\
\ ) to file a complaint wit with FINRA.\n\n[Click Here]( https://robinhood.com/contact\
\ ) to file a complaint with Robinhood directly.\n\nRobinhood Financial LLC 85\
\ Willow Road Menlo Park, CA 94025 United States\n\nThis morning I, and millions\
\ of other retail investors, were blocked from purchasing (entering new buy orders)\
\ on the Robinhood platform, without notice. This clear example of market manipulation\
\ has forced the stock down from over $500 in after-hours to less than $300 as\
\ of this writing. Meanwhile, hedge fund interests are NOT blocked from buying\
\ the shares being traded and the lower price obviously benefits them.\n\nWe retail\
\ investors have followed all the rules and finally stood to gain a LITTLE bit\
\ from Wall St and they suddenly change the rules \"to protect\" us. I am requesting\
\ you use your subpoena power and regulatory authority to examine whether Robinhood\
\ colluded illegally with any other actors who may have held short positions on\
\ these stocks to reduce the number of buyers for $GME and therefore deflate the\
\ price. This is market manipulation.\n\ninfo for form:\nRobinhood Financial LLC\n\
\nAddress: \n85 Willow Road\nMenlo Park, CA 94025\nUnited States\n\nEdit: Fellow\
\ Regarded, please buy more GME and Hold \U0001F48E\U0001F91A\U0001F3FE the rewards\
\ helps with visibility but it’s better spent there.\n\nEdit 2: \nI’m getting\
\ a lot of questions regarding the same things so I’ll try my best to answer them.\
\ \n- For FINRA online complaint, scroll down to the section reading “Problems\
\ addressed by FINRA” under it click the Orange button that reads “FILE ONLINE\
\ COMPLAINT” \n- FINRA CRD NUMBER: 165998 [ Thanks u/Mattcwh ]\n- User [R] pointed\
\ out that Robinhood is owned by Citedal, a hedge fund that along side with Point72\
\ injected ~$3B into Melvin. Standing to lose a shitton of money to us degenerates.\
\ This further points out why Robinhood is trying to manipulate the market to\
\ help out the suits at Wall St. \n- Lots of questions concerning the Security\
\ type. If it’s for GME you put it under CLASS A OR D Securities.\n\nEdit 3: Thanks\
\ for all the people who filed what they could."
sentences:
- "\\*\\*Official freestyle = online!\\*\\* \U0001F3C4\n\n\\*\\*Flyysoulja and KodiyakRedd\
\ just released a freestyle, we viral my dawgs\\*\\*\n\nWe need all dem island\
\ boys to share this around, we having a tropical winter full of gains this year\
\ \U0001F3C4♂️\n\n[https://www.tiktok.com/foryou?is\\\\_copy\\\\_url=1&is\\\\\
_from\\\\_webapp=v1&item\\\\_id=7025992072108854534&lang=en#/@flyysouljah/video/7025992072108854534](https://www.tiktok.com/foryou?is\\\
_copy\\_url=1&is\\_from\\_webapp=v1&item\\_id=7025992072108854534&lang=en#/@flyysouljah/video/7025992072108854534)\
\ \n\n\n\U0001F4E1\\*\\*Website\\*\\*: https://islanddoges.io \n\\\\_\\\\_\\\\\
_\\\\_\\\\_\\\\_ \n\U0001F3DD\U0001F415 \\*\\*ISLAND DOGES\\*\\*\U0001F3DD\U0001F415\
\ \n\nayy islain dogges to the moooon... to the moon.. to the moon..\n\n\\*\\\
*ABOUT ISLAND DOGES\\*\\*\n\n$ISLAND is an ERC20 Token with absolute meme potential.\
\ While other projects rushing things to grab money in a fast way without delivering\
\ much, ISLAND DOGES comes around with a fresh-as-fuck-website, strong and smart\
\ marketing and the plan to develop the $ISLAND ecosystem by doing an absolute\
\ sick NFT drop. \n\n$ISLAND is backed by tha Islaaaaand Boyyyys (shout out to\
\ our boys - Flyysoulja and Kodiyakredd) and is planning to become one of the\
\ strongest communities of any meme coin out there.\n\n\U0001F3DD No fucking \"\
dev tokens\" (miss me with that bullshiiit) \n\U0001F3DD No \"whitelist\" where\
\ friend and family can join before you can \n\U0001F3DD No presale (and no fucking\
\ dumps on your head my dawg) \n\U0001F3DD \\*\\*100% fair launch\\*\\*\n\n>\\\
*\\*Ticker\\*\\*: $ISLAND (ETH) \n\\*\\*Tax\\*\\*: \n3% Marketing \n3% Development\
\ \n1% Redistribution\n\n[Website](https://islanddoges.io) \\*\\*|\\*\\* [Telegram](https://t.me/IslandDoges)\
\ | [Twitter](https://twitter.com/islanddoges) |[Chart](https://www.dextools.io/app/ether/pair-explorer/0x89af5a68cfa436693b1797cc2a41715f4530fa61)\
\ \n\U0001F984 \\*\\*Buy on UniSwap\\*\\*: https://app.uniswap.org/#/swap?outputCurrency=0xa0dc5132c91ea4d94fcf1727c32cc5a303b34cfc"
- "So I just recently sold my first property and turned a good profit after almost\
\ 6 years of owning it. This was my first time selling a property so I relied\
\ on the realtor for a lot of advice. When he first saw the property ahead of\
\ listing, he recommended a price of $210k and I agreed since it was in the range\
\ of comps that had sold around that time, a month ago. I ultimately ended up\
\ accepting a full cash offer for $203k with a waived inspection, at his recommendation\
\ before it even hit the open market. This was still above the price of most of\
\ the condos that had sold during that time. Well, we just closed last week and\
\ 3 days later it shows up on all the platforms listed for sale at $240k. He’s\
\ listed as the realtor representing the property. For background this is a 1/1\
\ condo on the water in south Florida.\n\nAm I an idiot for not listing it for\
\ more or is this just the result of a crazy and fast moving market? Did my realtor\
\ just pull a shady move? \n\nEdit: I appreciate all the responses. Spoke to a\
\ trusted realtor/broker I know in another part of the state and they agreed this\
\ was extremely unethical. Also did some digging into the documents and saw that\
\ that The name was changed a couple of weeks before closing to a corporation\
\ who’s address is in the same building as the realtor, so the case for shadiness\
\ is mounting.\n\nTLDR: Sold my condo for $203k at the recommendation of my realtor.\
\ 3 days after closing it’s listed again for $240k. Was I just duped?"
- LEAVE ROBINHOOD. They dont deserve to make money off us after the millions they
caused in losses. It might take a couple of days, but send Robinhood to the ground
and GME to the moon.
- source_sentence: "Chapman Albin is an investors rights firm that my buddy works\
\ at. Just got off the phone w him. He is going to post a press release regarding\
\ the case they are filing. \nLet me know if you need help finding a lawyer. \n\
Disclaimer: I’m not getting anything out of this"
sentences:
- 'I''m been in tech for 20+ years. I''ve picked good companies to work for, and
shitty ones. I''ve made decent money and gone years where hours worked took me
away from my friends, family and my sanity.
Recently I''ve come to the conclusion I''m too old to put up with any more BS
from execs, staff and VCs. Everyone expects slave hours (i guess its because they
buy your time and brain). Tech is really a young persons game and I am out of
gas. I can''t keep up anymore with 12-15 hour days, 6 days a week.
Over thr past year I''ve been manually backtesting and aa well dabbling in daytrading
with live cash (upwards of $35k per trade). I''ve made a bit, lost a bit but I''m
up overall. I really love it to be honest.
I''m not looking to jump right in as my day job but I''m curious to hear your
stories on how you segued into making day trading your career.
Info on me: mid-40s, spouse is professional, 2 school aged kids. Me (Eng leader
at tech startup) living in Canada. Have $80k in cash available. On TD Direct Investing
trading only on CDN exchanges for now.'
- "\\*\\*TL;DR- The DTC has been taken over by big money. They transitioned from\
\ a manual to a computerized ledger system in the 80s, and it played a significant\
\ role in the 1987 market crash. In 2003, several issuers with the DTC wanted\
\ to remove their securities from the DTC's deposit account because the DTC's\
\ participants were naked short selling their securities. Turns out, they were\
\ right. The DTC and it's participants have created a market-sized naked short\
\ selling scheme. All of this is made possible by the DTC's enrollee- Cede & Co.\\\
*\\*\n\n\\\\_\\\\_\\\\_\\\\_\\\\_\\\\_\\\\_\\\\_\\\\_\\\\_\\\\_\\\\_\\\\_\\\\\
_\\\\_\\\\_\\\\_\\\\_\\\\_\\\\_\\\\_\\\\_\\\\_\\\\_\\\\_\\\\_\\\\_\\\\_\\\\_\\\
\\_\\\\_\\\\_\\\\_\\\\_\\\\_\\\\_\\\\_\\\\_\\\\_\\\\_\\\\_\\\\_\\\\_\\\\_\\\\\
_\\\\_\\\\_\\\\_\\\\_\\\\_\\\\_\\\\_\\\\_\\\\_\\\\_\\\\_\\\\_\\\\_\\\\_\\\\_\\\
\\_\\\\_\\\\_\\\\_\\\\_\\\\_\\\\_\\\\_\\\\_\\\\_\\\\_\\\\_\\\\_\\\\_\\\\_\\\\\
_\\\\_\\\\_\\\\_\\\\_\\\\_\\\\_\\\\_\\\\_\\\\_\\\\_\\\\_\\\\_\\\\_\\\\_\\\\_\\\
\\_\\\\_\\\\_\\\\_\\\\_\\\\_\\\\_\\\\_\\\\_\\\\_\\\\_\\\\_\\\\_\\\\_\\\\_\\\\\
_\\\\_\n\n​\n\n[\\*\\*Andrew MoMoney - Live Coverage\\*\\*](https://youtu.be/zKzRDpBBFLQ)\n\
\n\\*\\*I hit the image limit in this DD. Given this, and the fact that there's\
\ already SO MUCH info in this DD, I've decided to break it into AT LEAST 2 posts.\
\ So stay tuned.\\*\\*\n\n\\*\\*Previous DD\\*\\*\n\n[1. Citadel Has No Clothes](https://www.reddit.com/r/GME/comments/m4c0p4/citadel\\\
_has\\_no\\_clothes/)\n\n[2. BlackRock Bagholders, INC.](https://www.reddit.com/r/GME/comments/m7o7iy/blackrock\\\
_bagholders\\_inc/)\n\n[3. The EVERYTHING Short](https://www.reddit.com/r/GME/comments/mgucv2/the\\\
_everything\\_short/)\n\n[4. Walkin' like a duck. Talkin' like a duck](https://www.reddit.com/r/Superstonk/comments/ml48ov/walkin\\\
_like\\_a\\_duck\\_talkin\\_like\\_a\\_duck/)\n\n\\\\_\\\\_\\\\_\\\\_\\\\_\\\\\
_\\\\_\\\\_\\\\_\\\\_\\\\_\\\\_\\\\_\\\\_\\\\_\\\\_\\\\_\\\\_\\\\_\\\\_\\\\_\\\
\\_\\\\_\\\\_\\\\_\\\\_\\\\_\\\\_\\\\_\\\\_\\\\_\\\\_\\\\_\\\\_\\\\_\\\\_\\\\\
_\\\\_\\\\_\\\\_\\\\_\\\\_\\\\_\\\\_\\\\_\\\\_\\\\_\\\\_\\\\_\\\\_\\\\_\\\\_\\\
\\_\\\\_\\\\_\\\\_\\\\_\\\\_\\\\_\\\\_\\\\_\\\\_\\\\_\\\\_\\\\_\\\\_\\\\_\\\\\
_\\\\_\\\\_\\\\_\\\\_\\\\_\\\\_\\\\_\\\\_\\\\_\\\\_\\\\_\\\\_\\\\_\\\\_\\\\_\\\
\\_\\\\_\\\\_\\\\_\\\\_\\\\_\\\\_\\\\_\\\\_\\\\_\\\\_\\\\_\\\\_\\\\_\\\\_\\\\\
_\\\\_\\\\_\\\\_\\\\_\\\\_\\\\_\\\\_\\\\_\\\\_\n\n\\*Holy SH\\\\*T!\\*\n\nThe\
\ events we are living through \\*RIGHT NOW\\* are the 50-year ripple effects\
\ of stock market evolution. From the birth of the DTC to the cesspool we currently\
\ find ourselves in, this DD will illustrate just how fragile the \\*House of\
\ Cards\\* has become.\n\nWe've been warned so many times... We've made the same\
\ mistakes \\*so. many. times.\\*\n\n\\*\\*And we never seem to learn from them..\\\
*\\*\n\n\\\\_\\\\_\\\\_\\\\_\\\\_\\\\_\\\\_\\\\_\\\\_\\\\_\\\\_\\\\_\\\\_\\\\\
_\\\\_\\\\_\\\\_\\\\_\\\\_\\\\_\\\\_\\\\_\\\\_\\\\_\\\\_\\\\_\\\\_\\\\_\\\\_\\\
\\_\\\\_\\\\_\\\\_\\\\_\\\\_\\\\_\\\\_\\\\_\\\\_\\\\_\\\\_\\\\_\\\\_\\\\_\\\\\
_\\\\_\\\\_\\\\_\\\\_\\\\_\\\\_\\\\_\\\\_\\\\_\\\\_\\\\_\\\\_\\\\_\\\\_\\\\_\\\
\\_\\\\_\\\\_\\\\_\\\\_\\\\_\\\\_\\\\_\\\\_\\\\_\\\\_\\\\_\\\\_\\\\_\\\\_\\\\\
_\\\\_\\\\_\\\\_\\\\_\\\\_\\\\_\\\\_\\\\_\\\\_\\\\_\\\\_\\\\_\\\\_\\\\_\\\\_\\\
\\_\\\\_\\\\_\\\\_\\\\_\\\\_\\\\_\\\\_\\\\_\\\\_\\\\_\\\\_\\\\_\\\\_\\\\_\\\\\
_\\\\_\n\nIn case you've been living under a rock for the past few months, the\
\ DTCC has been proposing a boat load of rule changes to help better-monitor their\
\ participants' exposure. If you don't already know, the DTCC stands for Depository\
\ Trust & Clearing Corporation and is broken into the following (primary) subsidiaries:\n\
\n1. \\*\\*Depository Trust Company (DTC)\\*\\* \\- \\*centralized clearing agency\
\ that makes sure grandma gets her stonks and the broker receives grandma's tendies\\\
*\n2. \\*\\*National Securities Clearing Corporation (NSCC)\\*\\* \\- \\*provides\
\ clearing, settlement, risk management, and central counterparty (CCP) services\
\ to its members for broker-to-broker trades\\*\n3. \\*\\*Fixed Income Clearing\
\ Corporation (FICC)\\*\\* \\- \\*provides central counterparty (CCP) services\
\ to members that participate in the US government and mortgage-backed securities\
\ markets\\*\n\n\\*Brief\\* \\*history\\* \\*lesson: I promise it's relevant (this\\\
* [\\*link\\*](https://www.dtcc.com/annuals/museum/index.html) \\*provides all\
\ the info that follows).\\*\n\nThe DTC was created in 1973. It stemmed from the\
\ need for a centralized clearing company. Trading during the 60s went through\
\ the roof and resulted in many brokers having to quit before the day was finished\
\ so they could manually record their mountain of transactions. All of this was\
\ done on paper and each share certificate was physically delivered. This obviously\
\ resulted in many failures to deliver (FTD) due to the risk of human error in\
\ record keeping. In 1974, the Continuous Net Settlement system was launched to\
\ clear and settle trades using a rudimentary internet platform.\n\nIn 1982, the\
\ DTC started using a [Book-Entry Only](https://www.investopedia.com/terms/b/bookentrysecurities.asp)\
\ (BEO) system to underwrite bonds. For the first time, there were no physical\
\ certificates that actually traded hands. Everything was now performed virtually\
\ through computers. Although this was advantageous for many reasons, it made\
\ it MUCH easier to commit a certain type of securities fraud- naked shorting.\n\
\nOne year later they adopted [NYSE Rule 387](https://www.finra.org/rules-guidance/rulebooks/retired-rules/rule-387)\
\ which meant most securities transactions had to be completed using this new\
\ BEO computer system. Needless to say, explosive growth took place for the next\
\ 5 years. Pretty soon, other securities started utilizing the BEO system. It\
\ paved the way for growth in mutual funds and government securities, and even\
\ allowed for same-day settlement. At the time, the BEO system was a tremendous\
\ achievement. However, we were destined to hit a brick wall after that much growth\
\ in such a short time.. By October 1987, that's exactly what happened.\n\n\\\\\
_\\\\_\\\\_\\\\_\\\\_\\\\_\\\\_\\\\_\\\\_\\\\_\\\\_\\\\_\\\\_\\\\_\\\\_\\\\_\\\
\\_\\\\_\\\\_\\\\_\\\\_\\\\_\\\\_\\\\_\\\\_\\\\_\\\\_\\\\_\\\\_\\\\_\\\\_\\\\\
_\\\\_\\\\_\\\\_\\\\_\\\\_\\\\_\\\\_\\\\_\\\\_\\\\_\\\\_\\\\_\\\\_\\\\_\\\\_\\\
\\_\\\\_\\\\_\\\\_\\\\_\\\\_\\\\_\\\\_\\\\_\\\\_\\\\_\\\\_\\\\_\\\\_\\\\_\\\\\
_\\\\_\\\\_\\\\_\\\\_\\\\_\\\\_\\\\_\\\\_\\\\_\\\\_\\\\_\\\\_\\\\_\\\\_\\\\_\\\
\\_\\\\_\\\\_\\\\_\\\\_\\\\_\\\\_\\\\_\\\\_\\\\_\\\\_\\\\_\\\\_\\\\_\\\\_\\\\\
_\\\\_\\\\_\\\\_\\\\_\\\\_\\\\_\\\\_\\\\_\\\\_\\\\_\\\\_\\\\_\\\\_\\\\_\n\n[\\\
*\"A number of explanations have been offered as to the cause of the crash...\
\ Among these are computer trading, derivative securities, illiquidity, trade\
\ and budget deficits, and overvaluation..\"\\*](https://historynewsnetwork.org/article/895)\\\
*.\\*\n\nIf you're wondering where the birthplace of High Frequency Trading (HFT)\
\ came from, look no further. The same machines that automated the exhaustively\
\ manual reconciliation process were also to blame for amplifying the fire sale\
\ of 1987.\n\n[https:\\/\\/historynewsnetwork.org\\/article\\/895](https://preview.redd.it/3l08f1ud6bu61.png?width=810&format=png&auto=webp&s=2331f409fb4f60b3d62e475c58cf44211b4122a3)\n\
\nThe last sentence indicates a much more pervasive issue was at play, here. The\
\ fact that we still have trouble explaining the calculus is even more alarming.\
\ The effects were so pervasive that it was dubbed the [1st global financial crisis](https://www.federalreservehistory.org/essays/stock-market-crash-of-1987)\n\
\nHere's another great summary published by the [NY Times](https://www.nytimes.com/2012/10/19/business/a-computer-lesson-from-1987-still-unlearned-by-wall-street.html):\
\ \\\\*\"..\\\\*\\*\\*\\*to be fair to the computers.. \\[they were\\].. programmed\
\ by fallible people and trusted by people who did not understand the computer\
\ programs' limitations. As computers came in, human judgement went out.\"\\*\\\
*\\* Damned if that didn't give me goosiebumps... \\\\_\\\\_\\\\_\\\\_\\\\_\\\\\
_\\\\_\\\\_\\\\_\\\\_\\\\_\\\\_\\\\_\\\\_\\\\_\\\\_\\\\_\\\\_\\\\_\\\\_\\\\_\\\
\\_\\\\_\\\\_\\\\_\\\\_\\\\_\\\\_\\\\_\\\\_\\\\_\\\\_\\\\_\\\\_\\\\_\\\\_\\\\\
_\\\\_\\\\_\\\\_\\\\_\\\\_\\\\_\\\\_\\\\_\\\\_\\\\_\\\\_\\\\_\\\\_\\\\_\\\\_\\\
\\_\\\\_\\\\_\\\\_\\\\_\\\\_\\\\_\\\\_\\\\_\\\\_\\\\_\\\\_\\\\_\\\\_\\\\_\\\\\
_\\\\_\\\\_\\\\_\\\\_\\\\_\\\\_\\\\_\\\\_\\\\_\\\\_\\\\_\\\\_\\\\_\\\\_\\\\_\\\
\\_\\\\_\\\\_\\\\_\\\\_\\\\_\\\\_\\\\_\\\\_\\\\_\\\\_\\\\_\\\\_\\\\_\\\\_\\\\\
_\\\\_\\\\_\\\\_\\\\_\\\\_\\\\_\\\\_\\\\_\\\\_\n\nHere's an EXTREMELY relevant\
\ [explanation](https://historynewsnetwork.org/article/895) from [Bruce Bartlett](https://www.creators.com/author/bruce-bartlett)\
\ on the role of derivatives:\n\nhttps://preview.redd.it/tu88v96vqau61.png?width=805&format=png&auto=webp&s=6e69760997379cb404163cfc6a11b411adbaa344\n\
\nNotice the last sentence? A major factor behind the crash was a disconnect between\
\ the price of stock and their corresponding derivatives. The value of any given\
\ stock should determine the derivative value of that stock. It shouldn't be the\
\ other way around. \\*\\*This is an important concept to remember as it will\
\ be referenced throughout the post.\\*\\*\n\nIn the off chance that the market\
\ DID tank, they hoped they could contain their losses with [portfolio insurance](https://www.investopedia.com/terms/p/portfolioinsurance.asp#:~:text=Portfolio%20insurance%20is%20a%20hedging,also%20refer%20to%20brokerage%20insurance)\\\
*.\\* Another [article from the NY times](https://www.nytimes.com/2012/10/19/business/a-computer-lesson-from-1987-still-unlearned-by-wall-street.html)\
\ explains this in better detail. \\\\_\\\\_\\\\_\\\\_\\\\_\\\\_\\\\_\\\\_\\\\\
_\\\\_\\\\_\\\\_\\\\_\\\\_\\\\_\\\\_\\\\_\\\\_\\\\_\\\\_\\\\_\\\\_\\\\_\\\\_\\\
\\_\\\\_\\\\_\\\\_\\\\_\\\\_\\\\_\\\\_\\\\_\\\\_\\\\_\\\\_\\\\_\\\\_\\\\_\\\\\
_\\\\_\\\\_\\\\_\\\\_\\\\_\\\\_\\\\_\\\\_\\\\_\\\\_\\\\_\\\\_\\\\_\\\\_\\\\_\\\
\\_\\\\_\\\\_\\\\_\\\\_\\\\_\\\\_\\\\_\\\\_\\\\_\\\\_\\\\_\\\\_\\\\_\\\\_\\\\\
_\\\\_\\\\_\\\\_\\\\_\\\\_\\\\_\\\\_\\\\_\\\\_\\\\_\\\\_\\\\_\\\\_\\\\_\\\\_\\\
\\_\\\\_\\\\_\\\\_\\\\_\\\\_\\\\_\\\\_\\\\_\\\\_\\\\_\\\\_\\\\_\\\\_\\\\_\\\\\
_\\\\_\\\\_\\\\_\\\\_\\\\_\\\\_\n\n​\n\nhttps://preview.redd.it/rf6ocoe9abu61.png?width=629&format=png&auto=webp&s=e638c4479aceac77a003ae86fa1cfdd23f5406b8\n\
\nhttps://preview.redd.it/8igwi6mflbu61.png?width=612&format=png&auto=webp&s=853945852aea5a355266bf52b6f1fa573db1e29a\n\
\nhttps://preview.redd.it/fe78gr1qlbu61.png?width=608&format=png&auto=webp&s=4ec59987333e04cef07541229161b3ff30881444\n\
\nA major disconnect occurred when these futures contracts were used to intentionally\
\ tank the value of the underlying stock. In a perfect world, organic growth would\
\ lead to an increase in value of the company (underlying stock). They could do\
\ this by selling more products, creating new technologies, breaking into new\
\ markets, etc. This would trigger an organic change in the derivative's value\
\ because investors would be (hopefully) more optimistic about the longevity of\
\ the company. It could go either way, but the point is still the same. This is\
\ the type of investing that most of us are familiar with: investing for a better\
\ future.\n\nI don't want to spend too much time on the crash of 1987. I just\
\ want to identify the factors that contributed to the crash and the role of the\
\ DTC as they transitioned from a manual to an automatic ledger system. \\*\\\
*The connection I really want to focus on is the ENORMOUS risk appetite these\
\ investors had. Think of how overconfident and greedy they must have been to\
\ put that much faith in a computer script.. either way, same problems still exist\
\ today.\\*\\*\n\nFinally, the comment by Bruce Bartlett regarding the mismatched\
\ investment strategies between stocks and options is crucial in painting the\
\ picture of today's market.\n\nNow, let's do a super brief walkthrough of the\
\ main parties within the DTC before opening this \\*\\*can of worms.\\*\\*\n\n\
\\\\_\\\\_\\\\_\\\\_\\\\_\\\\_\\\\_\\\\_\\\\_\\\\_\\\\_\\\\_\\\\_\\\\_\\\\_\\\\\
_\\\\_\\\\_\\\\_\\\\_\\\\_\\\\_\\\\_\\\\_\\\\_\\\\_\\\\_\\\\_\\\\_\\\\_\\\\_\\\
\\_\\\\_\\\\_\\\\_\\\\_\\\\_\\\\_\\\\_\\\\_\\\\_\\\\_\\\\_\\\\_\\\\_\\\\_\\\\\
_\\\\_\\\\_\\\\_\\\\_\\\\_\\\\_\\\\_\\\\_\\\\_\\\\_\\\\_\\\\_\\\\_\\\\_\\\\_\\\
\\_\\\\_\\\\_\\\\_\\\\_\\\\_\\\\_\\\\_\\\\_\\\\_\\\\_\\\\_\\\\_\\\\_\\\\_\\\\\
_\\\\_\\\\_\\\\_\\\\_\\\\_\\\\_\\\\_\\\\_\\\\_\\\\_\\\\_\\\\_\\\\_\\\\_\\\\_\\\
\\_\\\\_\\\\_\\\\_\\\\_\\\\_\\\\_\\\\_\\\\_\\\\_\\\\_\\\\_\\\\_\\\\_\\\\_\n\n\
I'm going to talk about three groups within the DTC- \\*\\*issuers, participants,\
\ and Cede & Co.\\*\\*\n\nIssuers are companies that issue securities (stocks),\
\ while participants are the clearing houses, brokers, and other financial institutions\
\ that can utilize those securities. Cede & Co. is a subsidiary of the DTC which\
\ holds the share certificates.\n\nParticipants have MUCH more control over the\
\ securities that are deposited from the issuer. Even though the issuer created\
\ those shares, participants are in control when those shares hit the DTC's doorstep.\
\ The DTC transfers those shares to a holding account \\*(Cede & Co.)\\* and the\
\ participant just has to ask \"\\*May I haff some pwetty pwease wiff sugar on\
\ top?\"\\* \\\\_\\\\_\\\\_\\\\_\\\\_\\\\_\\\\_\\\\_\\\\_\\\\_\\\\_\\\\_\\\\_\\\
\\_\\\\_\\\\_\\\\_\\\\_\\\\_\\\\_\\\\_\\\\_\\\\_\\\\_\\\\_\\\\_\\\\_\\\\_\\\\\
_\\\\_\\\\_\\\\_\\\\_\\\\_\\\\_\\\\_\\\\_\\\\_\\\\_\\\\_\\\\_\\\\_\\\\_\\\\_\\\
\\_\\\\_\\\\_\\\\_\\\\_\\\\_\\\\_\\\\_\\\\_\\\\_\\\\_\\\\_\\\\_\\\\_\\\\_\\\\\
_\\\\_\\\\_\\\\_\\\\_\\\\_\\\\_\\\\_\\\\_\\\\_\\\\_\\\\_\\\\_\\\\_\\\\_\\\\_\\\
\\_\\\\_\\\\_\\\\_\\\\_\\\\_\\\\_\\\\_\\\\_\\\\_\\\\_\\\\_\\\\_\\\\_\\\\_\\\\\
_\\\\_\\\\_\\\\_\\\\_\\\\_\\\\_\\\\_\\\\_\\\\_\\\\_\\\\_\\\\_\\\\_\\\\_\\\\_\\\
\\_\\\\_\n\n\\*\\*Now, where's that can of worms?\\*\\*\n\nEverything was relatively\
\ calm after the crash of 1987.... until we hit 2003..\n\n\\*\\\\*deep breath\\\
\\*\\*\n\nThe DTC started receiving several requests from issuers to pull their\
\ securities from the DTC's depository. I don't think the DTC was prepared for\
\ this because they didn't have a written policy to address it, let alone an official\
\ rule. Here's the half-assed response from the DTC:\n\n[https:\\/\\/www.sec.gov\\\
/rules\\/sro\\/34-47978.htm \\(section II\\)](https://preview.redd.it/1ctpj263zdu61.png?width=788&format=png&auto=webp&s=6ff2e2d543f53a6ece6d95c334ed995fe67f9c8d)\n\
\nRealizing this situation was heating up, the DTC proposed [SR-DTC-2003-02](https://www.sec.gov/rules/sro/34-47978.htm#P19\\\
_6635)..\n\n[https:\\/\\/www.sec.gov\\/rules\\/sro\\/34-47978.htm#P19\\\\_6635](https://preview.redd.it/io22id3n7eu61.png?width=774&format=png&auto=webp&s=424ef5b6a70d073c62a47f6a1b82cd739b527b88)\n\
\nHonestly, they were better of WITHOUT the new proposal.\n\nIt became an even\
\ BIGGER deal when word got about the proposed rule change. Naturally, it triggered\
\ a TSUNAMI of comment letters against the DTC's proposal. There was obviously\
\ something going on to cause that level of concern. Why did \\*SO MANY\\* issuers\
\ want their deposits back?\n\n\\*\\*...you ready for this sh\\\\*t?\\*\\*\n\n\
\\\\_\\\\_\\\\_\\\\_\\\\_\\\\_\\\\_\\\\_\\\\_\\\\_\\\\_\\\\_\\\\_\\\\_\\\\_\\\\\
_\\\\_\\\\_\\\\_\\\\_\\\\_\\\\_\\\\_\\\\_\\\\_\\\\_\\\\_\\\\_\\\\_\\\\_\\\\_\\\
\\_\\\\_\\\\_\\\\_\\\\_\\\\_\\\\_\\\\_\\\\_\\\\_\\\\_\\\\_\\\\_\\\\_\\\\_\\\\\
_\\\\_\\\\_\\\\_\\\\_\\\\_\\\\_\\\\_\\\\_\\\\_\\\\_\\\\_\\\\_\\\\_\\\\_\\\\_\\\
\\_\\\\_\\\\_\\\\_\\\\_\\\\_\\\\_\\\\_\\\\_\\\\_\\\\_\\\\_\\\\_\\\\_\\\\_\\\\\
_\\\\_\\\\_\\\\_\\\\_\\\\_\\\\_\\\\_\\\\_\\\\_\\\\_\\\\_\\\\_\\\\_\\\\_\\\\_\\\
\\_\\\\_\\\\_\\\\_\\\\_\\\\_\\\\_\\\\_\\\\_\\\\_\\\\_\\\\_\\\\_\\\\_\\\\_\n\n\
As outlined in the DTC's opening remarks:\n\n[https:\\/\\/www.sec.gov\\/rules\\\
/sro\\/34-47978.htm#P19\\\\_6635](https://preview.redd.it/eq9q8mcubeu61.png?width=1028&format=png&auto=webp&s=eee6231336e398b0d53299a2a7639fdfd333af8c)\n\
\n\\*OK... see footnote 4.....\\*\n\n[https:\\/\\/www.sec.gov\\/rules\\/sro\\\
/34-47978.htm#P19\\\\_6635](https://preview.redd.it/v884rfqwbeu61.png?width=1053&format=png&auto=webp&s=6fe5db76c9c6fd5e596bbe3c3c64bc6feb64fd97)\n\
\n\\*\\*UHHHHHHH WHAT!??!\\*\\* Yeah! I'd be pretty pissed, too! Have my shares\
\ deposited in a clearing company to take advantage of their computerized trades\
\ just to get kicked to the curb with NO WAY of getting my securities back...\
\ AND THEN find out that the big-d\\\\*ck \"participants\" at your fancy DTC party\
\ are literally short selling my shares without me knowing....?!\n\n....This sound\
\ familiar, anyone??? IDK about y'all, but this \"trust us with your shares\"\
\ BS is starting to sound like a major con.\n\nThe DTC asked for feedback from\
\ all issuers and participants to gather a consensus before making a decision.\
\ All together, the DTC received 89 comment letters (a pretty big response). 47\
\ of those letters opposed the rule change, while 35 were in favor.\n\n\\*To save\
\ space, I'm going to use smaller screenshots. Here are just a few of the opposition\
\ comments..\\*\n\n\\\\_\\\\_\\\\_\n\n[https:\\/\\/www.sec.gov\\/rules\\/sro\\/dtc200302\\/srdtc200302-89.pdf](https://preview.redd.it/ds068omndeu61.png?width=894&format=png&auto=webp&s=7958cbf3fde10e1bbb81c6adeb87f2bfc5dc8fde)\n\
\n\\\\_\\\\_\\\\_\n\
\n​\n\n\\*\\*And another:\\*\\*\n\n​\n\n[https:\\/\\/www.sec.gov\\\
/rules\\/sro\\/dtc200302\\/rsrondeau052003.txt](https://preview.redd.it/953v7l47feu61.png?width=884&format=png&auto=webp&s=83c2d1998b3c111da7cb31b183b83c62abbe353b)\n\
\n\\\\_\\\\_\\\\_\n\
\n​\n\n\\*\\*AAAAAAAAAAND another:\\*\\*\n\n​\n\n[https:\\/\\/www.sec.gov\\\
/rules\\/sro\\/dtc200302\\/msondow040403.txt](https://preview.redd.it/pkifz41sqeu61.png?width=804&format=png&auto=webp&s=733a219050239012a2b6b29c1985bdbd1df60303)\n\
\n\\\\_\\\\_\\\\_\n\
\n\\*\\*\\*Here are a few in favor...\\*\\*\\*\n\n\\*All of the commenters\
\ I checked were participants classified as market makers and other major\
\ financial institutions... go f\\\\*cking figure.\\*\n\n[https:\\/\\/www.sec.gov\\\
/rules\\/sro\\/dtc200302\\/srdtc200302-82.pdf](https://preview.redd.it/myk7675zseu61.png?width=617&format=png&auto=webp&s=94c622511fc3392bacca6f1c34375920612bc9bb)\n\
\n\\\\_\\\\_\\\\_\n\
\n​\n\n\\*\\*Two\\*\\*\n\n​\n\n[https:\\/\\/www.sec.gov\\/rules\\\
/sro\\/dtc200302\\/srdtc200302-81.pdf](https://preview.redd.it/ouwx18qmteu61.png?width=692&format=png&auto=webp&s=39dcaabcc228e60ba5e472353285aa330c13ea0a)\n\
\n\\\\_\\\\_\\\\_\n\
\n​\n\n\\*\\*Three\\*\\*\n\n​\n\n[https:\\/\\/www.sec.gov\\/rules\\\
/sro\\/dtc200302\\/rbcdain042303.pdf](https://preview.redd.it/xpzt606pueu61.png?width=600&format=png&auto=webp&s=79685c694f661b9c7d03093a8908eebe6cad421e)\n\
\n\\\\_\\\\_\\\\_\n\
\nHere's the [full list](https://www.sec.gov/rules/sro/dtc200302.shtml) if you\
\ wanna dig on your own.\n\n...I realize there are advantages to \"paperless\"\
\ securities transfers... However... It is EXACTLY what Michael Sondow said in\
\ his comment letter above.. \\*\\*\\*We simply cannot trust the DTC to protect\
\ our interests when we don't have physical control of our assets.\\*\\*\\*\n\n\
Several other participants, including \\*\\*Edward Jones,\
\ Ameritrade, Citibank,\\*\\* and \\*\\*Prudential\\*\\* overwhelmingly favored\
\ this proposal.. How can someone NOT acknowledge that the absence of physical\
\ shares only makes it easier for these people to manipulate the market....?\n\
\nThis rule change would allow these 'participants' to continue doing this because\
\ it's extremely profitable to sell shares that don't exist, or have not been\
\ collateralized. Furthermore, it's a win-win for them because it forces issuers\
\ to keep their deposits in the holding account of the DTC...\n\nEver heard of\
\ the [fractional reserve banking system](https://www.investopedia.com/terms/f/fractionalreservebanking.asp#:~:text=Fractional%20reserve%20banking%20is%20a,by%20freeing%20capital%20for%20lending)??\
\ Sounds A LOT like what the stock market has just become.\n\nWant proof of market\
\ manipulation? Let's fact-check the claims from the opposition letters above.\
\ \\*I'm only reporting a few for the time period we discussed (2003ish). This\
\ is just to validate their claims that some sketchy sh\\\\*t is going on.\\*\n\
\n1. [\\*\\*UBS Securities\\*\\*](https://files.brokercheck.finra.org/firm/firm\\\
_7654.pdf) \\*\\*(formerly UBS Warburg):\\*\\*\n 1. pg 559; SHORT SALE VIOLATION;\
\ 3/30/1999\n 2. pg 535; OVER REPORTING OF SHORT INTEREST POSITIONS; 5/1/1999\
\ - 12/31/1999\n 3. pg 533; FAILURE TO REPORT SHORT SALE INDICATORS; INCORRECTLY\
\ REPORTING LONG SALE TRANSACTIONS AS SHORT SALES; 7/2/2002\n2. [\\*\\*Merrill\
\ Lynch\\*\\*](https://files.brokercheck.finra.org/firm/firm\\_16139.pdf) \\*\\\
*(Professional Clearing Corp.):\\*\\*\n 1. pg 158; VIOLATION OF SHORT INTEREST\
\ REPORTING; 12/17/2001\n3. [\\*\\*RBC\\*\\*](https://files.brokercheck.finra.org/firm/firm\\\
_31194.pdf) \\*\\*(Royal Bank of Canada):\\*\\*\n 1. pg 550; FAILURE TO REPORT\
\ SHORT SALE TRANSACTIONS WITH INDICATOR; 9/28/1999\n 2. pg 507; SHORT SALE VIOLATION;\
\ 11/21/1999\n 3. pg 426; FAILURE TO REPORT SHORT SALE MODIFIER; 1/21/2003\n\n\
Ironically, I picked these 3 because they were the first going down the line..\
\ I'm not sure how to be any more objective about this.. Their entire FINRA report\
\ is littered with short sale violations. Before anyone asks \"how do you know\
\ they aren't ALL like that?\" The answer is- I checked. If you get caught for\
\ a short sale violation, chances are you will ALWAYS get caught for short sale\
\ violations. Why? Because it's more profitable to do it and get caught, than\
\ it is to fix the problem.\n\nWanna know the 2nd worst part?\n\nSeveral comment\
\ letters asked the DTC to investigate the claims of naked shorting \\*\\*BEFORE\\\
*\\* coming to a decision on the proposal.. I never saw a document where they\
\ followed up on those requests.....\n\nNOW, wanna know the WORST part?\n\n[https:\\\
/\\/www.sec.gov\\/rules\\/sro\\/34-47978.htm#P99\\\\_35478](https://preview.redd.it/q6jk7as8rfu61.png?width=1057&format=png&auto=webp&s=c66aac021818993e6c23bb7fe96382de8cc9fe7e)\n\
\nThe DTC passed that rule change....\n\nThey not only prevented the issuers from\
\ removing their deposits, they also turned a blind eye to their participants'\
\ manipulative short selling, even when there's public evidence of them doing\
\ so...\n\n....Those companies were being attacked with shares THEY put in the\
\ DTC, by institutions they can't even identify...\n\n\\\\_\\\\_\\\\_\n\n\
..Let's take a quick breath\
\ and recap:\n\nThe DTC started using a computerized ledger and was very successful\
\ through the 80's. This evolved into trading systems that were also computerized,\
\ but not as sophisticated as they hoped.. They played a major part in the 1987\
\ crash, along with severely desynchronized derivatives trading.\n\nIn 2003, the\
\ DTC denied issuers the right to withdraw their deposits because those securities\
\ were in the control of participants, instead. When issuer A deposits stock into\
\ the DTC and participant B shorts those shares into the market, that's a form\
\ of [rehypothecation](https://www.investopedia.com/terms/r/rehypothecation.asp#:~:text=Rehypothecation%20is%20a%20practice%20whereby,or%20a%20rebate%20on%20fees).\
\ This is what so many issuers were trying to express in their comment letters.\
\ In addition, it hurts their company by driving down its value. They felt robbed\
\ because the DTC was blatantly allowing its participants to do this, and refused\
\ to give them back their shares..\n\nIt was critically important for me to paint\
\ that background.\n\n\
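To put rough numbers on that rehypothecation problem, here's a toy sketch (my own\
\ made-up figures, not from any filing):\n\n```python\n# An issuer deposits 1,000,000 real shares with the depository.\nissued_shares = 1_000_000\n\n\
# A participant lends out and shorts 400,000 of those same deposited shares.\n\
# Every short sale hands a buyer a share entitlement, while the original\n\
# holders still count those shares as theirs.\nshorted = 400_000\n\n\
total_claims = issued_shares + shorted\nprint(total_claims)  # 1400000 -> 1.4M claims against 1M real shares\n```\n\n\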
\\\\_\\\\_\\\\_\n\n..now then....\n\nRemember when I mentioned the DTC's enrollee-\
\ Cede & Co.?\n\n[https:\\/\\/www.sec.gov\\/rules\\/sro\\/34-47978.htm#P19\\\\\
_6635 \\(section II\\)](https://preview.redd.it/97z3b2k9pju61.png?width=283&format=png&auto=webp&s=67ad209f338a0ccebfaee09cd43944730ac35279)\n\
\nI'll admit it: I didn't think they were that relevant. I focused so much on\
\ the DTC that I didn't think to check into their enrollee...\n\n..Wish I did....\n\
\n[https:\\/\\/www.americanbanker.com\\/news\\/you-dont-really-own-your-securities-can-blockchains-fix-that](https://preview.redd.it/oqpj59jypju61.png?width=830&format=png&auto=webp&s=a7de5c100699c85132b531b501b79a8bafcdfa18)\n\
\nThat's right.... Cede & Co. hold a \"master certificate\" in their vault, which\
\ \\*\\*NEVER\\*\\* leaves. Instead, they issue an \\*IOU\\* for that master certificate..\n\
\n​\n\nDidn't we JUST finish talking about why this is such a major flaw\
\ in our system..? And that was almost 20 years ago...\n\n\\*\\*Here comes the\
\ mind f\\\\*ck\\*\\*\n\n[https:\\/\\/smithonstocks.com\\/part-8-illegal-naked-shorting-series-who-or-what-is-cede-and-what-role-does-cede-play-in-the-trading-of-stocks\\\
/](https://preview.redd.it/o4xemx63rju61.png?width=1117&format=png&auto=webp&s=26f60bceb160cefcd95b0d55d2b375f4058981e2)\n\
\n[https:\\/\\/smithonstocks.com\\/part-8-illegal-naked-shorting-series-who-or-what-is-cede-and-what-role-does-cede-play-in-the-trading-of-stocks\\\
/](https://preview.redd.it/1yfr9x0arju61.png?width=1109&format=png&auto=webp&s=066cac93b0c8fb05e617c81e9fc63eeacb847d4f)\n\
\n\\\\_\\\\_\\\\_\n\
\nNow.....\n\nYou wanna know the BEST part???\n\n\\*I found a list of all the\
\ DTC\\* [\\*participants\\*](https://www.dtcc.com/-/media/Files/Downloads/client-center/DTC/alpha.pdf)\
\ \\*that are responsible for this mess..\\*\n\n\\*\\*I've got your name, number,\
\ and I'm coming for you-\\*\\* \\*\\*\\*ALL OF YOU\\*\\*\\*\n\
\n\\*\\*\\*to be continued.\\*\\*\\*\n\n\\*\\*DIAMOND.F\\\\*CKING.HANDS\\*\\*"
- LEAVE ROBINHOOD. They don't deserve to make money off us after the millions they
caused in losses. It might take a couple of days, but send Robinhood to the ground
and GME to the moon.
- source_sentence: 'GO WATCH INSIDE JOB. I''ll be back later. I need a break after
typing this up. CYA LATER APES.
Edit: \*\*Please understand that the majority of this post is a summary of that
film (section 1) with paraphrasing and direct quotes\*\*. I take no credit for
the amazing work that they''ve done! I''ve left a note in the post as well.
The remainder of the post (sections 2-3) is pulling from other sources to tie
everything together with the current market conditions, the SLR requirement expiration,
the mortgage default protections expiring, and the DTC, ICC, OCC rules.
[https://en.wikipedia.org/wiki/Inside\\_Job\\_(2010\\_film)](https://en.wikipedia.org/wiki/Inside\_Job\_(2010\_film))
Edit: Free on youtube [https://youtu.be/T2IaJwkqgPk](https://youtu.be/T2IaJwkqgPk)
thanks to /u/dcarmona!'
sentences:
- "If my underlying assumption is incorrect, please elucidate me. \n\nThat said,\
\ I know of several family members who worked as grocers and retail workers and\
\ they were able to buy their homes in the 70s and eventually paid them off. \n\
\nI, on the other hand, have a well-paying job, a graduate degree, and I’m also\
\ married to a partner with a great job. \n\nYet, had it not been for inheriting\
\ the equity from my grocer and retail worker relatives, I would never have been\
\ able to affordably buy my townhouse. \n\nIn contrast, similarly sized 2 or 3\
\ bedroom apartments for rent in my area are now priced at about $3,500 a month.\
\ At $15 an hour, that would equate to 67% of a couple’s pre-tax income on housing\
\ alone."
- "​\n\n# 0. Preface\n\nI am not a financial advisor, and I do not provide\
\ financial advice. Many thoughts here are my opinion, and others can be speculative.\n\
\nTL;DR - \\*\\*(though I think you REALLY should consider reading, because it\
\ is important to understand what is going on)\\*\\*:\n\n\\* The market crash\
\ of 2008 never finished. It was can-kicked and the same people who caused the\
\ crash have \\*\\*still\\*\\* been running rampant doing the \\*\\*same\\*\\\
* \\*\\*bullshit in the derivatives market\\*\\* as that market continues to be\
\ unregulated. They're profiting off of short-term gains at the risk of killing\
\ their institutions and potentially the global economy. \\*\\*Only this time\
\ it is much, much worse.\\*\\*\n\\* The bankers abused smaller amounts of leverage\
\ for the 2008 bubble and have since abused much higher amounts of leverage -\
\ creating an even larger speculative bubble. Not just in the stock market and\
\ derivatives market, but also in the crypt0 market, upwards of 100x leverage.\n\
\\* COVID came in and rocked the economy to the point where the Fed is now pinned\
\ between a rock and a hard place. In order to buy more time, the government triggered\
\ a flurry of protective measures, such as mortgage forbearance, expiring end\
\ of Q2 on June 30th, 2021, and SLR exemptions, which expired March 31, 2021.\
\ \\*\\*The market was going to crash regardless. GME was and never will be the\
\ reason for the market crashing.\\*\\*\n\\* The rich made a fatal error in \\\
*\\*way\\*\\* overshorting stocks. There is a potential for their decades of sucking\
\ money out of taxpayers to be taken back. The derivatives market is potentially\
\ a \\*\\*$1 Quadrillion market\\*\\*. \"Meme prices\" are not meme prices. There\
\ is so much money in the world, and you are just accustomed to thinking the \"\
meme prices\" are too high to feasibly reach.\n\\* The DTC, ICC, OCC have been\
\ passing rules and regulations (auction and wind-down plans) so that they can\
\ easily eat up competition and consolidate power once again like in 2008. The\
\ people in charge, including Gary Gensler, are not your friends.\n\\* The DTC,\
\ ICC, OCC are also passing rules to make sure that retail will \\*\\*never\\\
*\\* be able to do this again. \\*\\*These rules are for the future market\
\ (post market crash) and they never want anyone to have a chance to take their\
\ game away from them again\\*\\*. These rules are not to start the MOASS. They\
\ are indirectly regulating retail so that a short squeeze condition can never\
\ occur after GME.\n\\* The COVID pandemic exposed a lot of banks through the\
\ Supplementary Leverage Ratio (SLR) where mass borrowing (leverage) almost made\
\ many banks default. Banks have account 'blocks' on the Fed's balance sheet which\
\ hold their treasuries and deposits. \\*\\*The SLR exemption made it so that\
\ these treasuries and deposits of the banks 'accounts' on the Fed's balance sheet\
\ were not calculated into SLR, which allowed them to boost their SLR until March\
\ 31, 2021 and avoid defaulting. Now, they must extract treasuries from the Fed\
\ in reverse repo to avoid defaulting from SLR requirements. This results in the\
\ reverse repo market explosion as they are scrambling to survive due to their\
\ mass leverage.\\*\\*\n\\* This is not a \"retail vs. Melvin/Point72/Citadel\"\
\ issue. This is a \"retail vs. \\*\\*Mega Banks\\*\\*\" issue. The rich, and\
\ I mean \\*\\*all of Wall Street,\\*\\* are trying \\*\\*desperately\\*\\* to\
\ shut GameStop down because it has the chance to suck out trillions if not hundreds\
\ of trillions from the game they've played for decades. They've rigged this game\
\ since the 1990's when derivatives were first introduced. \\*\\*Do you really\
\ think they, including the Fed, wouldn't pull all the stops now to try to get\
\ you to sell?\\*\\*\n\nEnd TL;DR\n\nA ton of the information provided\
\ in this post is from the movie \\*\\*Inside Job (2010)\\*\\*. I am paraphrasing\
\ from the movie as well as taking direct quotes, so please understand that a\
\ bunch of this information is a summary of that film.\n\nI understand that \\\
*\\*The Big Short (2015)\\*\\* is much more popular here, due to it being a more\
\ Hollywood style movie, but it does not go into such great detail of the conditions\
\ that led to the crash - and how things haven't even changed. But in fact, got\
\ worse, and led us to where we are now.\n\nSeriously. \\*\\*Go\\*\\*. \\*\\*Watch\\\
*\\*. \\*\\*Inside Job\\*\\*. It is a documentary with interviews of many people,\
\ including those who were involved in the Ponzi Scheme of the derivative market\
\ bomb that led to the crash of 2008, and their continued lobbying to influence\
\ the Government to keep regulations at bay.\n\n[Inside Job \\(2010\\\
) Promotional](https://preview.redd.it/vvdd32qkei571.png?width=776&format=png&auto=webp&s=982445a99f17af054bd351990017e364b137cf02)\n\
\n​\n\n# 1. The Market Crash Of 2008\n\n# 1.1 The Casino Of The Financial\
\ World: The Derivatives Market\n\nIt all started back in the 1990's when the\
\ \\*\\*Derivative Market\\*\\* was created. This was the opening of the literal\
\ Casino in the financial world. These are bets placed upon an underlying asset,\
\ index, or entity, and are \\*\\*very\\*\\* risky. Derivatives are contracts\
\ between two or more parties that derives its value from the performance of the\
\ underlying asset, index, or entity.\n\nOne such derivative many are familiar\
\ with are \\*\\*options\\*\\* (CALLs and PUTs). Other examples of derivatives\
\ are \\*\\*forwards\\*\\*, \\*\\*futures\\*\\*, \\*\\*swaps\\*\\*, and variations\
\ of those such as \\*\\*Collateralized Debt Obligations (CDOs)\\*\\*, and \\\
*\\*Credit Default Swaps (CDS)\\*\\*.\n\nThe potential to make money off of these\
\ trades is \\*\\*insane\\*\\*. Take your regular CALL option for example. You\
\ no longer take home a 1:1 return when the underlying stock rises or falls $1.\
\ Your returns can be amplified by magnitudes more. Sometimes you might make a\
\ 10:1 return on your investment, or 20:1, and so forth.\n\nNot only this, you\
\ can grab leverage by borrowing cash from some other entity. This allows your\
\ bets to potentially return that much more money. You can see how this gets out\
\ of hand really fast, because the amount of cash that can be gained absolutely\
\ skyrockets versus traditional investments.\n\nAttempts were made to regulate\
\ the derivatives market, but due to mass lobbying from Wall Street, regulations\
\ were continuously shut down. \\*\\*People continued to try to pass regulations,\
\ until in 2000, the\\*\\* [Commodity Futures Modernization Act](https://en.wikipedia.org/wiki/Commodity\\\
_Futures\\_Modernization\\_Act\\_of\\_2000) \\*\\*banned the regulation of derivatives\
\ outright\\*\\*.\n\nAnd of course, once the Derivatives Market was left unchecked,\
\ it was off to the races for Wall Street to begin making tons of risky bets and\
\ surging their profits.\n\nThe Derivative Market exploded in size once regulation\
\ was banned and de-regulation of the financial world continued. You can see as\
\ of 2000, the cumulative derivatives market was already out of control.\n\n[https:\\\
/\\/www.hilarispublisher.com\\/open-access\\/investment-banks-and-credit-institutions-the-ignored-and-unregulateddiversity-2151-6219-1000224.pdf](https://preview.redd.it/9igfmi69di571.png?width=578&format=png&auto=webp&s=27fefbf3443e8be528849221f2eadeb1a5c10833)\n\
\nThe Derivatives Market is big. \\*\\*Insanely big\\*\\*. Look at how it compares\
\ to \\*\\*Global Wealth\\*\\*.\n\n[https:\\/\\/www.visualcapitalist.com\\/all-of-the-worlds-money-and-markets-in-one-visualization-2020\\\
/](https://preview.redd.it/s22atssgdi571.png?width=1029&format=png&auto=webp&s=086dcebf3e710052f78b7490150203d0f8376b89)\n\
\nAt the bottom of the list are three derivatives entries, with \"Market Value\"\
\ and \"Notional Value\" called out.\n\nThe \"Market Value\" is the value of the\
\ derivative at its current trading price.\n\nThe \"Notional Value\" is the value\
\ of the derivative if it was at the strike price.\n\nE.g. A CALL option (a derivative)\
\ represents 100 shares of ABC stock with a strike of $50. Perhaps it is trading\
\ in the market at $1 per contract right now.\n\n\\* Market Value = 100 shares\
\ \\\\* $1.00 per contract = $100\n\\* Notional Value = 100 shares \\\\* $50 strike\
\ price = $5,000\n\n\
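Quick sanity check of that arithmetic in Python (a throwaway snippet of mine, just\
\ restating the made-up ABC numbers above):\n\n```python\n# Hypothetical CALL option: 100 shares of \"ABC\", $50 strike, premium $1.00.\ncontract_size = 100\n\
premium = 1.00     # current option price per share\nstrike = 50.00     # strike price per share\n\n\
market_value = contract_size * premium     # 100 * $1.00  = $100\n\
notional_value = contract_size * strike    # 100 * $50.00 = $5,000\n\n\
print(market_value, notional_value)        # 100.0 5000.0\n```\n\n\
\\*\\*Visual Capitalist estimates that the cumulative Notional\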
\ Value of derivatives is between $558 Trillion and $1 Quadrillion\\*\\*. So yeah.\
\ \\*\\*You\\*\\* are not going to cause a market crash if GME sells for millions\
\ per share. The rich are already priming the market crash through the Derivatives\
\ Market.\n\n# 1.2 CDOs And Mortgage Backed Securities\n\nDecades ago, the system\
\ of paying mortgages used to be between two parties. The buyer, and the loaner.\
\ Since the movement of money was between the buyer and the loaner, the loaner\
\ was very careful to ensure that the buyer would be able to pay off their loan\
\ and not miss payments.\n\nBut now, it's a chain.\n\n1. Home buyers will buy\
\ a loan from the lenders.\n2. The lenders will then sell those loans to Investment\
\ Banks.\n3. The Investment Banks then combine thousands of mortgages and other\
\ loans, including car loans, student loans, and credit card debt to create complex\
\ derivatives called \"\\*\\*Collateralized Debt Obligations (CDO's\\*\\*)\".\n\
4. The Investment Banks then pay Rating Agencies to rate their CDO's. This can\
\ be on a scale of \"AAA\", the best possible rating, equivalent to government-backed\
\ securities, all the way down to C/D, which are junk bonds and very risky. \\\
*\\*Many of these CDO's were given AAA ratings despite being filled with junk\\\
*\\*.\n5. The Investment Banks then take these CDO's and sell them to investors,\
\ including retirement funds, because that was the rating required for retirement\
\ funds as they would only purchase highly rated securities.\n6. Now when the\
\ homeowner pays their mortgage, the money flows directly into the investors.\
\ The investors are the main ones who will be hurt if the CDO's containing the\
\ mortgages begin to fail.\n\n[Inside Job \\(2010\\) - Flow Of Money For Mortgage\
\ Payments](https://preview.redd.it/0xtaww3ydi571.png?width=1493&format=png&auto=webp&s=f448a113043b043243efd879f174493bd33423fe)\n\
\n[https:\\/\\/www.investopedia.com\\/ask\\/answers\\/09\\/bond-rating.asp](https://preview.redd.it/uyk9ms4fei571.png?width=756&format=png&auto=webp&s=d61e9a0754b676e64a1f6c97277ba877e946fcb6)\n\
\n# 1.3 The Bubble of Subprime Loans Packed In CDOs\n\nThis system became a ticking\
\ time bomb due to the potential for free short-term cash. Lenders didn't\
\ care if a borrower could repay, so they would start handing out riskier loans.\
\ The investment banks didn't care if there were riskier loans, because the more\
\ CDO's sold to investors resulted in more profit. And the Rating Agencies didn't\
\ care because there were no regulatory constraints and there was no liability\
\ if their ratings of the CDO's proved to be wrong.\n\nSo they went wild and pumped\
\ out more and more loans, and more and more CDOs. Between 2000 and 2003, the\
\ number of mortgage loans made each year nearly quadrupled. They didn’t care\
\ about the quality of the mortgage - they cared about maximizing the volume and\
\ getting profit out of it.\n\nIn the early 2000s there was a huge increase in\
\ the riskiest loans - “Subprime Loans”. These are loans given to people who have\
\ low income, limited credit history, poor credit, etc. They are very at risk\
\ to not pay their mortgages. It was predatory lending, because it hunted for\
\ potential home buyers who would never be able to pay back their mortgages so\
\ that they could continue to pack these up into CDO's.\n\n[Inside Job \\(2010\\\
) - % Of Subprime Loans](https://preview.redd.it/wsr30iorei571.png?width=1447&format=png&auto=webp&s=59cf72f6eb8209d69e0a13ccf2f0127e69a45142)\n\
\nIn fact, the investment banks \\*\\*preferred\\*\\* subprime loans, because\
\ they carried higher interest rates and more profit for them.\n\n\\*\\*So the\
\ Investment Banks took these subprime loans, packaged the subprime loans up into\
\ CDO's, and many of them still received AAA ratings. These can be considered\
\ \"toxic CDO's\" because of their high ability to default and fail despite their\
\ ratings.\\*\\*\n\nPretty much \\*\\*anyone\\*\\* could get a home now. Purchases\
\ of homes and housing prices skyrocketed. It didn't matter because everyone in\
\ the chain was making money in an unregulated market.\n\n# 1.4 Short Term Greed\
\ At The Risk Of Institutional And Economic Failure\n\nIn Wall Street, annual\
\ cash bonuses started to spike. Traders and CEOs became extremely wealthy in\
\ this bubble as they continued to pump more toxic CDO's into the market. Lehman\
\ Bros. was one of the top underwriters of subprime lending and their CEO alone\
\ took home over $485 million in bonuses.\n\n[Inside Job \\(2010\\) Wall Street\
\ Bonuses](https://preview.redd.it/io87r9vxei571.png?width=1494&format=png&auto=webp&s=944300df8faf8da35d75de6f10fb951a6d230154)\n\
\nAnd it was all short-term gain, high risk, with no worries about the potential\
\ failure of your institution or the economy. When things collapsed, they would\
\ not need to pay back their bonuses and gains. They were literally risking the\
\ entire world economy for the sake of short-term profits.\n\nAND THEY EVEN TOOK\
\ IT FURTHER WITH LEVERAGE TO MAXIMIZE PROFITS.\n\nDuring the bubble from 2000\
\ to 2007, the investment banks were borrowing heavily to buy more loans and to\
\ create more CDO's. The ratio of a bank's borrowed money to its own money was\
\ its leverage. The more they borrowed, the higher their leverage. They abused\
\ leverage to continue churning profits. And are still abusing massive leverage\
\ to this day. It might even be much higher leverage today than what it was back\
\ in the Housing Market Bubble.\n\nIn 2004, Henry Paulson, the CEO of Goldman\
\ Sachs, helped lobby the SEC to relax limits on leverage, allowing the banks\
\ to sharply increase their borrowing. Basically, the SEC allowed investment banks\
\ to gamble a lot more. \\*\\*Investment banks would go up to about 33-to-1 leverage\
\ at the time of the 2008 crash\\*\\*. Which means if a 3% decrease occurred in\
\ their asset base, it would leave them insolvent. \\*\\*Henry Paulson would later\
\ become the Secretary Of The Treasury from 2006 to 2009\\*\\*. He was just one\
\ of many Wall Street executives to eventually make it into Government positions.\
\ Including the infamous Gary Gensler, the current SEC chairman, who helped block\
\ derivative market regulations.\n\n[Inside Job \\(2010\\) Leverage Abuse of 2008](https://preview.redd.it/k87x53h7fi571.png?width=1619&format=png&auto=webp&s=b12004d6bb3e70643516ef0477303f4652ccd348)\n\
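\nTo see why 33-to-1 is such a hair trigger, here's the same arithmetic as a tiny\
\ Python sketch (illustrative numbers, my own addition):\n\n```python\n# A bank at 33:1 leverage: $33 of assets for every $1 of its own equity.\nassets = 33.0\n\
equity = 1.0\nliabilities = assets - equity             # 32.0 is borrowed\n\n\
# Asset values slip just 3%...\nassets_after = assets * (1 - 0.03)        # 32.01\n\
equity_after = assets_after - liabilities  # ~0.01 -> equity essentially wiped out\n\
print(equity_after)\n```\n\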
\nThe borrowing exploded, the profits exploded, and it was all at the risk of\
\ obliterating their institutions and possibly the global economy. Some of these\
\ banks knew that they were \"too big to fail\" and could push for bailouts at\
\ the expense of taxpayers. Especially when they began planting their own executives\
\ in positions of power.\n\n# 1.5 Credit Default Swaps (CDS)\n\nTo add another\
\ ticking bomb to the system, AIG, the worlds largest insurance company, got into\
\ the game with another type of derivative. They began selling Credit Default\
\ Swaps (CDS).\n\nFor investors who owned CDO's, CDS's worked like an insurance\
\ policy. An investor who purchased a CDS paid AIG a quarterly premium. If the\
\ CDO went bad, AIG promised to pay the investor for their losses. Think of it\
\ like insuring a car. You're paying premiums, but if you get into an accident,\
\ the insurance will pay up (some of the time at least).\n\nBut unlike regular\
\ insurance, where you can only insure your car once, \\*\\*speculators could\
\ also purchase CDS's from AIG in order to bet against CDO's they didn't own\\\
*\\*. You could suddenly have something akin to rehypothecation, where fifty or one hundred\
\ entities might now have insurance against a CDO.\n\n[Inside Job \\(2010\\) Payment\
\ Flow of CDS's](https://preview.redd.it/7xoupx0ffi571.png?width=1258&format=png&auto=webp&s=869beb0d99b9fbb4108cd5af692d0a6332fd52dd)\n\
\nIf you've watched The Big Short (2015), you might remember the Credit Default\
\ Swaps, because those are what Michael Burry and others purchased to bet against\
\ the Subprime Mortgage CDO's.\n\nCDS's were unregulated, so \\*\\*AIG didn’t\
\ have to set aside any money to cover potential losses\\*\\*. Instead, AIG paid\
\ its employees huge cash bonuses as soon as contracts were signed in order to\
\ incentivize the sales of these derivatives. But if the CDO's later went bad,\
\ AIG would be on the hook. It paid everyone short-term gains while pushing the\
\ bill to the company itself without worrying about footing the bill if shit hit\
\ the fan. People once again were being rewarded with short-term profit to take\
\ these massive risks.\n\nAIG’s Financial Products division in London issued over\
\ $500B worth of CDS's during the bubble. Many of these CDS's were for CDO's backed\
\ by subprime mortgages.\n\nThe 400 employees of AIGFP made $3.5B between 2000\
\ and 2007. And the head of AIGFP personally made $315M. \n\n# 1.6 The Crash And\
\ Consumption Of Banks To Consolidate Power\n\nBy late 2006, Goldman Sachs took\
\ it one step further. It didn’t just sell toxic CDO's, it started actively betting\
\ against them at the same time it was telling customers that they were high-quality\
\ investments.\n\nGoldman Sachs would purchase CDS's from AIG and bet against\
\ CDO's it didn’t own, and got paid when those CDO's failed. Goldman bought at\
\ least $22B in CDS's from AIG, and it was so much that Goldman realized AIG itself\
\ might go bankrupt (which later on it would and the Government had to bail them\
\ out). So Goldman spent $150M insuring themselves against AIG’s potential collapse.\
\ They purchased CDS's against AIG.\n\n[Inside Job \\(2010\\) Payment From AIG\
\ To Goldman Sachs If CDO's Failed](https://preview.redd.it/m54zv03yfi571.png?width=1411&format=png&auto=webp&s=f6cb605b4c9b36c22e60cd8205b80bd6ac770fac)\n\
\nThen in 2007, Goldman went even further. They started selling CDO's specifically\
\ designed so that the more money their customers lost, the more Goldman Sachs\
\ made.\n\nMany other banks did the same. They created shitty CDO's, sold them,\
\ while simultaneously bet that they would fail with CDS's. All of these CDO's\
\ were sold to customers as “safe” investments because of the complicit Rating\
\ Agencies.\n\nThe three rating agencies, Moody’s, S&P and Fitch, made billions\
\ of dollars giving high ratings to these risky securities. Moody’s, the largest\
\ ratings agency, quadrupled its profits between 2000 and 2007. The more AAA's\
\ they gave out, the higher their compensation and earnings were for the quarter.\
\ AAA ratings mushroomed from a handful in 2000 to thousands by 2006. Hundreds\
\ of billions of dollars worth of CDO's were being rated AAA per year. When it\
\ all collapsed and the ratings agencies were called before Congress, the rating\
\ agencies expressed that it was “their opinion” of the rating in order to weasel\
\ their way out of blame. Despite knowing that they were toxic and did not deserve\
\ anything above 'junk' rating.\n\n[Inside Job \\(2010\\) Ratings Agencies Profits](https://preview.redd.it/tto0v644gi571.png?width=1332&format=png&auto=webp&s=f4361dcc23801691d46ec88b241c7d5fa56e2aaf)\n\
\n[Inside Job \\(2010\\) - Insane Increase of AAA Rated CDOs](https://preview.redd.it/91dpnu78gi571.png?width=1259&format=png&auto=webp&s=1f196573f47a757a8bcca8b9e712c537be84cbe2)\n\
\nBy 2008, home foreclosures were skyrocketing. Home buyers in the subprime loans\
\ were defaulting on their payments. Lenders could no longer sell their loans\
\ to the investment banks. And as the loans went bad, dozens of lenders failed.\
\ The market for CDO's collapsed, leaving the investment banks holding hundreds\
\ of billions of dollars in loans, CDO's, and real estate they couldn’t sell.\
\ Meanwhile, those who purchased up CDS's were knocking at the door to be paid.\n\
\nIn March 2008, Bear Stearns ran out of cash and was acquired for $2 a share\
\ by JPMorgan Chase. The deal was backed by $30B in emergency guarantees by the\
\ Fed Reserve. This was just one instance of a bank getting consumed by a larger\
\ entity.\n\n[https:\\/\\/www.history.com\\/this-day-in-history\\/bear-stearns-sold-to-j-p-morgan-chase](https://preview.redd.it/gbgc30vlhi571.png?width=873&format=png&auto=webp&s=74def34d1783c5e3195492913370e6ae65670301)\n\
\nAIG, Bear Stearns, Lehman Bros, Fannie Mae, and Freddie Mac, were all AA or\
\ above rating days before either collapsing or being bailed out. Meaning they\
\ were 'very secure', yet they failed.\n\nThe Fed Reserve and Big Banks met together\
\ in order to discuss bailouts for different banks, and they decided to let Lehman\
\ Brothers fail as well.\n\nThe Government also then took over AIG, and a day\
\ after the takeover, asked Congress for $700B in bailouts for big banks.\
\ At this point in time, \\*\\*the person in charge of handling the financial\
\ crisis, Henry Paulson, former CEO of Goldman Sachs\\*\\*, worked with the chairman\
\ of the Federal Reserve to force AIG to pay Goldman Sachs some of its bailout\
\ money at 100 cents on the dollar, meaning there was no negotiation of lower\
\ prices. \\*\\*Conflict of interest much?\\*\\*\n\nThe Fed and Henry Paulson\
\ also forced AIG to surrender their right to sue Goldman Sachs and other banks\
\ for fraud.\n\n\\*\\*This is but a small glimpse of the consolidation of power\
\ in big banks from the 2008 crash. They let others fail and scooped up their\
\ assets in the crisis.\\*\\*\n\n\\*\\*After the crash of 2008, big banks are\
\ more powerful and more consolidated than ever before. And the DTC, ICC, OCC\
\ rules are planning on making that worse through the auction and wind-down plans\
\ where big banks can once again consume other entities that default.\\*\\*\n\n\
# 1.7 The Can-Kick To Continue The Game Of Derivative Market Greed\n\nAfter the\
\ crisis, the financial industry worked harder than ever to fight reform. The\
\ financial sector, as of 2010, employed over 3000 lobbyists. More than five for\
\ each member of Congress. Between 1998 and 2008 the financial industry spent\
\ over $5B on lobbying and campaign contributions. And ever since the crisis,\
\ they’re spending even more money.\n\nPresident Barack Obama campaigned heavily\
\ on \"Change\" and \"Reform\" of Wall Street, but when in office, nothing substantial\
\ was passed. But this goes back for decades - the Government has been in the\
\ pocket of the rich for a long time, both parties, both sides, and their influence\
\ through lobbying undoubtedly prevented any actual change from occurring.\n\n\
So their game of playing the derivative market was green-lit to still run rampant\
\ following the 2008 crash and mass bailouts from the Government at the expense\
\ of taxpayers.\n\nThere's now more consolidation of banks, more consolidation\
\ of power, more years of deregulation, and over a decade that they used to continue\
\ the game. And just like in 2008, it's happening again. We're on the brink of\
\ another market crash and potentially a global financial crisis.\n\
\n# 2. The New CDO Game, And How COVID Uppercut To The System\n\n# 2.1 Abuse Of\
\ Commercial Mortgage Backed Securities\n\nIt's not just /u/atobitt's \"House\
\ Of Cards\" where the US Treasury Market has been abused. It is abuse of many\
\ forms of collateral and securities this time around.\n\nIt's the \\*\\*same\
\ thing\\*\\* as 2008, but much worse due to even higher amounts of leverage in\
\ the system on top of massive amounts of liquidity and potential inflation from\
\ stimulus money of the COVID crisis.\n\nHere's an excerpt from [The Bigger Short:\
\ Wall Street's Cooked Books Fueled The Financial Crisis of 2008. It's Happening\
\ Again](https://theintercept.com/2021/04/20/wall-street-cmbs-dollar-general-ladder-capital/):\n\
\n>A longtime industry analyst has uncovered creative accounting on a startling\
\ scale in the commercial real estate market, in ways similar to the “liar loans”\
\ handed out during the mid-2000s for residential real estate, according to financial\
\ records examined by the analyst and reviewed by The Intercept. A recent, large-scale\
\ academic study backs up his conclusion, \\*\\*finding that banks such as Goldman\
\ Sachs and Citigroup have systematically reported erroneously inflated income\
\ data that compromises the integrity of the resulting securities.\\*\\* \n> \n\
>... \n> \n>The analyst’s findings, first reported by ProPublica last year, are\
\ the subject of a whistleblower complaint he filed in 2019 with the Securities\
\ and Exchange Commission. Moreover, the analyst has identified complex financial\
\ machinations by one financial institution, one that both issues loans and manages\
\ a real estate trust, that may ultimately help one of its top tenants — the low-cost,\
\ low-wage store Dollar General — flourish while devastating smaller retailers.\
\ \n> \n>This time, the issue is not a bubble in the housing market, \\*\\*but\
\ apparent widespread inflation of the value of commercial businesses, on which\
\ loans are based.\\*\\* \n> \n>... \n> \n>\\*\\*Now it may be happening again\\\
*\\* — this time not with residential mortgage-backed securities, based on loans\
\ for homes, \\*\\*but commercial mortgage-backed securities, or CMBS, based on\
\ loans for businesses.\\*\\* And this industrywide scheme is colliding with a\
\ collapse of the commercial real estate market amid the pandemic, which has \\\
*\\*business tenants across the country unable to make their payments.\\*\\*\n\
\nThey've been abusing Commercial Mortgage Backed Securities (CMBS) this time\
\ around, and potentially have still been abusing other forms of collateral -\
\ they might still be hitting MBS as well as treasury bonds per /u/atobitt's DD.\n\
\nJohn M. Griffin and Alex Priest released a study last November. They sampled\
\ almost 40,000 CMBS loans with a market capitalization of $650 billion underwritten\
\ from the beginning of 2013 to the end of 2019. \\*\\*Their findings were that\
\ large banks had 35% or more loans exhibiting 5% or greater income overstatements.\\\
*\\*\n\nThe below chart shows the overstatements of the biggest problem-making\
\ banks. The two bars compare samples drawn from 2013-2015 data against samples\
\ drawn from 2016-2019 data. Almost every single bank's overstatements increased\
\ over time.\n\n>Unintentional overstatement should\
\ have occurred at random times. Or if lenders were assiduous and the overstatement\
\ was unwitting, one might expect it to diminish over time as the lenders discovered\
\ their mistakes. \\*\\*Instead, with almost every lender, the overstatement\\\
*\\* \\*\\*\\*increased\\*\\*\\* \\*\\*as time went on\\*\\*. - [Source](https://theintercept.com/2021/04/20/wall-street-cmbs-dollar-general-ladder-capital/)\n\
\n[https:\\/\\/theintercept.com\\/2021\\/04\\/20\\/wall-street-cmbs-dollar-general-ladder-capital\\\
/](https://preview.redd.it/5xmcu9hwhi571.png?width=846&format=png&auto=webp&s=66f636574bd66afd3512b9587981e4caaa381cf3)\n\
\nSo what does this mean? \\*\\*It means they've once again been handing out subprime\
\ loans (predatory loans). But this time to businesses through Commercial Mortgage\
\ Backed Securities.\\*\\*\n\nJust like Mortgage-Backed Securities from 2000 to\
\ 2007, the loaners will go around, hand out loans to businesses, and rake in\
\ the profits while having no concern over the potential for the subprime loans\
\ failing.\n\n# 2.2 COVID's Uppercut Sent Them Scrambling\n\nThe system was propped\
\ up to fail just like in the 2000-2007 Housing Market Bubble. Now we are in\
\ a speculative bubble across the entire market, along with the commercial real\
\ estate bubble, due to continued mass leverage abuse worldwide.\n\nHell - also in Crypt0currencies\
\ that were introduced after the 2008 crash. \\*\\*Did you know that you can get\
\ over 100x leverage in crypt0 right now? Imagine how terrifying that crash could\
\ be if the other markets fail.\\*\\*\n\nThere is SO. MUCH. LEVERAGE. ABUSE. IN.\
\ THE. WORLD. All it takes is one fatal blow to bring it all down - \\*\\*and\
\ it sure as hell looks like COVID was that uppercut to send everything into a\
\ death spiral.\\*\\*\n\nWhen COVID hit, many people were left without jobs. Others\
\ had less pay from the jobs they kept. It rocked the financial world and it was\
\ so unexpected. Apartment residents would now become delinquent, causing the\
\ apartment complexes to become delinquent. Business owners would be hurting for\
\ cash to pay their mortgages as well due to lack of business. The subprime loans\
\ all started to become a really big issue.\n\nDelinquency rates of Commercial\
\ Mortgages started to \\*\\*skyrocket\\*\\* when the COVID crisis hit. They even\
\ surpassed 2008 levels in March of 2020. Remember what happened in 2008 when\
\ this occurred? \\*\\*When delinquency rates went up on mortgages in 2008, the\
\ CDO's of those mortgages began to fail. But, this time, they can-kicked it because\
\ COVID caught them all off guard.\\*\\*\n\n[https:\\/\\/theintercept.com\\/2021\\\
/04\\/20\\/wall-street-cmbs-dollar-general-ladder-capital\\/](https://preview.redd.it/cqbceix0ii571.png?width=848&format=png&auto=webp&s=da81781094a31ae1293b019c4e24f68dfdccc634)\n\
\n# 2.3 Can-Kick Of COVID To Prevent CDO's From Defaulting Before Being Ready\n\
\nCOVID sent them \\*\\*Scrambling\\*\\*. They could not allow these CDO's to\
\ fail just yet, because they wanted to get their rules in place to help them\
\ consume other failing entities at a whim. \n\nLike in 2008, they wanted to not\
\ only protect themselves when the nuke went off from these decades of derivatives\
\ abuse, they wanted to be able to scoop up the competition easily. That is when\
\ the DTC, ICC, and OCC began drafting their auction and wind-down plans.\n\n\
In order to buy time, they began tossing out emergency relief \"protections\"\
\ for the economy. Such as preventing mortgage defaults which would send their\
\ CDO's tumbling. \\*\\*This protection ends on June 30th, 2021\\*\\*.\n\nAnd\
\ guess what? \\*\\*Many people are still at risk of being delinquent\\*\\*. [This\
\ article](https://therealdeal.com/issues\\_articles/defusing-the-forbearance-time-bomb/)\
\ was posted just \\*\\*yesterday\\*\\*. The moment these protection plans lift,\
\ we can see a surge in foreclosures as delinquent payments have accumulated over\
\ the past year.\n\nWhen everyone, including small business owners who were attacked\
\ with predatory loans, begin to default from these emergency plans expiring,\
\ it can lead to the CDO's themselves collapsing. \\*\\*Which is exactly what\
\ triggered the 2008 recession\\*\\*.\n\n[https:\\/\\/www.housingwire.com\\/articles\\\
/mortgage-forbearance-drops-as-expiration-date-nears\\/](https://preview.redd.it/b68fsf5aii571.png?width=945&format=png&auto=webp&s=daa8c725185480d988802023a27291ee782b5c5f)\n\
\n# 2.4 SLR Requirement Exemption - Why The Reverse Repo Is Blowing Up\n\nAnother\
\ big issue exposed from COVID is when SLR requirements were relaxed during the\
\ pandemic. They had to pass a quick measure to protect the banks from defaulting\
\ in April of 2020.\n\n>In a brief announcement, the Fed said it would allow a\
\ change to the \\*\\*supplementary leverage ratio to expire March 31\\*\\*. The\
\ initial move, announced April 1, 2020, \\*\\*allowed banks to exclude Treasurys\
\ and deposits with Fed banks from the calculation of the leverage ratio\\*\\\
*. - [Source](https://www.cnbc.com/2021/03/19/the-fed-will-not-extend-a-pandemic-crisis-rule-that-had-allowed-banks-to-relax-capital-levels.html)\n\
\nWhat can you take from the above?\n\n\\*\\*SLR is based on the banks' deposits\
\ with the Fed itself. It is the treasuries and deposits that the banks have on\
\ the Fed's balance sheet. Banks have an 'account block' on the Fed's balance\
\ sheet that holds treasuries and deposits. The SLR pandemic rule allowed them\
\ to neglect these treasuries and deposits from their SLR calculation, and it\
\ boosted their SLR value, allowing them to survive defaults.\\*\\*\n\nThis is\
\ a \\*\\*big\\*\\*, \\*\\*big\\*\\*, \\*\\*BIG\\*\\* sign that \\*\\*the banks\
\ are way overleveraged by borrowing tons of money just like in 2008.\\*\\*\n\n\
The SLR is the \"Supplementary Leverage Ratio\", and the exemption was enacted quickly\
\ so that massively leveraged banks wouldn't default for failing to maintain enough\
\ equity.\n\n>The supplementary leverage ratio is the US implementation of the\
\ Basel III Tier 1 \\*\\*leverage ratio\\*\\*, with which \\*\\*banks calculate\
\ the amount of common equity capital they must hold relative to their total leverage\
\ exposure\\*\\*. \\*\\*Large US banks must hold 3%\\*\\*. \\*\\*Top-tier bank\
\ holding companies must also hold an extra 2% buffer, for a total of 5%\\*\\\
*. The SLR, which does not distinguish between assets based on risk, is conceived\
\ as a backstop to risk-weighted capital requirements. - [Source](https://www.risk.net/definition/supplementary-leverage-ratio-slr)\n\
\n[Here is a snapshot of their SLRs](https://www.fool.com/investing/2020/07/26/which-of-the-large-us-banks-is-most-leveraged.aspx)\
\ from earlier this year. The key is to have \\*\\*high SLR, above 5%, as a top-tier\
\ bank\\*\\*:\n\n|Bank|Supplementary Leverage Ratio (SLR)|\n|:-|:-|\n|JP Morgan\
\ Chase|6.8%|\n|Bank Of America|7%|\n|Citigroup|6.7%|\n|Goldman Sachs|6.7%|\n\
|Morgan Stanley|7.3%|\n|Bank of New York Mellon|8.2%|\n|State Street|8.3%|\n\n\
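To make the mechanics concrete, here's a rough sketch of the SLR calculation and\
\ what the exemption did (my own made-up numbers; the real exposure measure is more\
\ involved than this):\n\n```python\n# Rough SLR sketch: SLR = Tier 1 capital / total leverage exposure.\ntier1_capital = 60.0                 # made-up, in $B\n\
treasuries_and_fed_deposits = 400.0  # made-up, in $B\nother_exposure = 800.0               # made-up, in $B\n\n\
normal_slr = tier1_capital / (other_exposure + treasuries_and_fed_deposits)\n\
exempt_slr = tier1_capital / other_exposure  # pandemic rule: Treasurys/Fed deposits excluded\n\n\
print(f\"{normal_slr:.1%}\")  # 5.0% -> right at the 5% top-tier minimum\n\
print(f\"{exempt_slr:.1%}\")  # 7.5% -> comfortably above it, thanks to the exemption\n```\n\n\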
The SLR protection ended on March 31, 2021. Guess what started to happen just\
\ after?\n\n\\*\\*The reverse repo market started to explode. This is VERY unusual\
\ behavior because it is not at a quarter-end where quarter-ends have significant\
\ strain on the economy. The build-up over time implies that there is significant\
\ strain on the market AS OF ENTERING Q2 (April 1st - June 30th).\\*\\*\n\n[https:\\\
/\\/fred.stlouisfed.org\\/series\\/RRPONTSYD](https://preview.redd.it/ijp4wkxdii571.png?width=1455&format=png&auto=webp&s=46f67d7efcc98ee475ba27fa41850fbf5d894064)\n\
\n\\*\\*Speculation: SLR IS DEPENDENT ON THEIR DEPOSITS WITH THE FED ITSELF. THEY\
\ NEED TO EXTRACT TREASURIES OVER NIGHT TO KEEP THEM OFF THE FED'S BALANCE SHEETS\
\ TO PREVENT THEMSELVES FROM FAILING SLR REQUIREMENTS AND DEFAULTING DUE TO MASS\
\ OVERLEVERAGE. EACH BANK HAS AN ACCOUNT ON THE FED'S BALANCE SHEET, WHICH IS\
\ WHAT SLR IS CALCULATED AGAINST. THIS IS WHY IT IS EXPLODING. THEY ARE ALL STRUGGLING\
\ TO MEET SLR REQUIREMENTS.\\*\\*\n\n# 2.5 DTC, ICC, OCC Wind-Down and Auction\
\ Plans; Preparing For More Consolidation Of Power\n\nWe've seen some interesting\
\ rules from the DTC, ICC, and OCC. For the longest time we thought this was all\
\ surrounding GameStop. Guess what. \\*\\*They aren't all about GameStop\\*\\\
*. Some of them are, but not all of them.\n\n\\*\\*They are furiously passing\
\ these rules because the COVID can-kick can't last forever. The Fed is dealing\
\ with the potential of runaway inflation from COVID stimulus and they can't allow\
\ the overleveraged banks to can-kick any more. They need to resolve this as soon\
\ as possible. June 30th could be the deadline because of the potential for CDO's\
\ to begin collapsing.\\*\\*\n\nLet's revisit a few of these rules. The most important\
\ ones, in my opinion, because they shed light on the bullshit they're trying\
\ to do once again: Scoop up competitors at the cheap, and protect themselves\
\ from defaulting as well.\n\n\\* \\*\\*DTC-004:\\*\\* Wind-down and auction plan.\
\ - [Link](https://www.sec.gov/rules/sro/dtc/2021/34-91429.pdf)\n\\* \\*\\*ICC-005:\\\
*\\* Wind-down and auction plan. - [Link](https://www.sec.gov/rules/sro/icc/2021/34-91806.pdf)\n\
\\* \\*\\*OCC-004:\\*\\* Auction plan. Allows third parties to join in. - [Link](https://www.sec.gov/rules/sro/occ/2021/34-91935.pdf)\n\
\\* \\*\\*OCC-003\\*\\*: Shielding plan. Protects the OCC. - [Link](https://www.sec.gov/rules/sro/occ/2021/34-92038.pdf)\n\
\nEach of these plans, in brief summary, allows each branch of the market to protect\
\ themselves in the event of major defaults of members. They also \\*\\*allow\
\ members to scoop up assets of defaulting members\\*\\*.\n\nWhat was that? Scooping\
\ up assets? \\*\\*In other words it is more concentration of power\\*\\*. \\\
*\\*Less competition\\*\\*.\n\nI would not be surprised if many small and large\
\ Banks, Hedge Funds, and Financial Institutions evaporate and get consumed after\
\ this crash and we're left with just a select few massive entities. That is,\
\ after all, exactly what they're planning for.\n\nThey could not allow the COVID\
\ crash to pop their massive speculative derivative bubble so soon. It came too\
\ sudden for them to not all collapse instead of just a few of them. It would\
\ have obliterated the entire economy even more so than it will once this bomb\
\ is finally let off. They needed more time to prepare so that they could feast\
\ when it all comes crashing down.\n\n# 2.6 Signs Of Collapse Coming - ICC-014\
\ - Incentives For Credit Default Swaps\n\nA comment on this subreddit made me\
\ revisit a rule passed by the ICC. It flew under the radar and is another sign\
\ for a crash coming.\n\nThis is [ICC-014](https://www.sec.gov/rules/sro/icc/2021/34-91922.pdf).\
\ Passed and effective as of June 1st, 2021.\n\nSeems boring at first. Right?\
\ That's why it flew under the radar?\n\nBut now that you know the causes of the\
\ 2008 market crash and how toxic CDO's were packaged together, and then CDS's\
\ were used to bet against those CDO's, check out what ICC-014 is doing \\*\\\
*as of June 1st\\*\\*.\n\n[ICC-014 Proposed Discounts On Credit Default Index\
\ Swaptions](https://preview.redd.it/phrxcouvii571.png?width=731&format=png&auto=webp&s=469560cf06458b51b1b5439d84062e9f6e04bda4)\n\
\n\\*\\*They are providing incentive programs to purchase Credit Default Swap\
\ Indexes. These are like standard CDS's, but packaged together like an index.\
\ Think of it like an index fund.\\*\\*\n\n\\*\\*This is allowing them to bet\
\ against a wide range of CDO's or other entities at a cheaper rate. Buyers can\
\ now bet against a wide range of failures in the market. They are allowing upwards\
\ of 25% discounts.\\*\\*\n\nThere's many more indicators that are pointing to\
\ a market collapse. But I will leave that to you to investigate more. Here is\
\ quite a scary compilation of charts relating the current market trends to the\
\ crashes of Black Monday, The Internet Bubble, The 2008 Housing Market Crash,\
\ and Today.\n\n[Summary of Recent Warnings Re Intermediate Trend In Equities](https://preview.redd.it/y4reiv86hi571.jpg?width=550&format=pjpg&auto=webp&s=8845b7b90adf28409772483c6eeeef1763bbaaaf)\n\
\n​\n\n​\n\n​\n\n# 3. The Failure Of The 1% - How GameStop\
\ Can Deal A Fatal Blow To Wealth Inequality\n\n# 3.1 GameStop Was Never Going\
\ To Cause The Market Crash\n\nGameStop was meant to die off. The rich bet against\
\ it many times over, and it was on the brink of bankruptcy before many conditions\
\ led it to where it is today.\n\nIt was never going to cause the market crash.\
\ And it never will cause the crash. The short squeeze is a result of high abuse\
\ of the derivatives market over the past decade, where Wall Street's abuse of\
\ this market has primed the economy for another market crash on their own.\n\n\
We can see this because when COVID hit, GameStop was a non-issue in the market.\
\ The CDO market around CMBS was about to collapse on its own because of the instantaneous\
\ recession which left mortgage owners delinquent.\n\nIf anyone, be it the media,\
\ the US Government, or others, try to blame this crash on GameStop or anything\
\ \\*\\*other than the Banks and Wall Street\\*\\*, \\*\\*they are WRONG.\\*\\\
*\n\n# 3.2 The Rich Are Trying To Kill GameStop. They Are Terrified\n\nIn January,\
\ the SI% was reported to be 140%. But it is very likely that it was \\*\\*underreported\
\ at that time\\*\\*. Maybe it was 200% back then. 400%. 800%. Who knows. From\
\ the above you can hopefully gather that Wall Street \\*\\*takes on massive risks\
\ all the time, they do not care as long as it churns them short-term profits\\\
*\\*. There is loads of evidence pointing to shorts never covering by hiding their\
\ SI% through malicious options practices, and manipulating the price every step\
\ of the way.\n\nThe conditions that led GameStop to where it is today are a miracle\
\ in itself, and the support of retail traders has led to expose a fatal mistake\
\ of the rich. \\*\\*Because a short position has infinite loss potential\\*\\\
*. There is SO much money in the world, especially in the derivatives market.\n\
\nThis should scream to you that any price target that \\*\\*you\\*\\* think is\
\ low, could very well be extremely low in \\*\\*YOUR\\*\\* perspective. You might\
\ just be accustomed to thinking \"$X price floor is too much money. There's no\
\ way it can hit that\". I used to think that too, until I dove deep into this\
\ bullshit.\n\nThe market crashing no longer was a matter of simply scooping up\
\ defaulters, their assets, and consolidating power. The rich now have to worry\
\ about the potential of \\*\\*infinite\\*\\* losses from GameStop and possibly\
\ other meme stocks with high price floor targets some retail have.\n\nIt's not\
\ a fight against Melvin / Citadel / Point72. \\*\\*It's a battle against the\
\ entire financial world\\*\\*. There is even speculation from multiple people\
\ that the Fed is being complicit right now in helping suppress GameStop.\
\ \\*\\*Their whole game is at risk here.\\*\\*\n\n\\*\\*Don't you think they'd\
\ fight tooth-and-nail to suppress this and try to get everyone to sell?\\*\\\
*\n\n\\*\\*That they'd pull every trick in the book to make you think that they've\
\ covered?\\*\\*\n\nThe amount of money they could lose is unfathomable.\n\nWith\
\ the collapsing SI%, it is mathematically impossible for the squeeze to have\
\ happened - it's mathematically impossible for them to have covered. /u/atobitt\
\ also discusses this in [House of Cards Part 2](https://www.reddit.com/r/Superstonk/comments/nlwaxv/house\\\
_of\\_cards\\_part\\_2/).\n\n[https:\\/\\/www.thebharatexpressnews.com\\/short-squeeze-could-save-gamestop-investors-a-third-time\\\
/](https://preview.redd.it/6hge0pxfhi571.png?width=871&format=png&auto=webp&s=aab736cc279cc727524d2cf96384ea3e33109250)\n\
\nAnd in regards to all the other rules that look good for the MOASS - I see them\
\ in a negative light.\n\nThey are passing NSCC-002/801, DTC-005, and others,\
\ in order to prevent a GameStop situation from \\*\\*ever\\*\\* occurring again.\n\
\nThey realized how much power retail could have from piling into a short squeeze\
\ play. These new rules will snap new emerging short squeezes instantly if the\
\ conditions of a short squeeze ever occur again. There will \\*\\*never\\*\\\
* be a GameStop situation after this.\n\nIt's their game after all. They've been\
\ abusing the derivative market game for decades and GameStop is a huge threat.\
\ It was supposed to be, \"crash the economy and run with the money\". Not \"\
crash the economy and pay up to retail\". But GameStop was a flaw exposed by their\
\ greed, the COVID crash, and the quick turn-around of the company to take it\
\ away from the brink of bankruptcy. \n\nThe rich are now at risk of losing that\
\ money and insane amounts of cash that they've accumulated over the years from\
\ causing the Internet Bubble Crash of 2000, and the Housing Market Crash of 2008.\n\
\nSo, yeah, I'm going to be fucking greedy."
- "​\n\n# 0. Preface\n\nI am not a financial advisor, and I do not provide\
\ financial advice. Many thoughts here are my opinion, and others can be speculative.\n\
\nTL;DR - \\*\\*(Though I think you REALLY should consider reading because it\
\ is important to understand what is going on\\*\\*):\n\n\\* The market crash\
\ of 2008 never finished. It was can-kicked and the same people who caused the\
\ crash have \\*\\*still\\*\\* been running rampant doing the \\*\\*same\\*\\\
* \\*\\*bullshit in the derivatives market\\*\\* as that market continues to be\
\ unregulated. They're profiting off of short-term gains at the risk of killing\
\ their institutions and potentially the global economy. \\*\\*Only this time\
\ it is much, much worse.\\*\\*\n\\* The bankers abused smaller amounts of leverage\
\ for the 2008 bubble and have since abused much higher amounts of leverage -\
\ creating an even larger speculative bubble. Not just in the stock market and\
\ derivatives market, but also in the crypt0 market, upwards of 100x leverage.\n\
\\* COVID came in and rocked the economy to the point where the Fed is now pinned\
\ between a rock and a hard place. In order to buy more time, the government triggered\
\ a flurry of protective measures, such as mortgage forbearance, expiring end\
\ of Q2 on June 30th, 2021, and SLR exemptions, which expired March 31, 2021.\
\ \\*\\*The market was going to crash regardless. GME was and never will be the\
\ reason for the market crashing.\\*\\*\n\\* The rich made a fatal error in \\\
*\\*way\\*\\* overshorting stocks. There is a potential for their decades of sucking\
\ money out of taxpayers to be taken back. The derivatives market is potentially\
\ a \\*\\*$1 Quadrillion market\\*\\*. \"Meme prices\" are not meme prices. There\
\ is so much money in the world, and you are just accustomed to thinking the \"\
meme prices\" are too high to feasibly reach.\n\\* The DTC, ICC, OCC have been\
\ passing rules and regulations (auction and wind-down plans) so that they can\
\ easily eat up competition and consolidate power once again like in 2008. The\
\ people in charge, including Gary Gensler, are not your friends.\n\\* The DTC,\
\ ICC, OCC are also passing rules to make sure that retail will \\*\\*never\\\
*\\* be able to do this again. \\*\\*These rules are for the future market\
\ (post market crash) and they never want anyone to have a chance to take their\
\ game away from them again\\*\\*. These rules are not to start the MOASS. They\
\ are indirectly regulating retail so that a short squeeze condition can never\
\ occur after GME.\n\\* The COVID pandemic exposed a lot of banks through the\
\ Supplementary Leverage Ratio (SLR) where mass borrowing (leverage) almost made\
\ many banks default. Banks have account 'blocks' on the Fed's balance sheet which\
\ holds their treasuries and deposits. \\*\\*The SLR exemption made it so that\
\ these treasuries and deposits of the banks 'accounts' on the Fed's balance sheet\
\ were not calculated into SLR, which allowed them to boost their SLR until March\
\ 31, 2021 and avoid defaulting. Now, they must extract treasuries from the Fed\
\ in reverse repo to avoid defaulting from SLR requirements. This results in the\
\ reverse repo market explosion as they are scrambling to survive due to their\
\ mass leverage.\\*\\*\n\\* This is not a \"retail vs. Melvin/Point72/Citadel\"\
\ issue. This is a \"retail vs. \\*\\*Mega Banks\\*\\*\" issue. The rich, and\
\ I mean \\*\\*all of Wall Street,\\*\\* are trying \\*\\*desperately\\*\\* to\
\ shut GameStop down because it has the chance to suck out trillions if not hundreds\
\ of trillions from the game they've played for decades. They've rigged this game\
\ since the 1990's when derivatives were first introduced. \\*\\*Do you really\
\ think they, including the Fed, wouldn't pull all the stops now to try to get\
\ you to sell?\\*\\*\n\nEnd TL;DR\n\n​\n\nA ton of the information provided\
\ in this post is from the movie \\*\\*Inside Job (2010)\\*\\*. I am paraphrasing\
\ from the movie as well as taking direct quotes, so please understand that a\
\ bunch of this information is a summary of that film.\n\nI understand that \\\
*\\*The Big Short (2015)\\*\\* is much more popular here, due to it being a more\
\ Hollywood style movie, but it does not go into such great detail of the conditions\
\ that led to the crash - and how things haven't even changed. But in fact, got\
\ worse, and led us to where we are now.\n\nSeriously. \\*\\*Go\\*\\*. \\*\\*Watch\\\
*\\*. \\*\\*Inside Job\\*\\*. It is a documentary with interviews of many people,\
\ including those who were involved in the Ponzi Scheme of the derivative market\
\ bomb that led to the crash of 2008, and their continued lobbying to influence\
\ the Government to keep regulations at bay.\n\n​\n\n[Inside Job \\(2010\\\
) Promotional](https://preview.redd.it/vvdd32qkei571.png?width=776&format=png&auto=webp&s=982445a99f17af054bd351990017e364b137cf02)\n\
\n​\n\n# 1. The Market Crash Of 2008\n\n# 1.1 The Casino Of The Financial\
\ World: The Derivatives Market\n\nIt all started back in the 1990's when the\
\ \\*\\*Derivative Market\\*\\* was created. This was the opening of the literal\
\ Casino in the financial world. These are bets placed upon an underlying asset,\
\ index, or entity, and are \\*\\*very\\*\\* risky. Derivatives are contracts\
\ between two or more parties that derives its value from the performance of the\
\ underlying asset, index, or entity.\n\nOne such derivative many are familiar\
\ with are \\*\\*options\\*\\* (CALLs and PUTs). Other examples of derivatives\
\ are \\*\\*forwards\\*\\*, \\*\\*futures\\*\\*, \\*\\*swaps\\*\\*, and variations\
\ of those such as \\*\\*Collateralized Debt Obligations (CDOs)\\*\\*, and \\\
*\\*Credit Default Swaps (CDS)\\*\\*.\n\nThe potential to make money off of these\
\ trades is \\*\\*insane\\*\\*. Take your regular CALL option for example. You\
\ no longer take home a 1:1 return when the underlying stock rises or falls $1.\
\ Your returns can be amplified by magnitudes more. Sometimes you might make a\
\ 10:1 return on your investment, or 20:1, and so forth.\n\nNot only this, you\
\ can grab leverage by borrowing cash from some other entity. This allows your\
\ bets to potentially return that much more money. You can see how this gets out\
\ of hand really fast, because the amount of cash that can be gained absolutely\
\ skyrockets versus traditional investments.\n\nAttempts were made to regulate\
\ the derivatives market, but due to mass lobbying from Wall Street, regulations\
\ were continuously shut down. \\*\\*People continued to try to pass regulations,\
\ until in 2000, the\\*\\* [Commodity Futures Modernization Act](https://en.wikipedia.org/wiki/Commodity\\\
_Futures\\_Modernization\\_Act\\_of\\_2000) \\*\\*banned the regulation of derivatives\
\ outright\\*\\*.\n\nAnd of course, once the Derivatives Market was left unchecked,\
\ it was off to the races for Wall Street to begin making tons of risky bets and\
\ surging their profits.\n\nThe Derivative Market exploded in size once regulation\
\ was banned and de-regulation of the financial world continued. You can see as\
\ of 2000, the cumulative derivatives market was already out of control.\n\n[https:\\\
/\\/www.hilarispublisher.com\\/open-access\\/investment-banks-and-credit-institutions-the-ignored-and-unregulateddiversity-2151-6219-1000224.pdf](https://preview.redd.it/9igfmi69di571.png?width=578&format=png&auto=webp&s=27fefbf3443e8be528849221f2eadeb1a5c10833)\n\
\nThe Derivatives Market is big. \\*\\*Insanely big\\*\\*. Look at how it compares\
\ to \\*\\*Global Wealth\\*\\*.\n\n[https:\\/\\/www.visualcapitalist.com\\/all-of-the-worlds-money-and-markets-in-one-visualization-2020\\\
/](https://preview.redd.it/s22atssgdi571.png?width=1029&format=png&auto=webp&s=086dcebf3e710052f78b7490150203d0f8376b89)\n\
\nAt the bottom of the list are three derivatives entries, with \"Market Value\"\
\ and \"Notional Value\" called out.\n\nThe \"Market Value\" is the value of the\
\ derivative at its current trading price.\n\nThe \"Notional Value\" is the value\
\ of the derivative if it was at the strike price.\n\nE.g. A CALL option (a derivative)\
\ represents 100 shares of ABC stock with a strike of $50. Perhaps it is trading\
\ in the market at $1 per contract right now.\n\n\\* Market Value = 100 shares\
\ \\\\* $1.00 per contract = $100\n\\* Notional Value = 100 shares \\\\* $50 strike\
\ price = $5,000\n\n\\*\\*Visual Capitalist estimates that the cumulative Notional\
\ Value of derivatives is between $558 Trillion and $1 Quadrillion\\*\\*. So yeah.\
\ \\*\\*You\\*\\* are not going to cause a market crash if GME sells for millions\
\ per share. The rich are already priming the market crash through the Derivatives\
\ Market.\n\n# 1.2 CDOs And Mortgage Backed Securities\n\nDecades ago, the system\
\ of paying mortgages used to be between two parties: the buyer and the lender.\
\ Since the movement of money was between the buyer and the lender, the lender\
\ was very careful to ensure that the buyer would be able to pay off their loan\
\ and not miss payments.\n\nBut now, it's a chain.\n\n1. Home buyers will buy\
\ a loan from the lenders.\n2. The lenders will then sell those loans to Investment\
\ Banks.\n3. The Investment Banks then combine thousands of mortgages and other\
\ loans, including car loans, student loans, and credit card debt to create complex\
\ derivatives called \"\\*\\*Collateralized Debt Obligations (CDO's\\*\\*)\".\n\
4. The Investment Banks then pay Rating Agencies to rate their CDO's. This can\
\ be on a scale of \"AAA\", the best possible rating, equivalent to government-backed\
\ securities, all the way down to C/D, which are junk bonds and very risky. \\\
*\\*Many of these CDO's were given AAA ratings despite being filled with junk\\\
*\\*.\n5. The Investment Banks then take these CDO's and sell them to investors,\
\ including retirement funds, because that was the rating required for retirement\
\ funds as they would only purchase highly rated securities.\n6. Now when the\
\ homeowner pays their mortgage, the money flows directly into the investors.\
\ The investors are the main ones who will be hurt if the CDO's containing the\
\ mortgages begin to fail.\n\n[Inside Job \\(2010\\) - Flow Of Money For Mortgage\
\ Payments](https://preview.redd.it/0xtaww3ydi571.png?width=1493&format=png&auto=webp&s=f448a113043b043243efd879f174493bd33423fe)\n\
\n[https:\\/\\/www.investopedia.com\\/ask\\/answers\\/09\\/bond-rating.asp](https://preview.redd.it/uyk9ms4fei571.png?width=756&format=png&auto=webp&s=d61e9a0754b676e64a1f6c97277ba877e946fcb6)\n\
\n# 1.3 The Bubble of Subprime Loans Packed In CDOs\n\nThis system became a ticking\
\ timebomb due to this potential of free short-term gain cash. Lenders didn't\
\ care if a borrower could repay, so they would start handing out riskier loans.\
\ The investment banks didn't care if there were riskier loans, because the more\
\ CDO's sold to investors resulted in more profit. And the Rating Agencies didn't\
\ care because there were no regulatory constraints and there was no liability\
\ if their ratings of the CDO's proved to be wrong.\n\nSo they went wild and pumped\
\ out more and more loans, and more and more CDOs. Between 2000 and 2003, the\
\ number of mortgage loans made each year nearly quadrupled. They didn’t care\
\ about the quality of the mortgage - they cared about maximizing the volume and\
\ getting profit out of it.\n\nIn the early 2000s there was a huge increase in\
\ the riskiest loans - “Subprime Loans”. These are loans given to people who have\
\ low income, limited credit history, poor credit, etc. They are very at risk\
\ to not pay their mortgages. It was predatory lending, because it hunted for\
\ potential home buyers who would never be able to pay back their mortgages so\
\ that they could continue to pack these up into CDO's.\n\n[Inside Job \\(2010\\\
) - % Of Subprime Loans](https://preview.redd.it/wsr30iorei571.png?width=1447&format=png&auto=webp&s=59cf72f6eb8209d69e0a13ccf2f0127e69a45142)\n\
\nIn fact, the investment banks \\*\\*preferred\\*\\* subprime loans, because\
\ they carried higher interest rates and more profit for them.\n\n\\*\\*So the\
\ Investment Banks took these subprime loans, packaged the subprime loans up into\
\ CDO's, and many of them still received AAA ratings. These can be considered\
\ \"toxic CDO's\" because of their high ability to default and fail despite their\
\ ratings.\\*\\*\n\nPretty much \\*\\*anyone\\*\\* could get a home now. Purchases\
\ of homes and housing prices skyrocketed. It didn't matter because everyone in\
\ the chain was making money in an unregulated market.\n\n# 1.4 Short Term Greed\
\ At The Risk Of Institutional And Economic Failure\n\nIn Wall Street, annual\
\ cash bonuses started to spike. Traders and CEOs became extremely wealthy in\
\ this bubble as they continued to pump more toxic CDO's into the market. Lehman\
\ Bros. was one of the top underwriters of subprime lending and their CEO alone\
\ took home over $485 million in bonuses.\n\n[Inside Job \\(2010\\) Wall Street\
\ Bonuses](https://preview.redd.it/io87r9vxei571.png?width=1494&format=png&auto=webp&s=944300df8faf8da35d75de6f10fb951a6d230154)\n\
\nAnd it was all short-term gain, high risk, with no worries about the potential\
\ failure of your institution or the economy. When things collapsed, they would\
\ not need to pay back their bonuses and gains. They were literally risking the\
\ entire world economy for the sake of short-term profits.\n\nAND THEY EVEN TOOK\
\ IT FURTHER WITH LEVERAGE TO MAXIMIZE PROFITS.\n\nDuring the bubble from 2000\
\ to 2007, the investment banks were borrowing heavily to buy more loans and to\
\ create more CDO's. The ratio of the banks' borrowed money to their own money was\
\ their leverage. The more they borrowed, the higher their leverage. They abused\
\ leverage to continue churning profits. And are still abusing massive leverage\
\ to this day. It might even be much higher leverage today than what it was back\
\ in the Housing Market Bubble.\n\nIn 2004, Henry Paulson, the CEO of Goldman\
\ Sachs, helped lobby the SEC to relax limits on leverage, allowing the banks\
\ to sharply increase their borrowing. Basically, the SEC allowed investment banks\
\ to gamble a lot more. \\*\\*Investment banks would go up to about 33-to-1 leverage\
\ at the time of the 2008 crash\\*\\*. Which means if a 3% decrease occurred in\
\ their asset base, it would leave them insolvent. \\*\\*Henry Paulson would later\
\ become the Secretary Of The Treasury from 2006 to 2009\\*\\*. He was just one\
\ of many Wall Street executives to eventually make it into Government positions.\
\ Including the infamous Gary Gensler, the current SEC chairman, who helped block\
\ derivative market regulations.\n\n[Inside Job \\(2010\\) Leverage Abuse of 2008](https://preview.redd.it/k87x53h7fi571.png?width=1619&format=png&auto=webp&s=b12004d6bb3e70643516ef0477303f4652ccd348)\n\
\nThe borrowing exploded, the profits exploded, and it was all at the risk of\
\ obliterating their institutions and possibly the global economy. Some of these\
\ banks knew that they were \"too big to fail\" and could push for bailouts at\
\ the expense of taxpayers. Especially when they began planting their own executives\
\ in positions of power.\n\n# 1.5 Credit Default Swaps (CDS)\n\nTo add another\
\ ticking bomb to the system, AIG, the world's largest insurance company, got into\
\ the game with another type of derivative. They began selling Credit Default\
\ Swaps (CDS).\n\nFor investors who owned CDO's, CDS's worked like an insurance\
\ policy. An investor who purchased a CDS paid AIG a quarterly premium. If the\
\ CDO went bad, AIG promised to pay the investor for their losses. Think of it\
\ like insuring a car. You're paying premiums, but if you get into an accident,\
\ the insurance will pay up (some of the time at least).\n\nBut unlike regular\
\ insurance, where you can only insure your car once, \\*\\*speculators could\
\ also purchase CDS's from AIG in order to bet against CDO's they didn't own\\\
*\\*. You could suddenly have a sense of rehypothecation where fifty, one hundred\
\ entities might now have insurance against a CDO.\n\n[Inside Job \\(2010\\) Payment\
\ Flow of CDS's](https://preview.redd.it/7xoupx0ffi571.png?width=1258&format=png&auto=webp&s=869beb0d99b9fbb4108cd5af692d0a6332fd52dd)\n\
\nIf you've watched The Big Short (2015), you might remember the Credit Default\
\ Swaps, because those are what Michael Burry and others purchased to bet against\
\ the Subprime Mortgage CDO's.\n\nCDS's were unregulated, so \\*\\*AIG didn’t\
\ have to set aside any money to cover potential losses\\*\\*. Instead, AIG paid\
\ its employees huge cash bonuses as soon as contracts were signed in order to\
\ incentivize the sales of these derivatives. But if the CDO's later went bad,\
\ AIG would be on the hook. It paid everyone short-term gains while pushing the\
\ bill to the company itself without worrying about footing the bill if shit hit\
\ the fan. People once again were being rewarded with short-term profit to take\
\ these massive risks.\n\nAIG’s Financial Products division in London issued over\
\ $500B worth of CDS's during the bubble. Many of these CDS's were for CDO's backed\
\ by subprime mortgages.\n\nThe 400 employees of AIGFP made $3.5B between 2000\
\ and 2007. And the head of AIGFP personally made $315M. \n\n# 1.6 The Crash And\
\ Consumption Of Banks To Consolidate Power\n\nBy late 2006, Goldman Sachs took\
\ it one step further. It didn’t just sell toxic CDO's, it started actively betting\
\ against them at the same time it was telling customers that they were high-quality\
\ investments.\n\nGoldman Sachs would purchase CDS's from AIG and bet against\
\ CDO's it didn’t own, and got paid when those CDO's failed. Goldman bought at\
\ least $22B in CDS's from AIG, and it was so much that Goldman realized AIG itself\
\ might go bankrupt (which later on it would and the Government had to bail them\
\ out). So Goldman spent $150M insuring themselves against AIG’s potential collapse.\
\ They purchased CDS's against AIG.\n\n[Inside Job \\(2010\\) Payment From AIG\
\ To Goldman Sachs If CDO's Failed](https://preview.redd.it/m54zv03yfi571.png?width=1411&format=png&auto=webp&s=f6cb605b4c9b36c22e60cd8205b80bd6ac770fac)\n\
\nThen in 2007, Goldman went even further. They started selling CDO's specifically\
\ designed so that the more money their customers lost, the more Goldman Sachs\
\ made.\n\nMany other banks did the same. They created shitty CDO's, sold them,\
\ while simultaneously bet that they would fail with CDS's. All of these CDO's\
\ were sold to customers as “safe” investments because of the complicit Rating\
\ Agencies.\n\nThe three rating agencies, Moody’s, S&P and Fitch, made billions\
\ of dollars giving high ratings to these risky securities. Moody’s, the largest\
\ ratings agency, quadrupled its profits between 2000 and 2007. The more AAA's\
\ they gave out, the higher their compensation and earnings were for the quarter.\
\ AAA ratings mushroomed from a handful in 2000 to thousands by 2006. Hundreds\
\ of billions of dollars worth of CDO's were being rated AAA per year. When it\
\ all collapsed and the ratings agencies were called before Congress, the rating\
\ agencies expressed that it was “their opinion” of the rating in order to weasel\
\ their way out of blame, despite knowing that the securities were toxic and did not deserve\
\ anything above 'junk' rating.\n\n[Inside Job \\(2010\\) Ratings Agencies Profits](https://preview.redd.it/tto0v644gi571.png?width=1332&format=png&auto=webp&s=f4361dcc23801691d46ec88b241c7d5fa56e2aaf)\n\
\n[Inside Job \\(2010\\) - Insane Increase of AAA Rated CDOs](https://preview.redd.it/91dpnu78gi571.png?width=1259&format=png&auto=webp&s=1f196573f47a757a8bcca8b9e712c537be84cbe2)\n\
\nBy 2008, home foreclosures were skyrocketing. Home buyers in the subprime loans\
\ were defaulting on their payments. Lenders could no longer sell their loans\
\ to the investment banks. And as the loans went bad, dozens of lenders failed.\
\ The market for CDO's collapsed, leaving the investment banks holding hundreds\
\ of billions of dollars in loans, CDO's, and real estate they couldn’t sell.\
\ Meanwhile, those who purchased up CDS's were knocking at the door to be paid.\n\
\nIn March 2008, Bear Stearns ran out of cash and was acquired for $2 a share\
\ by JPMorgan Chase. The deal was backed by $30B in emergency guarantees by the\
\ Fed Reserve. This was just one instance of a bank getting consumed by a larger\
\ entity.\n\n[https:\\/\\/www.history.com\\/this-day-in-history\\/bear-stearns-sold-to-j-p-morgan-chase](https://preview.redd.it/gbgc30vlhi571.png?width=873&format=png&auto=webp&s=74def34d1783c5e3195492913370e6ae65670301)\n\
\nAIG, Bear Stearns, Lehman Bros, Fannie Mae, and Freddie Mac were all rated AA or\
\ above just days before either collapsing or being bailed out. Meaning they\
\ were 'very secure', yet they failed.\n\nThe Fed Reserve and Big Banks met together\
\ in order to discuss bailouts for different banks, and they decided to let Lehman\
\ Brothers fail as well.\n\nThe Government also then took over AIG, and a day\
\ after the takeover, asked Congress for $700B in bailouts for big banks.\
\ At this point in time, \\*\\*the person in charge of handling the financial\
\ crisis, Henry Paulson, former CEO of Goldman Sachs\\*\\*, worked with the chairman\
\ of the Federal Reserve to force AIG to pay Goldman Sachs some of its bailout\
\ money at 100-cents on the dollar. Meaning there was no negotiation of lower\
\ prices. \\*\\*Conflict of interest much?\\*\\*\n\nThe Fed and Henry Paulson\
\ also forced AIG to surrender their right to sue Goldman Sachs and other banks\
\ for fraud.\n\n\\*\\*This is but a small glimpse of the consolidation of power\
\ in big banks from the 2008 crash. They let others fail and scooped up their\
\ assets in the crisis.\\*\\*\n\n\\*\\*After the crash of 2008, big banks are\
\ more powerful and more consolidated than ever before. And the DTC, ICC, OCC\
\ rules are planning on making that worse through the auction and wind-down plans\
\ where big banks can once again consume other entities that default.\\*\\*\n\n\
# 1.7 The Can-Kick To Continue The Game Of Derivative Market Greed\n\nAfter the\
\ crisis, the financial industry worked harder than ever to fight reform. The\
\ financial sector, as of 2010, employed over 3000 lobbyists. More than five for\
\ each member of Congress. Between 1998 and 2008 the financial industry spent\
\ over $5B on lobbying and campaign contributions. And ever since the crisis,\
\ they’re spending even more money.\n\nPresident Barack Obama campaigned heavily\
\ on \"Change\" and \"Reform\" of Wall Street, but when in office, nothing substantial\
\ was passed. But this goes back for decades - the Government has been in the\
\ pocket of the rich for a long time, both parties, both sides, and their influence\
\ through lobbying undoubtedly prevented any actual change from occurring.\n\n\
So their game of playing the derivative market was green-lit to still run rampant\
\ following the 2008 crash and mass bailouts from the Government at the expense\
\ of taxpayers.\n\nThere's now more consolidation of banks, more consolidation\
\ of power, more years of deregulation, and over a decade that they used to continue\
\ the game. And just like in 2008, it's happening again. We're on the brink of\
\ another market crash and potentially a global financial crisis.\n\n​\n\
\n# 2. The New CDO Game, And How COVID Uppercut To The System\n\n# 2.1 Abuse Of\
\ Commercial Mortgage Backed Securities\n\nIt's not just /u/atobitt's \"House\
\ Of Cards\" where the US Treasury Market has been abused. It is abuse of many\
\ forms of collateral and securities this time around.\n\nIt's the \\*\\*same\
\ thing\\*\\* as 2008, but much worse due to even higher amounts of leverage in\
\ the system on top of massive amounts of liquidity and potential inflation from\
\ stimulus money of the COVID crisis.\n\nHere's an excerpt from [The Bigger Short:\
\ Wall Street's Cooked Books Fueled The Financial Crisis of 2008. It's Happening\
\ Again](https://theintercept.com/2021/04/20/wall-street-cmbs-dollar-general-ladder-capital/):\n\
\n>A longtime industry analyst has uncovered creative accounting on a startling\
\ scale in the commercial real estate market, in ways similar to the “liar loans”\
\ handed out during the mid-2000s for residential real estate, according to financial\
\ records examined by the analyst and reviewed by The Intercept. A recent, large-scale\
\ academic study backs up his conclusion, \\*\\*finding that banks such as Goldman\
\ Sachs and Citigroup have systematically reported erroneously inflated income\
\ data that compromises the integrity of the resulting securities.\\*\\* \n> \n\
>... \n> \n>The analyst’s findings, first reported by ProPublica last year, are\
\ the subject of a whistleblower complaint he filed in 2019 with the Securities\
\ and Exchange Commission. Moreover, the analyst has identified complex financial\
\ machinations by one financial institution, one that both issues loans and manages\
\ a real estate trust, that may ultimately help one of its top tenants — the low-cost,\
\ low-wage store Dollar General — flourish while devastating smaller retailers.\
\ \n> \n>This time, the issue is not a bubble in the housing market, \\*\\*but\
\ apparent widespread inflation of the value of commercial businesses, on which\
\ loans are based.\\*\\* \n> \n>... \n> \n>\\*\\*Now it may be happening again\\\
*\\* — this time not with residential mortgage-backed securities, based on loans\
\ for homes, \\*\\*but commercial mortgage-backed securities, or CMBS, based on\
\ loans for businesses.\\*\\* And this industrywide scheme is colliding with a\
\ collapse of the commercial real estate market amid the pandemic, which has \\\
*\\*business tenants across the country unable to make their payments.\\*\\*\n\
\nThey've been abusing Commercial Mortgage Backed Securities (CMBS) this time\
\ around, and potentially have still been abusing other forms of collateral -\
\ they might still be hitting MBS as well as treasury bonds per /u/atobitt's DD.\n\
\nJohn M. Griffin and Alex Priest released a study last November. They sampled\
\ almost 40,000 CMBS loans with a market capitalization of $650 billion underwritten\
\ from the beginning of 2013 to the end of 2019. \\*\\*Their findings were that\
\ large banks had 35% or more of their loans exhibiting 5% or greater income overstatements.\\
*\\*\n\nThe below chart shows the overstatements of the biggest problem-making\
\ banks. The difference in bars is between samples taken from data between 2013-2015,\
\ and then data between 2016-2019. Almost every single bank experienced a positive\
\ move up over time of overstatements.\n\n>Unintentional overstatement should\
\ have occurred at random times. Or if lenders were assiduous and the overstatement\
\ was unwitting, one might expect it to diminish over time as the lenders discovered\
\ their mistakes. \\*\\*Instead, with almost every lender, the overstatement\\\
*\\* \\*\\*\\*increased\\*\\*\\* \\*\\*as time went on\\*\\*. - [Source](https://theintercept.com/2021/04/20/wall-street-cmbs-dollar-general-ladder-capital/)\n\
\n[https:\\/\\/theintercept.com\\/2021\\/04\\/20\\/wall-street-cmbs-dollar-general-ladder-capital\\\
/](https://preview.redd.it/5xmcu9hwhi571.png?width=846&format=png&auto=webp&s=66f636574bd66afd3512b9587981e4caaa381cf3)\n\
\nSo what does this mean? \\*\\*It means they've once again been handing out subprime\
\ loans (predatory loans). But this time to businesses through Commercial Mortgage\
\ Backed Securities.\\*\\*\n\nJust like Mortgage-Backed Securities from 2000 to\
\ 2007, the loaners will go around, hand out loans to businesses, and rake in\
\ the profits while having no concern over the potential for the subprime loans\
\ failing.\n\n# 2.2 COVID's Uppercut Sent Them Scrambling\n\nThe system was propped\
\ up to fail just like from the 2000-2007 Housing Market Bubble. Now we are in\
\ a speculative bubble of the entire market along with the Commercial Market Bubble\
\ due to continued mass leverage abuse of the world. \n\nHell - also in Crypt0currencies\
\ that were introduced after the 2008 crash. \\*\\*Did you know that you can get\
\ over 100x leverage in crypt0 right now? Imagine how terrifying that crash could\
\ be if the other markets fail.\\*\\*\n\nThere is SO. MUCH. LEVERAGE. ABUSE. IN.\
\ THE. WORLD. All it takes is one fatal blow to bring it all down - \\*\\*and\
\ it sure as hell looks like COVID was that uppercut to send everything into a\
\ death spiral.\\*\\*\n\nWhen COVID hit, many people were left without jobs. Others\
\ had less pay from the jobs they kept. It rocked the financial world and it was\
\ so unexpected. Apartment residents would now become delinquent, causing the\
\ apartment complexes to become delinquent. Business owners would be hurting for\
\ cash to pay their mortgages as well due to lack of business. The subprime loans\
\ all started to become a really big issue.\n\nDelinquency rates of Commercial\
\ Mortgages started to \\*\\*skyrocket\\*\\* when the COVID crisis hit. They even\
\ surpassed 2008 levels in March of 2020. Remember what happened in 2008 when\
\ this occurred? \\*\\*When delinquency rates went up on mortgages in 2008, the\
\ CDO's of those mortgages began to fail. But, this time, they can-kicked it because\
\ COVID caught them all off guard.\\*\\*\n\n[https:\\/\\/theintercept.com\\/2021\\\
/04\\/20\\/wall-street-cmbs-dollar-general-ladder-capital\\/](https://preview.redd.it/cqbceix0ii571.png?width=848&format=png&auto=webp&s=da81781094a31ae1293b019c4e24f68dfdccc634)\n\
\n# 2.3 Can-Kick Of COVID To Prevent CDO's From Defaulting Before Being Ready\n\
\nCOVID sent them \\*\\*Scrambling\\*\\*. They could not allow these CDO's to\
\ fail just yet, because they wanted to get their rules in place to help them\
\ consume other failing entities at a whim. \n\nLike in 2008, they wanted to not\
\ only protect themselves when the nuke went off from these decades of derivatives\
\ abuse, they wanted to be able to scoop up the competition easily. That is when\
\ the DTC, ICC, and OCC began drafting their auction and wind-down plans.\n\n\
In order to buy time, they began tossing out emergency relief \"protections\"\
\ for the economy. Such as preventing mortgage defaults which would send their\
\ CDO's tumbling. \\*\\*This protection ends on June 30th, 2021\\*\\*.\n\nAnd\
\ guess what? \\*\\*Many people are still at risk of being delinquent\\*\\*. [This\
\ article](https://therealdeal.com/issues\\_articles/defusing-the-forbearance-time-bomb/)\
\ was posted just \\*\\*yesterday\\*\\*. The moment these protection plans lift,\
\ we can see a surge in foreclosures as delinquent payments have accumulated over\
\ the past year.\n\nWhen everyone, including small business owners who were attacked\
\ with predatory loans, begins to default as these emergency plans expire,\
\ it can lead to the CDO's themselves collapsing. \\*\\*Which is exactly what\
\ triggered the 2008 recession\\*\\*.\n\n[https:\\/\\/www.housingwire.com\\/articles\\\
/mortgage-forbearance-drops-as-expiration-date-nears\\/](https://preview.redd.it/b68fsf5aii571.png?width=945&format=png&auto=webp&s=daa8c725185480d988802023a27291ee782b5c5f)\n\
\n# 2.4 SLR Requirement Exemption - Why The Reverse Repo Is Blowing Up\n\nAnother\
\ big issue exposed by COVID came when SLR requirements were relaxed during the\
\ pandemic. They had to pass a quick measure to protect the banks from defaulting\
\ in April of 2020.\n\n>In a brief announcement, the Fed said it would allow a\
\ change to the \\*\\*supplementary leverage ratio to expire March 31\\*\\*. The\
\ initial move, announced April 1, 2020, \\*\\*allowed banks to exclude Treasurys\
\ and deposits with Fed banks from the calculation of the leverage ratio\\*\\\
*. - [Source](https://www.cnbc.com/2021/03/19/the-fed-will-not-extend-a-pandemic-crisis-rule-that-had-allowed-banks-to-relax-capital-levels.html)\n\
\nWhat can you take from the above? \n\n\\*\\*SLR is based on the banks deposits\
\ with the Fed itself. It is the treasuries and deposits that the banks have on\
\ the Fed's balance sheet. Banks have an 'account block' on the Fed's balance\
\ sheet that holds treasuries and deposits. The SLR pandemic rule allowed them\
\ to neglect these treasuries and deposits from their SLR calculation, and it\
\ boosted their SLR value, allowing them to survive defaults.\\*\\*\n\nThis is\
\ a \\*\\*big\\*\\*, \\*\\*big\\*\\*, \\*\\*BIG\\*\\* sign that \\*\\*the banks\
\ are way overleveraged by borrowing tons of money just like in 2008.\\*\\*\n\n\
The SLR is the \"Supplementary Leverage Ratio\" and they enacted quick to allow\
\ it so banks wouldn't fail under mass leverage for failing to maintain enough\
\ equity.\n\n>The supplementary leverage ratio is the US implementation of the\
\ Basel III Tier 1 \\*\\*leverage ratio\\*\\*, with which \\*\\*banks calculate\
\ the amount of common equity capital they must hold relative to their total leverage\
\ exposure\\*\\*. \\*\\*Large US banks must hold 3%\\*\\*. \\*\\*Top-tier bank\
\ holding companies must also hold an extra 2% buffer, for a total of 5%\\*\\\
*. The SLR, which does not distinguish between assets based on risk, is conceived\
\ as a backstop to risk-weighted capital requirements. - [Source](https://www.risk.net/definition/supplementary-leverage-ratio-slr)\n\
\n[Here is an exposure of their SLR](https://www.fool.com/investing/2020/07/26/which-of-the-large-us-banks-is-most-leveraged.aspx)\
\ from earlier this year. The key is to have \\*\\*high SLR, above 5%, as a top-tier\
\ bank\\*\\*:\n\n|Bank|Supplementary Leverage Ratio (SLR)|\n|:-|:-|\n|JP Morgan\
\ Chase|6.8%|\n|Bank Of America|7%|\n|Citigroup|6.7%|\n|Goldman Sachs|6.7%|\n\
|Morgan Stanley|7.3%|\n|Bank of New York Mellon|8.2%|\n|State Street|8.3%|\n\n\
The SLR protection ended on March 31, 2021. Guess what started to happen just\
\ after?\n\n\\*\\*The reverse repo market started to explode. This is VERY unusual\
\ behavior because it is not at a quarter-end where quarter-ends have significant\
\ strain on the economy. The build-up over time implies that there is significant\
\ strain on the market AS OF ENTERING Q2 (April 1st - June 30th).\\*\\*\n\n[https:\\\
/\\/fred.stlouisfed.org\\/series\\/RRPONTSYD](https://preview.redd.it/ijp4wkxdii571.png?width=1455&format=png&auto=webp&s=46f67d7efcc98ee475ba27fa41850fbf5d894064)\n\
\n\\*\\*Speculation: SLR IS DEPENDENT ON THEIR DEPOSITS WITH THE FED ITSELF. THEY\
\ NEED TO EXTRACT TREASURIES OVER NIGHT TO KEEP THEM OFF THE FED'S BALANCE SHEETS\
\ TO PREVENT THEMSELVES FROM FAILING SLR REQUIREMENTS AND DEFAULTING DUE TO MASS\
\ OVERLEVERAGE. EACH BANK HAS AN ACCOUNT ON THE FED'S BALANCE SHEET, WHICH IS\
\ WHAT SLR IS CALCULATED AGAINST. THIS IS WHY IT IS EXPLODING. THEY ARE ALL STRUGGLING\
\ TO MEET SLR REQUIREMENTS.\\*\\*\n\n# 2.5 DTC, ICC, OCC Wind-Down and Auction\
\ Plans; Preparing For More Consolidation Of Power\n\nWe've seen some interesting\
\ rules from the DTC, ICC, and OCC. For the longest time we thought this was all\
\ surrounding GameStop. Guess what. \\*\\*They aren't all about GameStop\\*\\\
*. Some of them are, but not all of them.\n\n\\*\\*They are furiously passing\
\ these rules because the COVID can-kick can't last forever. The Fed is dealing\
\ with the potential of runaway inflation from COVID stimulus and they can't allow\
\ the overleveraged banks to can-kick any more. They need to resolve this as soon\
\ as possible. June 30th could be the deadline because of the potential for CDO's\
\ to begin collapsing.\\*\\*\n\nLet's revisit a few of these rules. The most important\
\ ones, in my opinion, because they shed light on the bullshit they're trying\
\ to do once again: Scoop up competitors at the cheap, and protect themselves\
\ from defaulting as well.\n\n\\* \\*\\*DTC-004:\\*\\* Wind-down and auction plan.\
\ - [Link](https://www.sec.gov/rules/sro/dtc/2021/34-91429.pdf)\n\\* \\*\\*ICC-005:\\\
*\\* Wind-down and auction plan. - [Link](https://www.sec.gov/rules/sro/icc/2021/34-91806.pdf)\n\
\\* \\*\\*OCC-004:\\*\\* Auction plan. Allows third parties to join in. - [Link](https://www.sec.gov/rules/sro/occ/2021/34-91935.pdf)\n\
\\* \\*\\*OCC-003\\*\\*: Shielding plan. Protects the OCC. - [Link](https://www.sec.gov/rules/sro/occ/2021/34-92038.pdf)\n\
\nEach of these plans, in brief summary, allows each branch of the market to protect\
\ themselves in the event of major defaults of members. They also \\*\\*allow\
\ members to scoop up assets of defaulting members\\*\\*.\n\nWhat was that? Scooping\
\ up assets? \\*\\*In other words it is more concentration of power\\*\\*. \\\
*\\*Less competition\\*\\*.\n\nI would not be surprised if many small and large\
\ Banks, Hedge Funds, and Financial Institutions evaporate and get consumed after\
\ this crash and we're left with just a select few massive entities. That is,\
\ after all, exactly what they're planning for.\n\nThey could not allow the COVID\
\ crash to pop their massive speculative derivative bubble so soon. It came too\
\ sudden for them to not all collapse instead of just a few of them. It would\
\ have obliterated the entire economy even more so than it will once this bomb\
\ is finally let off. They needed more time to prepare so that they could feast\
\ when it all comes crashing down.\n\n# 2.6 Signs Of Collapse Coming - ICC-014\
\ - Incentives For Credit Default Swaps\n\nA comment on this subreddit made me\
\ revisit a rule passed by the ICC. It flew under the radar and is another sign\
\ for a crash coming.\n\nThis is [ICC-014](https://www.sec.gov/rules/sro/icc/2021/34-91922.pdf).\
\ Passed and effective as of June 1st, 2021.\n\nSeems boring at first. Right?\
\ That's why it flew under the radar?\n\nBut now that you know the causes of the\
\ 2008 market crash and how toxic CDO's were packaged together, and then CDS's\
\ were used to bet against those CDO's, check out what ICC-014 is doing \\*\\\
*as of June 1st\\*\\*.\n\n[ICC-014 Proposed Discounts On Credit Default Index\
\ Swaptions](https://preview.redd.it/phrxcouvii571.png?width=731&format=png&auto=webp&s=469560cf06458b51b1b5439d84062e9f6e04bda4)\n\
\n\\*\\*They are providing incentive programs to purchase Credit Default Swap\
\ Indexes. These are like standard CDS's, but packaged together like an index.\
\ Think of it like an index fund.\\*\\*\n\n\\*\\*This is allowing them to bet\
\ against a wide range of CDO's or other entities at a cheaper rate. Buyers can\
\ now bet against a wide range of failures in the market. They are allowing upwards\
\ of 25% discounts.\\*\\*\n\nThere's many more indicators that are pointing to\
\ a market collapse. But I will leave that to you to investigate more. Here is\
\ quite a scary compilation of charts relating the current market trends to the\
\ crashes of Black Monday, The Internet Bubble, The 2008 Housing Market Crash,\
\ and Today.\n\n[Summary of Recent Warnings Re Intermediate Trend In Equities](https://preview.redd.it/y4reiv86hi571.jpg?width=550&format=pjpg&auto=webp&s=8845b7b90adf28409772483c6eeeef1763bbaaaf)\n\
\n​\n\n​\n\n​\n\n# 3. The Failure Of The 1% - How GameStop\
\ Can Deal A Fatal Blow To Wealth Inequality\n\n# 3.1 GameStop Was Never Going\
\ To Cause The Market Crash\n\nGameStop was meant to die off. The rich bet against\
\ it many times over, and it was on the brink of bankruptcy before many conditions\
\ led it to where it is today.\n\nIt was never going to cause the market crash.\
\ And it never will cause the crash. The short squeeze is a result of high abuse\
\ of the derivatives market over the past decade, where Wall Street's abuse of\
\ this market has primed the economy for another market crash all on its own.\n\n\
We can see this because when COVID hit, GameStop was a non-issue in the market.\
\ The CDO market around CMBS was about to collapse on its own because of the instantaneous\
\ recession which left mortgage owners delinquent.\n\nIf anyone, be it the media,\
\ the US Government, or others, try to blame this crash on GameStop or anything\
\ \\*\\*other than the Banks and Wall Street\\*\\*, \\*\\*they are WRONG.\\*\\\
*\n\n# 3.2 The Rich Are Trying To Kill GameStop. They Are Terrified\n\nIn January,\
\ the SI% was reported to be 140%. But it is very likely that it was \\*\\*underreported\
\ at that time\\*\\*. Maybe it was 200% back then. 400%. 800%. Who knows. From\
\ the above you can hopefully gather that Wall Street \\*\\*takes on massive risks\
\ all the time, they do not care as long as it churns them short-term profits\\\
*\\*. There is loads of evidence pointing to shorts never covering by hiding their\
\ SI% through malicious options practices, and manipulating the price every step\
\ of the way.\n\nThe conditions that led GameStop to where it is today are a miracle\
\ in themselves, and the support of retail traders has helped expose a fatal mistake\
\ of the rich. \\*\\*Because a short position has infinite loss potential\\*\\\
*. There is SO much money in the world, especially in the derivatives market.\n\
\nThis should scream to you that any price target that \\*\\*you\\*\\* think is\
\ low, could very well be extremely low in \\*\\*YOUR\\*\\* perspective. You might\
\ just be accustomed to thinking \"$X price floor is too much money. There's no\
\ way it can hit that\". I used to think that too, until I dove deep into this\
\ bullshit.\n\nThe market crashing no longer was a matter of simply scooping up\
\ defaulters, their assets, and consolidating power. The rich now have to worry\
\ about the potential of \\*\\*infinite\\*\\* losses from GameStop and possibly\
\ other meme stocks with high price floor targets some retail have.\n\nIt's not\
\ a fight against Melvin / Citadel / Point72. \\*\\*It's a battle against the\
\ entire financial world\\*\\*. There is even speculation from multiple people\
\ that the Fed is being complicit right now in helping suppress GameStop.\
\ \\*\\*Their whole game is at risk here.\\*\\*\n\n\\*\\*Don't you think they'd\
\ fight tooth-and-nail to suppress this and try to get everyone to sell?\\*\\\
*\n\n\\*\\*That they'd pull every trick in the book to make you think that they've\
\ covered?\\*\\*\n\nThe amount of money they could lose is unfathomable.\n\nWith\
\ the collapsing SI%, it is mathematically impossible for the squeeze to have\
\ happened - it's mathematically impossible for them to have covered. /u/atobitt\
\ also discusses this in [House of Cards Part 2](https://www.reddit.com/r/Superstonk/comments/nlwaxv/house\\\
_of\\_cards\\_part\\_2/).\n\n[https:\\/\\/www.thebharatexpressnews.com\\/short-squeeze-could-save-gamestop-investors-a-third-time\\\
/](https://preview.redd.it/6hge0pxfhi571.png?width=871&format=png&auto=webp&s=aab736cc279cc727524d2cf96384ea3e33109250)\n\
\nAnd in regards to all the other rules that look good for the MOASS - I see them\
\ in a negative light.\n\nThey are passing NSCC-002/801, DTC-005, and others,\
\ in order to prevent a GameStop situation from \\*\\*ever\\*\\* occurring again.\n\
\nThey realized how much power retail could have from piling into a short squeeze\
\ play. These new rules will snap new emerging short squeezes instantly if the\
\ conditions of a short squeeze ever occur again. There will \\*\\*never\\*\\\
* be a GameStop situation after this.\n\nIt's their game after all. They've been\
\ abusing the derivative market game for decades and GameStop is a huge threat.\
\ It was supposed to be, \"crash the economy and run with the money\". Not \"\
crash the economy and pay up to retail\". But GameStop was a flaw exposed by their\
\ greed, the COVID crash, and the quick turn-around of the company to take it\
\ away from the brink of bankruptcy. \n\nThe rich are now at risk of losing that\
\ money and insane amounts of cash that they've accumulated over the years from\
\ causing the Internet Bubble Crash of 2000, and the Housing Market Crash of 2008.\n\
\nSo, yeah, I'm going to be fucking greedy."
- source_sentence: 'That''s it. Contact the Securities and Exchange Commission in
the United States. This is your money and they aren''t giving it back. They have
an online complaint form. Also ask for \*\*upvote\*\*s for visibility. This has
to be one of the worst Poloniex stories I''ve heard so far. '
sentences:
- 'So my grandparents gave me 20 lakhs FD and told me to do what I want with it.
I know most kids would just spend it, but my thought was to invest like in Reliance,
Infosys or something.
Can anyone tell me should I go all stocks or some mutual funds as well.
I am 18 if you are wondering.
I have a Zerodha account.
Edit : Thanks for your opinions.'
- "TLDR: Left some ETH in poloniex when they suspended service in my state. When\
\ they didnt un-suspend service when laws changed, I asked for my coins back.\
\ They balked at helping me. Its been 1 month, not sure if they ever plan on answering\
\ and I'm out 270 ETH (+ other alts).\n-------------------------------------------------------------------------------------------------------\n\
\nEDIT: As another user as suggested, I'd really appreciate the upvotes for visibility.\
\ I hope that this benefits the greater crypto community by educating on the dangers\
\ of keeping your funds in an exchange. I also hope this serves as a huge red\
\ flag for poloniex users and prospective users. This is how much they care about\
\ you (not even enough to have a real person answer your multiple tickets and\
\ emails).\n-------------------------------------------------------------------------------------------------------\n\
\nDOUBLE EDIT: Last night I received a message from poloniex that my ETH withdrawal\
\ has been initiated! Finally got my eth back, thanks to everyone who upvoted,\
\ I know that this thread is the only reason I got my funds back. Unfortunately,\
\ the Monero and Bitcoin that I had in poloniex were not withdrawn (despite having\
\ specified the deposit addresses and balance with “decimal precision”). I’m hoping\
\ that these withdrawals are also processed, but I’m just thankful that my ETH was\
\ returned. Although I was planning to, I have not contacted the SEC or any other\
\ regulatory body. From responses on this thread, I realize how many other people\
\ are in a similar boat. I hope that poloniex finds a way to increase its capacity\
\ to meet customer demand.\n-------------------------------------------------------------------------------------------------------\n\
\n\nI am writing this to make people aware of just how bad customer service has\
\ gotten on Poloniex. I have been a customer and user of poloniex since 2015.\
\ Most of my eth that I now own was traded for on their exchange. I live in New\
\ Hampshire, USA which means back in ~October 2016 my account was \"temporarily\
\ suspended\" due to local laws (which have long since been overruled).\n\nPoloniex\
\ announced a couple weeks in advance that they were going to be shutting down\
\ service to NH, so I prepared by withdrawing as much of my funds as possible.\
\ I was able to get most off without a problem but there was still a handful (~270)\
\ of ETH that I was unable to withdraw before the window closed. At the time I\
\ wasnt worried. The note from poloniex stated:\n\n>You will receive an email\
\ with instructions. In brief, you will have until October 6, 2016 (two weeks\
\ from the date of this notice) to close any open orders and withdraw your funds.\
\ If you aren't able to locate this email, please contact Support for assistance.\n\
>Although your account will be suspended, your data will remain on file. If you\
\ attempt to log in, you will be restricted to areas only for viewing and exporting\
\ historical data. When we resume operations in New Hampshire, you will be able\
\ to log back into your account with all of your historical data and any remaining\
\ balances intact.\n>Our legal team is working closely with the State of New Hampshire\
\ Banking Department and other regulatory agencies to verify that changes in their\
\ statutes apply to the services offered by Poloniex and to seek licenses where\
\ necessary. This is a nascent industry; as the regulations around it mature,\
\ these types of service disruptions may not be entirely avoidable, but we have\
\ been and will continue to be proactive in educating regulators and monitoring\
\ both existing laws and upcoming changes to these laws so that we can limit interruptions\
\ wherever possible.\n\n\nI assumed that as soon as the laws changed (which they\
\ did in January), poloniex would re-enable service to NH and I would be able\
\ to access and continue to trade my coins at that time.\n\nFast forward to May\
\ 23 (one month ago)...Despite the NH law changes, poloniex still had not re-enabled\
\ my account. The 270 eth that were worth ~$3,200 USD in October are now worth\
\ ~$89,100. I decided enough was enough, I needed to take my eth out of poloniex\
\ once and for all. As instructed, I contacted support by opening a ticket explaining\
\ my situation. I waited over a week and received no human response from poloniex.\
\ I submitted a new ticket 8 days after I submitted the first, this time I used\
\ a different label on the ticket. On submitting this ticket I received this automated\
\ email:\n\n>Before closing your Poloniex account you have a chance to provide\
\ proof of residency outside of the suspended Country/Estate.\n>If you would like\
\ to submit proof of residency outside of the suspended Country/Estate please\
\ reply to this message and ask for an agent to begin the process of updating\
\ your Poloniex profile, do not send any document via email or ticket, the support\
\ agent will guide you through uploading the necessary files to your Poloniex\
\ profile.\n>If you really is a resident of the suspended Country/Estate or prefer\
\ to close the account, please login to your Poloniex account and reply to this\
\ message providing the exact amount you have on balance of each currency, with\
\ decimal precision, and an address for the final withdrawal to be processed for\
\ each.\n>Please provide an address for each of the currencies, Poloniex may not\
\ trade in your behalf.\n\n\nI provided all the information they needed that same\
\ day, waited another week, still no human response.\n\nOn June 6th, I submitted\
\ another ticket stating that I really needed to withdraw these funds and that\
\ I had still not received any human response despite fulfilling all automated\
\ requests.\n\n\nI waited 5 more days before I finally got a message from what appears\
\ to have been a human:\n\n>I do sincerely apologize for the late reply, due to\
\ regulations we are unable to process the withdrawal of the whole balance at\
\ once unless you submit your full SSN number, i have cleared just cleared the\
\ SSN field on your profile, please let me know If/when you upload it and i will\
\ send it to verification immediately\n >If you prefer not to complete the profile\
\ page, your account will be re opened in a limited manner so you can withdrawal\
\ the coins yourself at a rate of less than $2000 United States Dollar equivalent\
\ per day.\n>Best regards, \n>Christopher Bologna\n>Poloniex Support\n\nThat same\
\ day, I logged in, updated my SSN field and replied to the ticket letting them\
\ know I had done so. I received this response from the same agent:\n\n>Thank\
\ you for entering your SSN, we can now proceed with your full withdrawal,Please\
\ log into your Poloniex account and let me know exactly how much you have on\
\ balance of each currency, with decimal precision, and address to send the coins\
\ to.\n\nDespite having already provided this info in a previous ticket, I provided\
\ it all again. The email response I got (June 10):\n\n>I will now forwarding\
\ you ticket to a agent that will assist in withdrawing your coins and i am afraid\
\ there is no time frame for re allowing NH customers,if you need quick access\
\ to the coins you wont be able to access them so it is strongly recommended that\
\ you send the coins to a wallet you control\n\nAnd that was the last I heard\
\ from them on this ticket, no one has touched it.\n\nTwo days ago (June 20),\
\ I created yet another ticket letting them know that I had received no help on\
\ my previous tickets and that I really needed my funds from them. I also told\
\ them I planned on creating a new ticket each day that I did not receive a response\
\ (which I have).\n\nAnyhow I'm starting to get a little dejected. I am not sure\
\ I'll ever see my ETH again. I'm wondering if I should start seeking legal counsel?\
\ Would it be worth the money and effort just to extract 270 ETH that was mine\
\ in the first place? If anyone has any ideas I'm happy to hear them. I just want\
\ this to be yet another warning to you all, get your coins off exchanges, especially\
\ ones like poloniex who seem to get shadier each day."
- "Seriously, obviously the number of members is also growing here, but somehow\
\ we escaped the MASSIVE influx of noobs and mainstream people caused by the short\
\ squeeze event and the global media attention that was put around wsb. \n\nI\
\ feel like this is one of the last bastions of what wsb once was, a time capsule\
\ of the culture that we had before everything changed and was completely ruined,\
\ at least in my opnion. And for this I'll say: cherish this sub. Do not mention\
\ it too much outside, because it might go down the same tragic path of being\
\ just a bunch of memes and total monopolization."
- source_sentence: "Start with a billion, and invest it poorly. \n\nSeriously though,\
\ just invest as much as you can into total market index funds. Start with tax\
\ advantaged accounts, and work your way up from there. I started investing about\
\ 8 years ago, when my wife and I made a combine maybe $60k a year. We've worked\
\ our way up to to around $150k a year. Current net worth is around $400k and\
\ growing. Compound interest is the most powerful force on earth."
sentences:
- 'The moderators there have made that sub private before. That’s why this sub was
created. It’ll probably open back up soon. Calm down.
Edit: It''s open again. Told you guys.'
- The new car is on avg. $40,000, and the homes people buy are usually way above
their pay grade. I see people making minimum wage buying a PS5 and fast food and
unlimited data, etc. I make alright money and am frugal, no debt, and still I'm
struggling to plan for kids, a home, and retirement. Is everyone just in massive
debt? Is this sustainable or will it cause another crash?
- Hey is anyone in here a millionaire or ever made a million dollars? What’s your
advice on how to make a million dollars? Obviously I could just save my money
for a long time and have a million in like 25 years or longer but what’s advice
on how to make a million dollars in like 10 years? I’m 25 years old and am 6 months
in to electrician apprentice
pipeline_tag: sentence-similarity
model-index:
- name: SentenceTransformer based on BAAI/bge-base-en-v1.5
results:
- task:
type: information-retrieval
name: Information Retrieval
dataset:
name: dim 768
type: dim_768
metrics:
- type: cosine_accuracy@1
value: 0.38
name: Cosine Accuracy@1
- type: cosine_accuracy@3
value: 0.63
name: Cosine Accuracy@3
- type: cosine_accuracy@5
value: 0.71
name: Cosine Accuracy@5
- type: cosine_accuracy@10
value: 0.74
name: Cosine Accuracy@10
- type: cosine_precision@1
value: 0.38
name: Cosine Precision@1
- type: cosine_precision@3
value: 0.20999999999999996
name: Cosine Precision@3
- type: cosine_precision@5
value: 0.14199999999999996
name: Cosine Precision@5
- type: cosine_precision@10
value: 0.07399999999999998
name: Cosine Precision@10
- type: cosine_recall@1
value: 0.38
name: Cosine Recall@1
- type: cosine_recall@3
value: 0.63
name: Cosine Recall@3
- type: cosine_recall@5
value: 0.71
name: Cosine Recall@5
- type: cosine_recall@10
value: 0.74
name: Cosine Recall@10
- type: cosine_ndcg@10
value: 0.5691344814006021
name: Cosine Ndcg@10
- type: cosine_mrr@10
value: 0.5131388888888889
name: Cosine Mrr@10
- type: cosine_map@100
value: 0.5248933367671109
name: Cosine Map@100
- task:
type: information-retrieval
name: Information Retrieval
dataset:
name: dim 512
type: dim_512
metrics:
- type: cosine_accuracy@1
value: 0.4
name: Cosine Accuracy@1
- type: cosine_accuracy@3
value: 0.6
name: Cosine Accuracy@3
- type: cosine_accuracy@5
value: 0.71
name: Cosine Accuracy@5
- type: cosine_accuracy@10
value: 0.76
name: Cosine Accuracy@10
- type: cosine_precision@1
value: 0.4
name: Cosine Precision@1
- type: cosine_precision@3
value: 0.19999999999999993
name: Cosine Precision@3
- type: cosine_precision@5
value: 0.14199999999999996
name: Cosine Precision@5
- type: cosine_precision@10
value: 0.07599999999999998
name: Cosine Precision@10
- type: cosine_recall@1
value: 0.4
name: Cosine Recall@1
- type: cosine_recall@3
value: 0.6
name: Cosine Recall@3
- type: cosine_recall@5
value: 0.71
name: Cosine Recall@5
- type: cosine_recall@10
value: 0.76
name: Cosine Recall@10
- type: cosine_ndcg@10
value: 0.580101489391867
name: Cosine Ndcg@10
- type: cosine_mrr@10
value: 0.5220833333333332
name: Cosine Mrr@10
- type: cosine_map@100
value: 0.5321698927355666
name: Cosine Map@100
- task:
type: information-retrieval
name: Information Retrieval
dataset:
name: dim 256
type: dim_256
metrics:
- type: cosine_accuracy@1
value: 0.37
name: Cosine Accuracy@1
- type: cosine_accuracy@3
value: 0.58
name: Cosine Accuracy@3
- type: cosine_accuracy@5
value: 0.66
name: Cosine Accuracy@5
- type: cosine_accuracy@10
value: 0.76
name: Cosine Accuracy@10
- type: cosine_precision@1
value: 0.37
name: Cosine Precision@1
- type: cosine_precision@3
value: 0.19333333333333327
name: Cosine Precision@3
- type: cosine_precision@5
value: 0.13199999999999995
name: Cosine Precision@5
- type: cosine_precision@10
value: 0.07599999999999998
name: Cosine Precision@10
- type: cosine_recall@1
value: 0.37
name: Cosine Recall@1
- type: cosine_recall@3
value: 0.58
name: Cosine Recall@3
- type: cosine_recall@5
value: 0.66
name: Cosine Recall@5
- type: cosine_recall@10
value: 0.76
name: Cosine Recall@10
- type: cosine_ndcg@10
value: 0.5567719171210964
name: Cosine Ndcg@10
- type: cosine_mrr@10
value: 0.49267460317460327
name: Cosine Mrr@10
- type: cosine_map@100
value: 0.5028075785046933
name: Cosine Map@100
- task:
type: information-retrieval
name: Information Retrieval
dataset:
name: dim 128
type: dim_128
metrics:
- type: cosine_accuracy@1
value: 0.38
name: Cosine Accuracy@1
- type: cosine_accuracy@3
value: 0.53
name: Cosine Accuracy@3
- type: cosine_accuracy@5
value: 0.65
name: Cosine Accuracy@5
- type: cosine_accuracy@10
value: 0.72
name: Cosine Accuracy@10
- type: cosine_precision@1
value: 0.38
name: Cosine Precision@1
- type: cosine_precision@3
value: 0.1766666666666666
name: Cosine Precision@3
- type: cosine_precision@5
value: 0.12999999999999998
name: Cosine Precision@5
- type: cosine_precision@10
value: 0.07199999999999998
name: Cosine Precision@10
- type: cosine_recall@1
value: 0.38
name: Cosine Recall@1
- type: cosine_recall@3
value: 0.53
name: Cosine Recall@3
- type: cosine_recall@5
value: 0.65
name: Cosine Recall@5
- type: cosine_recall@10
value: 0.72
name: Cosine Recall@10
- type: cosine_ndcg@10
value: 0.5382497269154309
name: Cosine Ndcg@10
- type: cosine_mrr@10
value: 0.4811071428571429
name: Cosine Mrr@10
- type: cosine_map@100
value: 0.4941611021001552
name: Cosine Map@100
- task:
type: information-retrieval
name: Information Retrieval
dataset:
name: dim 64
type: dim_64
metrics:
- type: cosine_accuracy@1
value: 0.3
name: Cosine Accuracy@1
- type: cosine_accuracy@3
value: 0.47
name: Cosine Accuracy@3
- type: cosine_accuracy@5
value: 0.56
name: Cosine Accuracy@5
- type: cosine_accuracy@10
value: 0.66
name: Cosine Accuracy@10
- type: cosine_precision@1
value: 0.3
name: Cosine Precision@1
- type: cosine_precision@3
value: 0.15666666666666668
name: Cosine Precision@3
- type: cosine_precision@5
value: 0.11199999999999996
name: Cosine Precision@5
- type: cosine_precision@10
value: 0.06599999999999998
name: Cosine Precision@10
- type: cosine_recall@1
value: 0.3
name: Cosine Recall@1
- type: cosine_recall@3
value: 0.47
name: Cosine Recall@3
- type: cosine_recall@5
value: 0.56
name: Cosine Recall@5
- type: cosine_recall@10
value: 0.66
name: Cosine Recall@10
- type: cosine_ndcg@10
value: 0.47068112214982427
name: Cosine Ndcg@10
- type: cosine_mrr@10
value: 0.41118650793650796
name: Cosine Mrr@10
- type: cosine_map@100
value: 0.42531289203527706
name: Cosine Map@100
---
# SentenceTransformer based on BAAI/bge-base-en-v1.5
This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [BAAI/bge-base-en-v1.5](https://huggingface.co/BAAI/bge-base-en-v1.5). It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
## Model Details
### Model Description
- **Model Type:** Sentence Transformer
- **Base model:** [BAAI/bge-base-en-v1.5](https://huggingface.co/BAAI/bge-base-en-v1.5) <!-- at revision a5beb1e3e68b9ab74eb54cfd186867f64f240e1a -->
- **Maximum Sequence Length:** 512 tokens
- **Output Dimensionality:** 768 dimensions
- **Similarity Function:** Cosine Similarity
<!-- - **Training Dataset:** Unknown -->
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->
### Model Sources
- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
### Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': True}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
(2): Normalize()
)
```
## Usage
### Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("phamkinhquoc2002/bge-base-financial-matryoshka_test")
# Run inference
sentences = [
"Start with a billion, and invest it poorly. \n\nSeriously though, just invest as much as you can into total market index funds. Start with tax advantaged accounts, and work your way up from there. I started investing about 8 years ago, when my wife and I made a combine maybe $60k a year. We've worked our way up to to around $150k a year. Current net worth is around $400k and growing. Compound interest is the most powerful force on earth.",
'Hey is anyone in here a millionaire or ever made a million dollars? What’s your advice on how to make a million dollars? Obviously I could just save my money for a long time and have a million in like 25 years or longer but what’s advice on how to make a million dollars in like 10 years? I’m 25 years old and am 6 months in to electrician apprentice',
"The new car is on avg. $40,000, and the homes people buy are usually way above their pay grade. I see people making minimum wage buying a PS5 and fast food and unlimited data, etc. I make alright money and am frugal, no debt, and still I'm struggling to plan for kids, a home, and retirement. Is everyone just in massive debt? Is this sustainable or will it cause another crash?",
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 768]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
```
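Because this model was trained with Matryoshka dimensions (see Training Details below), embeddings can plausibly be truncated for faster search at a modest quality cost. A minimal sketch, using the standard Sentence Transformers `truncate_dim` option rather than anything specific to this checkpoint:
```python
from sentence_transformers import SentenceTransformer

# Load the same checkpoint, keeping only the first 256 embedding dimensions.
# The dim_256 metrics under Evaluation suggest the quality to expect here.
model_256 = SentenceTransformer(
    "phamkinhquoc2002/bge-base-financial-matryoshka_test",
    truncate_dim=256,
)
embeddings = model_256.encode(["Where is all the money?"])
print(embeddings.shape)
# (1, 256)
```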
<!--
### Direct Usage (Transformers)
<details><summary>Click to see the direct usage in Transformers</summary>
</details>
-->
<!--
### Downstream Usage (Sentence Transformers)
You can finetune this model on your own dataset.
<details><summary>Click to expand</summary>
</details>
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
## Evaluation
### Metrics
#### Information Retrieval
* Dataset: `dim_768`
* Evaluated with [<code>InformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.InformationRetrievalEvaluator)
| Metric | Value |
|:--------------------|:-----------|
| cosine_accuracy@1 | 0.38 |
| cosine_accuracy@3 | 0.63 |
| cosine_accuracy@5 | 0.71 |
| cosine_accuracy@10 | 0.74 |
| cosine_precision@1 | 0.38 |
| cosine_precision@3 | 0.21 |
| cosine_precision@5 | 0.142 |
| cosine_precision@10 | 0.074 |
| cosine_recall@1 | 0.38 |
| cosine_recall@3 | 0.63 |
| cosine_recall@5 | 0.71 |
| cosine_recall@10 | 0.74 |
| cosine_ndcg@10 | 0.5691 |
| cosine_mrr@10 | 0.5131 |
| **cosine_map@100** | **0.5249** |
#### Information Retrieval
* Dataset: `dim_512`
* Evaluated with [<code>InformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.InformationRetrievalEvaluator)
| Metric | Value |
|:--------------------|:-----------|
| cosine_accuracy@1 | 0.4 |
| cosine_accuracy@3 | 0.6 |
| cosine_accuracy@5 | 0.71 |
| cosine_accuracy@10 | 0.76 |
| cosine_precision@1 | 0.4 |
| cosine_precision@3 | 0.2 |
| cosine_precision@5 | 0.142 |
| cosine_precision@10 | 0.076 |
| cosine_recall@1 | 0.4 |
| cosine_recall@3 | 0.6 |
| cosine_recall@5 | 0.71 |
| cosine_recall@10 | 0.76 |
| cosine_ndcg@10 | 0.5801 |
| cosine_mrr@10 | 0.5221 |
| **cosine_map@100** | **0.5322** |
#### Information Retrieval
* Dataset: `dim_256`
* Evaluated with [<code>InformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.InformationRetrievalEvaluator)
| Metric | Value |
|:--------------------|:-----------|
| cosine_accuracy@1 | 0.37 |
| cosine_accuracy@3 | 0.58 |
| cosine_accuracy@5 | 0.66 |
| cosine_accuracy@10 | 0.76 |
| cosine_precision@1 | 0.37 |
| cosine_precision@3 | 0.1933 |
| cosine_precision@5 | 0.132 |
| cosine_precision@10 | 0.076 |
| cosine_recall@1 | 0.37 |
| cosine_recall@3 | 0.58 |
| cosine_recall@5 | 0.66 |
| cosine_recall@10 | 0.76 |
| cosine_ndcg@10 | 0.5568 |
| cosine_mrr@10 | 0.4927 |
| **cosine_map@100** | **0.5028** |
#### Information Retrieval
* Dataset: `dim_128`
* Evaluated with [<code>InformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.InformationRetrievalEvaluator)
| Metric | Value |
|:--------------------|:-----------|
| cosine_accuracy@1 | 0.38 |
| cosine_accuracy@3 | 0.53 |
| cosine_accuracy@5 | 0.65 |
| cosine_accuracy@10 | 0.72 |
| cosine_precision@1 | 0.38 |
| cosine_precision@3 | 0.1767 |
| cosine_precision@5 | 0.13 |
| cosine_precision@10 | 0.072 |
| cosine_recall@1 | 0.38 |
| cosine_recall@3 | 0.53 |
| cosine_recall@5 | 0.65 |
| cosine_recall@10 | 0.72 |
| cosine_ndcg@10 | 0.5382 |
| cosine_mrr@10 | 0.4811 |
| **cosine_map@100** | **0.4942** |
#### Information Retrieval
* Dataset: `dim_64`
* Evaluated with [<code>InformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.InformationRetrievalEvaluator)
| Metric | Value |
|:--------------------|:-----------|
| cosine_accuracy@1 | 0.3 |
| cosine_accuracy@3 | 0.47 |
| cosine_accuracy@5 | 0.56 |
| cosine_accuracy@10 | 0.66 |
| cosine_precision@1 | 0.3 |
| cosine_precision@3 | 0.1567 |
| cosine_precision@5 | 0.112 |
| cosine_precision@10 | 0.066 |
| cosine_recall@1 | 0.3 |
| cosine_recall@3 | 0.47 |
| cosine_recall@5 | 0.56 |
| cosine_recall@10 | 0.66 |
| cosine_ndcg@10 | 0.4707 |
| cosine_mrr@10 | 0.4112 |
| **cosine_map@100** | **0.4253** |
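The tables above come from `InformationRetrievalEvaluator` runs at each Matryoshka dimension. A sketch of how such an evaluation can be reproduced; the `queries`, `corpus`, and `relevant_docs` mappings below are placeholders, not this card's actual evaluation split:
```python
from sentence_transformers import SentenceTransformer
from sentence_transformers.evaluation import InformationRetrievalEvaluator

model = SentenceTransformer("phamkinhquoc2002/bge-base-financial-matryoshka_test")

# Placeholder data: ids mapped to texts, and query ids to relevant corpus ids.
queries = {"q1": "Where is all the money?"}
corpus = {"d1": "All actual money is debt.", "d2": "Start with a billion."}
relevant_docs = {"q1": {"d1"}}

evaluator = InformationRetrievalEvaluator(
    queries=queries,
    corpus=corpus,
    relevant_docs=relevant_docs,
    name="dim_256",
    truncate_dim=256,  # score on the first 256 Matryoshka dimensions
)
print(evaluator(model))
```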
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Dataset
#### Unnamed Dataset
* Size: 100 training samples
* Columns: <code>positive</code> and <code>anchor</code>
* Approximate statistics based on the first 1000 samples:
| | positive | anchor |
|:--------|:-------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|
| type | string | string |
| details | <ul><li>min: 42 tokens</li><li>mean: 181.37 tokens</li><li>max: 512 tokens</li></ul> | <ul><li>min: 36 tokens</li><li>mean: 297.07 tokens</li><li>max: 512 tokens</li></ul> |
* Samples:
| positive | anchor |
|:----------|:----------|
| <code>(relix already hit on some of this)<br><br>It's hard to explain this to a five-year-old, because there are some fairly abstract concepts involved, but here goes... <br><br>All actual "money" is debt. All of it, including monetary gold, etc. (Don't argue with me yet, I'll get to that.)<br><br>Imagine a pretend world with no money, some kind of primitive villiage or something. Now let's invent paper money. You can't just print a bunch of paper that says people have to give you stuff, because nobody would honor it. But you \*could\* print IOUs. Let's walk through this...<br><br>- Let's say you're an apple-farmer and I'm a hunter. You want some meat but haven't harvested your crops yet. You say to me, "hey, go hunt me some meat and I'll give you 1/10th of my apple harvest in the fall". Fair enough, I give you meat, you owe me apples. There's probably a lot of this kind of stuff going on, in addition to normal barter. In time, standard "prices" start to emerge: a deer haunch is worth a bushel of apples, or whatever. <br><br>- Now, let's say a week later, I realize that my kid needs a new pair of shoes more than I need a bushel of apples. I come back to you and say, "Hey remember that bushel of apples you owe me? Could you write a marker, redeemable for one bushel of apples, that I can give to the shoemaker in trade for a pair of shoes?" You say okay, and we have invented a \*transferable note\*, something a lot like money. <br><br>- In time, our little villiage starts to figure out that a note redeemable for a bushel of apples can be swapped for all kinds of things. The fisherman who doesn't even like apples will accept apple-certificates in trade for fish, because he knows he can trade them to boat-builder who loves apples. In time, you can even start to hire farm-workers without giving them anything except a note promising a cut of the future harvest. <br><br>Now, you are issuing \*debt\*: a promise to provide apples. The "money" is a transferable IOU-- your workers get a promise to provide value equal to a day of farm-work, or whatever, and it's transferrable, so they can use it to buy whatever they want. The worker gets fish from the fisherman, not in exchange for doing any work or giving him anything he can use, but in exchange for an IOU that the fisherman can redeem anywhere. <br><br>So far so good. But there are a couple of forks in the road here, on the way to a realistic monetary system, that we'll address separately:<br><br>- What happens if your apple orchard is destroyed in a wildfire? Suddenly all the notes that everyone has been trading are basically wiped out. It didn't "go" anywhere, it's just gone, it doesn't exist. Real value was genuinely destroyed. There is no thermodynamic law of the conservation of monetary value-- just as you and I created it by creating transferable debt, it can also be genuinely destroyed. (We'll get back to this in a minute, it gets interesting). <br><br>- The second issue is that, in all probability, the whole town is not \*just\* trading apple-certificates. I could also issue promises to catch deer, the fisherman could issue promises of fish, and so on. This could get pretty messy, especially if you got the notion to issue more apple-certificates than you can grow: you could buy all kinds of stuff with self-issued debt that you could never repay, and the town wouldn't find out until harvest-time comes. Once again, value has been "destroyed" people worked and made stuff and gave you stuff in exchange for something that doesn't exist, and will never exist. All that stuff they made is gone, you consumed it, and there is nothing to show for it.<br><br>The above two concerns are likely to become manifest in our village sooner or later, and probably sooner. This leads to the question of \*credit\*, which is, at its most basic, a measure of \*credibility\*. Every time you issue an apple-certificate, you are \*borrowing\*, with a promise to repay from future apple-harvests. <br><br>After the first couple of town scandals, people will start taking a closer look at the credibility of the issuer. Let's say the town potato-farmer comes up with a scheme where his potato-certificates are actually issued by some credible third-party, say the town priest or whatever, who starts every growing season with a book of numbered certificates equal to the typical crop-yield and no more, and keeps half of the certificate on file, issuing the other half. Now there is an audit trail and a very credible system that is likely to earn the potato-grower a lot of credit, compared to other farmers in town. That means that the potato-grower can probably issue more notes at a better exchange rate than some murkier system. Similarly, the town drunk probably won't get much value for his certificates promising a ship of gold. <br><br>Now we have something like a credit market emerging, and the potato-farmer is issuing something closer to what we might call a modern "bond"...<br><br>(continued in a reply to this post...)<br><br></code> | <code>Honest question.<br><br>Where is all the money? I hear nothing but bad news about financial crisis all over the world, and it seems that there is a shortage of cash - like it is some sort of natural resource.<br><br>People haven't stopped buying stuff. They still need food, clothing, medicine, shelter. Taxes are still collected. Fines are still levied. <br><br>So where is all the money? I mean, labor has been produced to make things and wages paid to the laborers. The things are purchased by other laborers, who were paid for producing goods or services, etc. It's a closed loop, right? <br><br>Can someone explain it like I'm five or something?</code> |
| <code>I someone were to make a good alternative then I'd be very happy about it. It can take ages moderating this sub. I'm sure lots of the other mods think the same.<br><br>The fundamental problem is though that loads of people who don't know about economic write replies. All sorts of bullshit gets written. The problem then is you'd have to know about economics to distinguish the bullshit from the truth.<br><br>If someone can think of a good way of solving this I'd be very happy.</code> | <code>So often someone will ask an amazing question, something I’m really interested in getting a good answer to, or even someone’s opinion, but I always just see that message explaining that comments need to be approved. <br><br>99.9% is an exaggeration, but not many people are going to come back to look at a post to see if any comments have been approved.</code> |
| <code>So I said I would talk about the US Military if this got any interest. Here goes:<br><br>The US Department of Defense (hereafter DOD) has put in place a ton of procedural protections to stave off corruption. And God knows they need protection: only in the DOD can you find a 20-something purchasing officer who knows nothing about the stuff he's buying, who makes around $30k per year, and who is in charge of a half-billion-dollar budget. <br><br>For starters, low-paid people with large purchasing budgets are the easiest to corrupt outright. Find someone makes $30,000 per year but who has a $10m budget, and you have struck gold: it doesn't even require outright bribery. <br><br>Just show up at their office and mention that you might have some product for them to take a look at... "Can you spare some time this weekend? I have tickets to the playoffs if you're free... Whoa!? You're a fisherman? Let's forget about business: why not have the family come by the beach house? I just got a new boat and the stripers are running... we'll talk business later..."<br><br>Take a guy living in a military-base trailer out fishing on a yacht or to courtside seats, take him on a golf weekend, or to front-row seats at an A-list concert, hell, even just take him and his lady to a swank restaurant, and you've made a new best friend. And if he happens to be in charge of a $10m budget, that lavish night might be about to pay for itself 100,000 times over. <br><br>And all that assumes that you did not actually have a stripper with a cell-phone camera waiting in the car after the concert... we haven't even talked about blackmail, so why bring it up? Especially considering that these days, you don't even have to blackmail someone to blackmail them-- just linking your pics to their facebook, or setting up a "my party with Joe Blow" web page can ruin their life without malice or legal consequence... We're just posting our own party pics!<br><br>The DOD grades proposals with a color-grading system that is basically equivalent to letter grades. <br><br>The way it works is: the purchasing officer or whomever writes the spec ("request for quote"-- in normal business this called a "request for proposal" or "RFP". The DOD calls it an "RFQ". Whatever.). The spec is written as numbered sentences/paragraphs. Companies write bids that answer each number, with a bottom-line price. <br><br>A technical review committee sees the proposals with the price and supplier blacked out, and "grades" each proposal based on how well it meets the spec. The purchasing officer then sees the "grades" from the technical review, with the prices alongside (but not the complete proposals). Depending on his instructions, he may be required to either sign for the best overall value, highest overall grade, lowest acceptable cost, etc. <br><br>All of this seems very official and corruption-proof, until you realize that the original request for proposal came from, say, a 65-year-old Naval Admiral who knows everything about Oceanic warfare but nothing at all about computers, who assigned his 20-something first mate to write the spec and request for funding, who knows nothing about purchasing and who in turn wrote a spec (two years ago) that required Core2duo computers with 2GB ram and Windows XP and who required computers that meet the spec... <br><br>By the time Congress approves the funding, the spec is obsolete, and it costs far \*more\* to buy a bunch of obsolete Core2Duo machines with 2GB RAM than it would have cost to buy more-powerful computers at Costco. <br><br>The over-technicality and protectiveness of the DOD actually makes it one of the most vulnerable purchasing systems anywhere. As a technical officer who was interested in my product told me: "Don't worry about the review process, we'll just let you guys write the spec". If the military wants a Mercedes, they just issue a spec that requires a hood ornament with three lines trisecting a circle, and see whichever car company meets the spec at the best price-- surprise! They get the contract. Which means that the DOD is probably the only buyer in the world paying sticker price. </code> | <code>This is a throwaway account (I'm a longtime redditor under another login). /r/economics might not be the correct place to put this, but it was the best I could think of. I'm a mid-career guy in a business that does a lot of work with governmental and quasi-governmental agencies. I've never ripped anyone off personally, but I have seen and occasionally been an incidental beneficiary of quite a bit of patronage, insider dealing, nepotism, misuse of taxpayer money, and outright corruption. While I have always been honest in my own dealings on a case-by-case basis, I have refrained from many opportunities to be a "whistleblower".<br><br>A lot of stuff on reddit misunderstands the relationships between wealth, power, and influence. For starters, all the above three are always and have always been inter-related, and probably always will be. And that might not always be a bad thing: those who have risen to high levels of wealth are often pretty smart, and surprisingly often exceptionally honest. Those who rise to high levels of influence usually have some pretty good insight and talent in their area of expertise. Those who have acquired a lot of power tend to be good at accomplishing things that lots of people want to see happen. <br><br>None of which is purely democratic, nor even purely meritocratic, but there is a certain dose of both kind of baked into the cake: stuff like wealth or family connections only gets you so far in modern, developed, and relatively open and transparent societies such as the US. And while that can be pretty far by normal standards, at some point sunlight does shine through any crack, and outright robbery or complete incompetence is difficult to sustain indefinitely.<br><br>But there is an awful lot of low-level waste, patronage, and corruption that happens both in the private and in the public sector. <br><br>Without going ideological, the private sector in a free-ish market has a more immediate system of checks and balances if only because you have to actually persuade the end users to keep buying your stuff for the price you're charging: if it's no good, or if you are grossly over-charging, your customers will tend to catch on sooner or later. <br><br>But in the public sector, the "consumer" often has little choice... so-called "market discipline" is a lot more diffuse when you have a former-schoolteacher-or-real-estate-broker-turned city councilman whose job it is to disburse a multi-million-dollar street-paving contract or whatever. And neither the schoolteacher nor the real-estate broker has any clue how to write or evaluate a road-paving contract...<br><br>Let's say that there are three credible bidders for that street-paving contract: <br><br>\* Bidder 1 is "Paver Joe", a local guy with a driveway-paving company and three trucks who sees this as a big opportunity to expand his business and get the city to pay for five new trucks. He puts in a dirt-cheap bid that he wrote up himself with the help of his estate attorney. The cost to taxpayers is very low, but the certainty that he will complete it on schedule and as specified is a little iffy. Paver Joe plans to work overtime and bust his tail on the job, not for profits, but to grow his business. He's offering the taxpayers a great deal, but a slightly risky one.<br><br>\* Bidder 2 is "Muni Paver Inc", a company who has the experience and expertise to do the job, who knows what's involved and who has done this work before. They already have the trucks, their workers are all unionized and paid "prevailing wage", everything will be done by the book, all their EPA certifications are in place, etc... The bid is a lot more expensive than Paver Joe, but it's credible and reliable. They are offering the taxpayers a degree of certainty and confidence that Paver Joe cannot match.<br><br>\* Bidder 3 is me, "Corruptocorp". Instead of Paver Joe's 2-page contract with typos, or Muni-Paving's 20-page contract, I'm offering the city council a full package of videos, brochures, and a 40-page contract with a price just a tad higher than Paver Joe (my quoted price is meaningless, as we will see). Moreover, I'm inviting the city council to Corruptocorp-owned suites in a golf resort near my headquarters to give my presentation (all expenses paid, of course, and of course, bring your spouses). There the city council members will, after the first day of golf, dinner, dancing, and cocktails, see a slideshow and chorus-line of smiling multi-ethnic faces and working mothers talking about how much Corruptocorp's paving improved their town and their lives. I'll then stand up and tell a self-effacing joke about being one of those corporate guys trying to get their money, and then I'll wax a bit emotional about my small-town roots and how Corruptocorp was started by a man with a simple dream to make life better for everyone, and to do well by doing good in local communities, and that we actually plan to hire local contractors such as Joe's Paving to do the work, backed our economies of scale and reliability. I'll mention that paragraph 32 subsection B of our proposal mandates twice-yearly performance reviews by the city council, to of course be held at the golf resort, at Corruptocorp's expense, ("so I hope to see you all back here every February and August!"), and of course I make sure that each of them has my "personal" cell phone and home numbers in case they have any questions....<br><br>So needless to say I get the bid, and six months later it's time for our review at the golf resort. After dinner and cocktails I step up to the podium and announce that there is both good news and bad news: <br><br>\*"The bad news is that our subcontractor has found over 1,000 rocks in the road. And as I'm sure you know, paragraph 339 subsection D.12 specifies that any necessary rock removal will be done at prevailing wages, currently $1,500 per rock, for a total cost overrun of $1.5 million. But the good news is (and believe me, I had to fight long and hard for this with the board of directors), Corruptocorp has agreed to remove those rocks for only $1,000 apiece! So even though there have been some cost overruns, your smart decisions have saved your taxpayers \*\*half a million dollars\*\*! Give yourselves a round of applause!"\*<br><br>\*"Now, the other situation is that there has been some 'difficult terrain' as described in subsection 238b, which I'm sure you're all familiar with. And as you know, 'difficult terrain' is not covered by the contract, which is for paving, not for turning mountains into flat roads... (wistful chuckle). Now, technically, according to the contract, we should be charging your town prevailing rates for these sections, but I've worked it so that you will be allowed to re-bid them, if you wish, since our contract doesn't specifically include terrain as described in subsection 238b."\*<br><br>Now the contract price has doubled, and Corruptocorp has completely sidestepped all of the difficult and costly work, taking profits only on the easy stuff. The city council members can either admit that they were duped and bought (political suicide), or can simply feed corruptocorp's line to the voters. Which do you think will happen?<br><br>And it gets even worse on smaller scales: look up your local building or electrical inspector. Ten-to-one he is a relative, friend, or campaign donor to the mayor or city council. What's in it for him? Every single construction or home improvement project not only has to pay him a fee, it also has to pass his inspection. Guess which contractors are most likely to pass his inspection? His brothers, friends, family... or the cheapest guy who for some reason has a hard time finding work in this town? Guess how the local inspector feels about homeowner self-improvements: does he think they are a great way for regular people to improve their wealth with a little elbow grease, or does he see them as stealing work from his friends and family? <br><br>The US military is by far the most wasteful customer I've ever had. I'll talk about that if this topic gets any interest. <br><br>edit: as promised, here's the post about military spending:<br><br>http://www.reddit.com/r/Economics/comments/c84bp/how\_realworld\_corruption\_works/c0qrt6i</code> |
* Loss: [<code>MatryoshkaLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#matryoshkaloss) with these parameters:
```json
{
"loss": "MultipleNegativesRankingLoss",
"matryoshka_dims": [
768,
512,
256,
128,
64
],
"matryoshka_weights": [
1,
1,
1,
1,
1
],
"n_dims_per_step": -1
}
```
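In Sentence Transformers code, this configuration corresponds to wrapping `MultipleNegativesRankingLoss` in `MatryoshkaLoss`; a minimal sketch:
```python
from sentence_transformers import SentenceTransformer
from sentence_transformers.losses import MatryoshkaLoss, MultipleNegativesRankingLoss

model = SentenceTransformer("BAAI/bge-base-en-v1.5")

# The inner loss is computed at each listed dimension; equal weights mean
# every dimension contributes equally to the total loss.
inner_loss = MultipleNegativesRankingLoss(model)
loss = MatryoshkaLoss(
    model,
    inner_loss,
    matryoshka_dims=[768, 512, 256, 128, 64],
    matryoshka_weights=[1, 1, 1, 1, 1],
)
```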
### Training Hyperparameters
#### Non-Default Hyperparameters
- `eval_strategy`: epoch
- `per_device_eval_batch_size`: 4
- `gradient_accumulation_steps`: 4
- `learning_rate`: 2e-05
- `num_train_epochs`: 1
- `lr_scheduler_type`: cosine
- `warmup_ratio`: 0.1
- `load_best_model_at_end`: True
- `optim`: adamw_torch_fused
- `batch_sampler`: no_duplicates
#### All Hyperparameters
<details><summary>Click to expand</summary>
- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: epoch
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 8
- `per_device_eval_batch_size`: 4
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 4
- `eval_accumulation_steps`: None
- `learning_rate`: 2e-05
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1.0
- `num_train_epochs`: 1
- `max_steps`: -1
- `lr_scheduler_type`: cosine
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.1
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: False
- `fp16`: False
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: None
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: False
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: True
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch_fused
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: False
- `hub_always_push`: False
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`:
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `dispatch_batches`: None
- `split_batches`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `batch_sampler`: no_duplicates
- `multi_dataset_batch_sampler`: proportional
</details>
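Put together, a run with these hyperparameters would look roughly like the sketch below, reusing `model`, `loss`, and `evaluator` from the earlier sketches; `train_dataset` is assumed to be a `datasets.Dataset` with the `positive` and `anchor` columns described above:
```python
from sentence_transformers import (
    SentenceTransformerTrainer,
    SentenceTransformerTrainingArguments,
)

# model, loss, and evaluator come from the sketches above; train_dataset is assumed.
args = SentenceTransformerTrainingArguments(
    output_dir="bge-base-financial-matryoshka_test",
    num_train_epochs=1,
    per_device_eval_batch_size=4,
    gradient_accumulation_steps=4,
    learning_rate=2e-5,
    lr_scheduler_type="cosine",
    warmup_ratio=0.1,
    optim="adamw_torch_fused",
    eval_strategy="epoch",
    save_strategy="epoch",  # assumed: load_best_model_at_end needs matching strategies
    load_best_model_at_end=True,
    batch_sampler="no_duplicates",
)

trainer = SentenceTransformerTrainer(
    model=model,
    args=args,
    train_dataset=train_dataset,
    loss=loss,
    evaluator=evaluator,
)
trainer.train()
```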
### Training Logs
| Epoch | Step | dim_128_cosine_map@100 | dim_256_cosine_map@100 | dim_512_cosine_map@100 | dim_64_cosine_map@100 | dim_768_cosine_map@100 |
|:----------:|:-----:|:----------------------:|:----------------------:|:----------------------:|:---------------------:|:----------------------:|
| 0 | 0 | 0.4942 | 0.5028 | 0.5322 | 0.4253 | 0.5249 |
| **0.9231** | **3** | **0.4942** | **0.5028** | **0.5322** | **0.4253** | **0.5249** |
* The bold row denotes the saved checkpoint.
### Framework Versions
- Python: 3.10.12
- Sentence Transformers: 3.0.1
- Transformers: 4.41.2
- PyTorch: 2.1.2+cu121
- Accelerate: 0.31.0
- Datasets: 2.19.1
- Tokenizers: 0.19.1
## Citation
### BibTeX
#### Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
```
#### MatryoshkaLoss
```bibtex
@misc{kusupati2024matryoshka,
title={Matryoshka Representation Learning},
author={Aditya Kusupati and Gantavya Bhatt and Aniket Rege and Matthew Wallingford and Aditya Sinha and Vivek Ramanujan and William Howard-Snyder and Kaifeng Chen and Sham Kakade and Prateek Jain and Ali Farhadi},
year={2024},
eprint={2205.13147},
archivePrefix={arXiv},
primaryClass={cs.LG}
}
```
#### MultipleNegativesRankingLoss
```bibtex
@misc{henderson2017efficient,
title={Efficient Natural Language Response Suggestion for Smart Reply},
author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
year={2017},
eprint={1705.00652},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
-->
|
kajamo/model_16
|
kajamo
| 2024-06-10T14:18:06Z | 1 | 0 |
peft
|
[
"peft",
"tensorboard",
"safetensors",
"generated_from_trainer",
"base_model:distilbert/distilbert-base-uncased",
"base_model:adapter:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"region:us"
] | null | 2024-06-10T12:32:15Z |
---
license: apache-2.0
library_name: peft
tags:
- generated_from_trainer
base_model: distilbert-base-uncased
model-index:
- name: model_16
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# model_16
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- eval_loss: 0.6194
- eval_accuracy: 0.7624
- eval_precision: 0.7632
- eval_recall: 0.7624
- eval_f1: 0.7621
- eval_runtime: 42.8182
- eval_samples_per_second: 285.977
- eval_steps_per_second: 17.89
- epoch: 14.0
- step: 42868
## Model description
More information needed
## Intended uses & limitations
More information needed
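The card does not yet document usage, but the reported accuracy/precision/recall/F1 point to sequence classification. A hypothetical inference sketch, assuming the PEFT adapter loads onto the DistilBERT base together with a saved classification head (the task and label names are not documented here):
```python
import torch
from peft import AutoPeftModelForSequenceClassification
from transformers import AutoTokenizer

# Hypothetical usage: task and label set are not documented on this card.
model = AutoPeftModelForSequenceClassification.from_pretrained("kajamo/model_16")
tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased")

inputs = tokenizer("Example input text", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
print(logits.argmax(dim=-1))
```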
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 100
- num_epochs: 20
### Framework versions
- PEFT 0.11.1
- Transformers 4.41.2
- Pytorch 2.1.2
- Datasets 2.19.2
- Tokenizers 0.19.1
|
harveybro/molt5-augmented-default-1200-base-caption2smiles
|
harveybro
| 2024-06-10T14:15:53Z | 106 | 0 |
transformers
|
[
"transformers",
"safetensors",
"t5",
"text2text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2024-06-10T14:15:22Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
enriquesaou/phi-2-mrqa
|
enriquesaou
| 2024-06-10T14:15:00Z | 1 | 0 |
peft
|
[
"peft",
"safetensors",
"generated_from_trainer",
"base_model:microsoft/phi-2",
"base_model:adapter:microsoft/phi-2",
"license:mit",
"region:us"
] | null | 2024-06-09T17:54:04Z |
---
license: mit
library_name: peft
tags:
- generated_from_trainer
base_model: microsoft/phi-2
model-index:
- name: phi-2-mrqa
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="200" height="32"/>](https://wandb.ai/favcowboy/huggingface/runs/2g3ym6o2)
# phi-2-mrqa
This model is a fine-tuned version of [microsoft/phi-2](https://huggingface.co/microsoft/phi-2) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.1914
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.03
- training_steps: 500
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 2.3326 | 0.0267 | 100 | 2.2892 |
| 2.2446 | 0.0533 | 200 | 2.2201 |
| 2.2445 | 0.08 | 300 | 2.2022 |
| 2.1738 | 0.1067 | 400 | 2.1938 |
| 2.2195 | 0.1333 | 500 | 2.1914 |
### Framework versions
- PEFT 0.11.2.dev0
- Transformers 4.42.0.dev0
- Pytorch 2.3.0+cu121
- Datasets 2.19.2
- Tokenizers 0.19.1
|
baf2b252097d46299a/loss_testing_9d7444b6ecea4be6a45bc1f3f06582ef
|
baf2b252097d46299a
| 2024-06-10T14:14:45Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-06-10T14:14:19Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Reihaneh/wav2vec2_fy_common_voice_35
|
Reihaneh
| 2024-06-10T14:13:05Z | 0 | 0 |
transformers
|
[
"transformers",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-06-10T14:13:04Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
mNLP-project/gpt2-dpo-quantized
|
mNLP-project
| 2024-06-10T14:08:52Z | 78 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gpt2",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"gptq",
"region:us"
] |
text-generation
| 2024-06-09T08:23:08Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
mbzuai-ugrip-statement-tuning/MBERT_1e-04_32_0.1_0.01_50k
|
mbzuai-ugrip-statement-tuning
| 2024-06-10T14:04:14Z | 162 | 0 |
transformers
|
[
"transformers",
"safetensors",
"bert",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-06-10T14:03:18Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
amaumau/gemma-2b-dpo-mnlp
|
amaumau
| 2024-06-10T14:03:53Z | 3 | 0 |
transformers
|
[
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-05-27T12:48:30Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
llama-duo/gemma2b-summarize-gpt4o-128k
|
llama-duo
| 2024-06-10T14:02:34Z | 11 | 1 |
peft
|
[
"peft",
"tensorboard",
"safetensors",
"gemma",
"alignment-handbook",
"trl",
"sft",
"generated_from_trainer",
"dataset:llama-duo/synth_summarize_dataset_dedup",
"base_model:google/gemma-2b",
"base_model:adapter:google/gemma-2b",
"license:gemma",
"4-bit",
"bitsandbytes",
"region:us"
] | null | 2024-06-05T10:09:42Z |
---
license: gemma
library_name: peft
tags:
- alignment-handbook
- trl
- sft
- generated_from_trainer
base_model: google/gemma-2b
datasets:
- llama-duo/synth_summarize_dataset_dedup
model-index:
- name: gemma2b-summarize-gpt4o-128k
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gemma2b-summarize-gpt4o-128k
This model is a fine-tuned version of [google/gemma-2b](https://huggingface.co/google/gemma-2b) on the llama-duo/synth_summarize_dataset_dedup dataset.
It achieves the following results on the evaluation set:
- Loss: 2.7978
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 3
- gradient_accumulation_steps: 2
- total_train_batch_size: 48
- total_eval_batch_size: 24
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.1249 | 1.0 | 293 | 2.4641 |
| 1.0415 | 2.0 | 586 | 2.4514 |
| 0.9915 | 3.0 | 879 | 2.4750 |
| 0.9551 | 4.0 | 1172 | 2.5292 |
| 0.9287 | 5.0 | 1465 | 2.5925 |
| 0.8733 | 6.0 | 1758 | 2.6555 |
| 0.8577 | 7.0 | 2051 | 2.7316 |
| 0.8364 | 8.0 | 2344 | 2.7742 |
| 0.8311 | 9.0 | 2637 | 2.7971 |
| 0.8243 | 10.0 | 2930 | 2.7978 |
### Framework versions
- PEFT 0.11.1
- Transformers 4.41.2
- Pytorch 2.3.0+cu121
- Datasets 2.19.2
- Tokenizers 0.19.1
|
arhamk/ppo-LunarLander-v2-2
|
arhamk
| 2024-06-10T14:02:26Z | 0 | 0 | null |
[
"tensorboard",
"LunarLander-v2",
"ppo",
"deep-reinforcement-learning",
"reinforcement-learning",
"custom-implementation",
"deep-rl-course",
"model-index",
"region:us"
] |
reinforcement-learning
| 2024-06-10T12:53:17Z |
---
tags:
- LunarLander-v2
- ppo
- deep-reinforcement-learning
- reinforcement-learning
- custom-implementation
- deep-rl-course
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: -144.04 +/- 92.67
name: mean_reward
verified: false
---
# PPO Agent Playing LunarLander-v2
This is a trained model of a PPO agent playing LunarLander-v2.
# Hyperparameters
```python
{'exp_name': 'ppo',
 'seed': 1,
 'torch_deterministic': True,
 'cuda': True,
 'track': False,
 'wandb_project_name': 'cleanRL',
 'wandb_entity': None,
 'capture_video': False,
 'env_id': 'LunarLander-v2',
 'total_timesteps': 50000,
 'learning_rate': 0.00025,
 'num_envs': 4,
 'num_steps': 128,
 'anneal_lr': True,
 'gae': True,
 'gamma': 0.99,
 'gae_lambda': 0.95,
 'num_minibatches': 4,
 'update_epochs': 5,
 'norm_adv': True,
 'clip_coef': 0.2,
 'clip_vloss': True,
 'ent_coef': 0.01,
 'vf_coef': 0.5,
 'max_grad_norm': 0.5,
 'target_kl': None,
 'repo_id': 'arhamk/ppo-LunarLander-v2-2',
 'batch_size': 512,
 'minibatch_size': 128}
```
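As a sanity check on the derived values: batch_size = num_envs × num_steps = 4 × 128 = 512, and minibatch_size = batch_size / num_minibatches = 512 / 4 = 128, which matches the figures above.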
|
Hiirop8000/LLM-models-for-personal-use
|
Hiirop8000
| 2024-06-10T14:00:46Z | 1 | 0 | null |
[
"gguf",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-06-04T23:40:15Z |
---
license: apache-2.0
---
|
Arash136360/Hamster
|
Arash136360
| 2024-06-10T14:00:14Z | 0 | 0 | null |
[
"license:openrail++",
"region:us"
] | null | 2024-06-10T13:54:37Z |
---
license: openrail++
---
|
tranthaihoa/llama2_evidence
|
tranthaihoa
| 2024-06-10T13:59:03Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"en",
"base_model:unsloth/llama-2-7b-bnb-4bit",
"base_model:finetune:unsloth/llama-2-7b-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-06-10T13:58:41Z |
---
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
base_model: unsloth/llama-2-7b-bnb-4bit
---
# Uploaded model
- **Developed by:** tranthaihoa
- **License:** apache-2.0
- **Finetuned from model :** unsloth/llama-2-7b-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
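A minimal inference sketch with Unsloth is shown below; the sequence length, 4-bit setting, and prompt are assumptions carried over from the base model, not documented training settings:
```python
from unsloth import FastLanguageModel

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="tranthaihoa/llama2_evidence",
    max_seq_length=2048,   # assumption: default Llama-2 context window
    load_in_4bit=True,     # assumption: matches the 4-bit base model
)
FastLanguageModel.for_inference(model)  # enable Unsloth's faster inference mode

inputs = tokenizer("Summarize the evidence for the claim:", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```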
|
jcordon5/Mistral-7B-cybersecurity-rules
|
jcordon5
| 2024-06-10T13:50:11Z | 18 | 2 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"conversational",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-05-18T11:04:54Z |
---
license: apache-2.0
---
# Fine-Tuned model for threat and intrusion detection rules generation
This model is a fine-tuned version of [Mistral-7B-Instruct-v0.2](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2), obtained via knowledge distillation from [0dAI-7.5B](https://huggingface.co/0dAI/0dAI-7.5B-v2).
The fine-tuning was conducted using a curated corpus of 950 cybersecurity rules from SIGMA, YARA, and Suricata repositories for threat and intrusion detection.
Instruct the model to craft a SIGMA rule for detecting potentially malicious commands such as `msfvenom` and `netcat` in Audit system logs, or a Suricata rule to spot SSH brute-force attacks, or even a YARA rule to identify obfuscated strings in files — and watch the magic happen! Automate the creation of rules in your cybersecurity systems with this model.
For an in-depth understanding of how this model has been fine-tuned, refer to the associated paper here: [available soon].
## Key Features
- Fine-tuned on a corpus of cybersecurity threat and intrusion detection rules.
- Expert in generating YARA, Suricata, and SIGMA rules.
- Based on Mistral-7B-Instruct-v0.2, with a 32K context window.
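## Example Usage
As an illustration of the workflow described above, here is a minimal Transformers inference sketch; the prompt and generation settings are arbitrary examples, not recommended defaults:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "jcordon5/Mistral-7B-cybersecurity-rules"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

messages = [
    {"role": "user",
     "content": "Craft a SIGMA rule that detects msfvenom usage in Linux audit logs."}
]
# Build the Mistral-Instruct prompt via the tokenizer's chat template
input_ids = tokenizer.apply_chat_template(messages, return_tensors="pt").to(model.device)
output = model.generate(input_ids, max_new_tokens=512)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```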
## Quantization
You can easily quantize your model for local use on your computer with the help of the `llama.cpp` or `ollama` libraries. This process converts your model into a format that is optimized for performance, particularly useful for deployment on devices with limited computational resources.
To perform this quantization using the `llama.cpp` library ([link to llama.cpp](https://github.com/ggerganov/llama.cpp)), follow the steps below:
### Step 1: Convert Vocabulary
First, convert your model's vocabulary to a format suitable for quantization. Use the following command, replacing `/path/to/` with the actual path to your model files:
```bash
python convert.py /path/to/Mistral-7B-cybersecurity-rules \
--vocab-only \
--outfile /path/to/Mistral-7B-cybersecurity-rules/tokenizer.model \
--vocab-type bpe
```
This command extracts and converts the vocabulary using the byte pair encoding (BPE) method, saving it to a new file.
### Step 2: Prepare Model for Quantization
Next, prepare the model for quantization by converting it to a half-precision floating-point format (FP16). This step reduces the model size and prepares it for the final quantization to 8-bit integers. Execute the following command:
```bash
# Pass --vocab-type bpe only if you encounter issues with the vocabulary type
python convert.py /path/to/Mistral-7B-cybersecurity-rules \
    --outtype f16 \
    --vocab-type bpe \
    --outfile /path/to/Mistral-7B-cybersecurity-rules/ggml-model-f16.gguf
```
This command outputs a file that has been converted to FP16, which is an intermediary step before applying 8-bit quantization.
### Step 3: Quantize to 8-bits
Finally, apply 8-bit quantization to the FP16 model file. This step significantly reduces the model's memory footprint, making it suitable for deployment in resource-constrained environments:
```bash
quantize /path/to/Mistral-7B-cybersecurity-rules/ggml-model-f16.gguf \
/path/to/Mistral-7B-cybersecurity-rules/mistral-7b-rules-q8_0.gguf \
q8_0
```
Here, the `quantize` command converts the FP16 model into an 8-bit quantized model, further compressing the model while retaining its capability to perform its tasks effectively.
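Once quantized, the GGUF file can be loaded locally. Below is a minimal sketch using the `llama-cpp-python` bindings; the choice of runtime is an assumption, and any GGUF-compatible runtime such as `ollama` works as well:
```python
from llama_cpp import Llama

llm = Llama(
    model_path="/path/to/Mistral-7B-cybersecurity-rules/mistral-7b-rules-q8_0.gguf",
    n_ctx=4096,  # context window for this session; the model supports up to 32K
)
out = llm(
    "[INST] Craft a YARA rule that flags obfuscated strings in PE files. [/INST]",
    max_tokens=512,
)
print(out["choices"][0]["text"])
```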
## License
This repository is licensed under the Apache License, Version 2.0. You can obtain a copy of the license at [Apache License 2.0](http://www.apache.org/licenses/LICENSE-2.0).
## Warranty Disclaimer
This software is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
## Changes
This model has been fine-tuned based on the original Mistral-7B-Instruct-v0.2. Significant modifications were made to train it on a cybersecurity corpus for threat and intrusion detection.
|
ssmits/Falcon2-5.5B-multilingual-embed-base
|
ssmits
| 2024-06-10T13:48:31Z | 4 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"safetensors",
"falcon",
"ssmits/Falcon2-5.5B-multilingual",
"text-classification",
"custom_code",
"es",
"fr",
"de",
"no",
"sv",
"da",
"nl",
"pt",
"pl",
"ro",
"it",
"cs",
"base_model:ssmits/Falcon2-5.5B-multilingual",
"base_model:finetune:ssmits/Falcon2-5.5B-multilingual",
"license:apache-2.0",
"region:us"
] |
text-classification
| 2024-06-08T18:39:16Z |
---
base_model:
- ssmits/Falcon2-5.5B-multilingual
library_name: sentence-transformers
tags:
- ssmits/Falcon2-5.5B-multilingual
license: apache-2.0
language:
- es
- fr
- de
- 'no'
- sv
- da
- nl
- pt
- pl
- ro
- it
- cs
pipeline_tag: text-classification
---
## Usage
Embeddings version of the base model [ssmits/Falcon2-5.5B-multilingual](https://huggingface.co/ssmits/Falcon2-5.5B-multilingual).
The 'lm_head' layer of this model has been removed, which means it can be used for embeddings. It will not perform well out of the box, as it still needs to be fine-tuned, in the same way the pruned model behind [intfloat/e5-mistral-7b-instruct](https://huggingface.co/intfloat/e5-mistral-7b-instruct) was.
Additionally, instead of a normalization layer, the hidden layers are followed by both a classical weight and a bias 1-dimensional array of 4096 values.
The basic Sentence-Transformers implementation works correctly. This implies that more sophisticated embedding techniques, such as adding a custom classification head, will work correctly as well.
## Inference (sentence-transformers)
```python
from sentence_transformers import SentenceTransformer
import torch
# 1. Load a pretrained Sentence Transformer model
model = SentenceTransformer("ssmits/Falcon2-5.5B-multilingual-embed-base") # device = "cpu" when <= 24 GB VRAM
# The sentences to encode
sentences = [
"The weather is lovely today.",
"It's so sunny outside!",
"He drove to the stadium.",
]
# 2. Calculate embeddings by calling model.encode()
embeddings = model.encode(sentences, convert_to_tensor=True)  # torch tensors, needed for the similarity step below
print(embeddings.shape)
# torch.Size([3, 4096])
# 3. Calculate the embedding similarities
# Using torch to compute cosine similarity matrix
similarities = torch.nn.functional.cosine_similarity(embeddings.unsqueeze(0), embeddings.unsqueeze(1), dim=2)
print(similarities)
# tensor([[1.0000, 0.7120, 0.5937],
# [0.7120, 1.0000, 0.5925],
# [0.5937, 0.5925, 1.0000]])
```
Note: In my tests it utilizes more than 24GB (RTX 4090), so an A100 or A6000 would be required for inference.
## Inference (HuggingFace Transformers)
Without sentence-transformers, you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('ssmits/Falcon2-5.5B-multilingual-embed-base')
model = AutoModel.from_pretrained('ssmits/Falcon2-5.5B-multilingual-embed-base') # device = "cpu" when <= 24 GB VRAM
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
### How to enable Multi-GPU
```python
from transformers import AutoModel
from torch.nn import DataParallel
model = AutoModel.from_pretrained("ssmits/Falcon2-5.5B-multilingual-embed-base")
for module_key, module in model._modules.items():
model._modules[module_key] = DataParallel(module)
```
|
datek/Qwen-Qwen1.5-1.8B-1718026911
|
datek
| 2024-06-10T13:44:05Z | 129 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-06-10T13:42:22Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
xuanye/llama3_question
|
xuanye
| 2024-06-10T13:44:00Z | 0 | 0 |
peft
|
[
"peft",
"tensorboard",
"safetensors",
"generated_from_trainer",
"region:us"
] | null | 2024-05-23T10:04:28Z |
---
tags:
- generated_from_trainer
model-index:
- name: llama3_question
results: []
library_name: peft
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# llama3_question
This model was trained from scratch on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8999
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- quant_method: bitsandbytes
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float16
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- lr_scheduler_warmup_ratio: 0.03
- num_epochs: 6
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.9948 | 0.14 | 1 | 2.8184 |
| 2.8697 | 0.29 | 2 | 2.6592 |
| 2.6264 | 0.43 | 3 | 2.4946 |
| 2.625 | 0.57 | 4 | 2.3588 |
| 2.3888 | 0.71 | 5 | 2.2385 |
| 2.2949 | 0.86 | 6 | 2.1219 |
| 2.5261 | 1.0 | 7 | 2.0221 |
| 2.0264 | 1.14 | 8 | 1.9246 |
| 1.9661 | 1.29 | 9 | 1.8298 |
| 1.9106 | 1.43 | 10 | 1.7456 |
| 1.8448 | 1.57 | 11 | 1.6686 |
| 1.619 | 1.71 | 12 | 1.6050 |
| 1.5881 | 1.86 | 13 | 1.5468 |
| 1.6859 | 2.0 | 14 | 1.4939 |
| 1.4643 | 2.14 | 15 | 1.4453 |
| 1.4583 | 2.29 | 16 | 1.3949 |
| 1.4086 | 2.43 | 17 | 1.3441 |
| 1.3314 | 2.57 | 18 | 1.2914 |
| 1.3502 | 2.71 | 19 | 1.2400 |
| 1.226 | 2.86 | 20 | 1.1892 |
| 1.073 | 3.0 | 21 | 1.1445 |
| 1.1113 | 3.14 | 22 | 1.0995 |
| 1.1292 | 3.29 | 23 | 1.0570 |
| 1.0242 | 3.43 | 24 | 1.0164 |
| 0.9279 | 3.57 | 25 | 0.9826 |
| 0.8518 | 3.71 | 26 | 0.9617 |
| 1.0302 | 3.86 | 27 | 0.9491 |
| 1.1736 | 4.0 | 28 | 0.9418 |
| 0.8832 | 4.14 | 29 | 0.9352 |
| 0.9151 | 4.29 | 30 | 0.9301 |
| 0.7495 | 4.43 | 31 | 0.9256 |
| 0.8785 | 4.57 | 32 | 0.9220 |
| 0.8635 | 4.71 | 33 | 0.9180 |
| 0.9499 | 4.86 | 34 | 0.9150 |
| 0.8744 | 5.0 | 35 | 0.9125 |
| 0.8221 | 5.14 | 36 | 0.9093 |
| 0.7826 | 5.29 | 37 | 0.9064 |
| 0.8421 | 5.43 | 38 | 0.9047 |
| 0.8155 | 5.57 | 39 | 0.9029 |
| 0.9097 | 5.71 | 40 | 0.9010 |
| 0.7449 | 5.86 | 41 | 0.9003 |
| 0.9502 | 6.0 | 42 | 0.8999 |
### Framework versions
- PEFT 0.5.0
- Transformers 4.37.2
- Pytorch 2.1.2
- Datasets 2.18.0
- Tokenizers 0.15.1
|
DownwardSpiral33/gpt2-imdb-pos-4c2-d6-reward-256_0_05-rewardprompts-2024.06.10.13.36
|
DownwardSpiral33
| 2024-06-10T13:43:37Z | 129 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gpt2",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-06-10T13:43:22Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
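Pending an official snippet, a minimal hedged sketch using the repository id from this card's header (the prompt and generation length are illustrative assumptions):
```python
from transformers import pipeline
# Hedged sketch; sampling settings are assumptions, not documented choices.
generator = pipeline(
    "text-generation",
    model="DownwardSpiral33/gpt2-imdb-pos-4c2-d6-reward-256_0_05-rewardprompts-2024.06.10.13.36",
)
print(generator("This movie was", max_new_tokens=30)[0]["generated_text"])
```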
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
martimfasantos/tinyllama-1.1b-sum-dpo-full_LR1e-7_3epochs
|
martimfasantos
| 2024-06-10T13:40:44Z | 15 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"llama",
"text-generation",
"alignment-handbook",
"trl",
"dpo",
"generated_from_trainer",
"dataset:openai/summarize_from_feedback",
"base_model:martimfasantos/tinyllama-1.1b-sum-sft-full",
"base_model:finetune:martimfasantos/tinyllama-1.1b-sum-sft-full",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-06-09T13:35:01Z |
---
license: apache-2.0
base_model: martimfasantos/tinyllama-1.1b-sum-sft-full
tags:
- alignment-handbook
- trl
- dpo
- generated_from_trainer
datasets:
- openai/summarize_from_feedback
model-index:
- name: tinyllama-1.1b-sum-dpo-full_LR1e-7_3epochs
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# tinyllama-1.1b-sum-dpo-full_LR1e-7_3epochs
This model is a fine-tuned version of [martimfasantos/tinyllama-1.1b-sum-sft-full](https://huggingface.co/martimfasantos/tinyllama-1.1b-sum-sft-full) on the openai/summarize_from_feedback dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6501
- Rewards/chosen: -1.0591
- Rewards/rejected: -1.2329
- Rewards/accuracies: 0.6032
- Rewards/margins: 0.1739
- Logps/rejected: -186.0431
- Logps/chosen: -164.9210
- Logits/rejected: -2.3430
- Logits/chosen: -2.3551
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (a hedged reconstruction sketch follows the list):
- learning_rate: 1e-07
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- gradient_accumulation_steps: 2
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 3
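A hedged reconstruction of these settings as `transformers` training arguments; the DPO-specific wiring (beta, reference model, preference dataset) is not documented here and would typically be handled by TRL's `DPOTrainer`:
```python
from transformers import TrainingArguments
training_args = TrainingArguments(
    output_dir="tinyllama-1.1b-sum-dpo-full_LR1e-7_3epochs",  # assumption
    learning_rate=1e-7,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    gradient_accumulation_steps=2,  # yields the listed total batch size of 16
    seed=42,
    lr_scheduler_type="cosine",
    warmup_ratio=0.1,
    num_train_epochs=3,
)
```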
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rewards/chosen | Rewards/rejected | Rewards/accuracies | Rewards/margins | Logps/rejected | Logps/chosen | Logits/rejected | Logits/chosen |
|:-------------:|:------:|:-----:|:---------------:|:--------------:|:----------------:|:------------------:|:---------------:|:--------------:|:------------:|:---------------:|:-------------:|
| 0.693 | 0.0689 | 400 | 0.6931 | 0.0003 | 0.0002 | 0.5112 | 0.0001 | -62.7270 | -58.9858 | -2.9691 | -2.9727 |
| 0.6923 | 0.1378 | 800 | 0.6926 | 0.0024 | 0.0012 | 0.5493 | 0.0011 | -62.6258 | -58.7797 | -2.9667 | -2.9701 |
| 0.6901 | 0.2068 | 1200 | 0.6907 | -0.0080 | -0.0133 | 0.5697 | 0.0053 | -64.0827 | -59.8146 | -2.9579 | -2.9613 |
| 0.6835 | 0.2757 | 1600 | 0.6880 | -0.0321 | -0.0436 | 0.5764 | 0.0114 | -67.1050 | -62.2266 | -2.9410 | -2.9442 |
| 0.6865 | 0.3446 | 2000 | 0.6852 | -0.0690 | -0.0874 | 0.5713 | 0.0184 | -71.4878 | -65.9158 | -2.9158 | -2.9192 |
| 0.6767 | 0.4135 | 2400 | 0.6817 | -0.1086 | -0.1352 | 0.5816 | 0.0265 | -76.2651 | -69.8803 | -2.8906 | -2.8938 |
| 0.6726 | 0.4824 | 2800 | 0.6792 | -0.1614 | -0.1943 | 0.5767 | 0.0328 | -82.1753 | -75.1597 | -2.8617 | -2.8651 |
| 0.6643 | 0.5513 | 3200 | 0.6729 | -0.2581 | -0.3074 | 0.5948 | 0.0493 | -93.4915 | -84.8225 | -2.8387 | -2.8420 |
| 0.6614 | 0.6203 | 3600 | 0.6740 | -0.2589 | -0.3059 | 0.5904 | 0.0470 | -93.3416 | -84.9094 | -2.8113 | -2.8144 |
| 0.6609 | 0.6892 | 4000 | 0.6696 | -0.3009 | -0.3603 | 0.6053 | 0.0594 | -98.7785 | -89.1073 | -2.7879 | -2.7912 |
| 0.6562 | 0.7581 | 4400 | 0.6667 | -0.4072 | -0.4790 | 0.5983 | 0.0718 | -110.6499 | -99.7330 | -2.7515 | -2.7548 |
| 0.6569 | 0.8270 | 4800 | 0.6637 | -0.4951 | -0.5782 | 0.6059 | 0.0831 | -120.5742 | -108.5273 | -2.7283 | -2.7316 |
| 0.6383 | 0.8959 | 5200 | 0.6621 | -0.5180 | -0.6112 | 0.6055 | 0.0932 | -123.8654 | -110.8119 | -2.7112 | -2.7149 |
| 0.6411 | 0.9649 | 5600 | 0.6623 | -0.5228 | -0.6134 | 0.6055 | 0.0906 | -124.0929 | -111.2965 | -2.6869 | -2.6910 |
| 0.6293 | 1.0338 | 6000 | 0.6618 | -0.6210 | -0.7260 | 0.6064 | 0.1049 | -135.3463 | -121.1192 | -2.6526 | -2.6573 |
| 0.6247 | 1.1027 | 6400 | 0.6587 | -0.7088 | -0.8268 | 0.5990 | 0.1180 | -145.4310 | -129.8984 | -2.6201 | -2.6254 |
| 0.6194 | 1.1716 | 6800 | 0.6580 | -0.7955 | -0.9191 | 0.5980 | 0.1236 | -154.6599 | -138.5692 | -2.5858 | -2.5912 |
| 0.6127 | 1.2405 | 7200 | 0.6558 | -0.6612 | -0.7815 | 0.6039 | 0.1203 | -140.8955 | -125.1357 | -2.5822 | -2.5877 |
| 0.6531 | 1.3094 | 7600 | 0.6534 | -0.7460 | -0.8804 | 0.6041 | 0.1344 | -150.7862 | -133.6133 | -2.5502 | -2.5564 |
| 0.5995 | 1.3784 | 8000 | 0.6528 | -0.8128 | -0.9555 | 0.6006 | 0.1427 | -158.2948 | -140.2942 | -2.5195 | -2.5267 |
| 0.61 | 1.4473 | 8400 | 0.6540 | -0.7310 | -0.8603 | 0.5980 | 0.1293 | -148.7821 | -132.1185 | -2.5198 | -2.5268 |
| 0.6575 | 1.5162 | 8800 | 0.6527 | -0.8369 | -0.9764 | 0.5997 | 0.1395 | -160.3900 | -142.7025 | -2.4947 | -2.5022 |
| 0.5969 | 1.5851 | 9200 | 0.6516 | -0.8922 | -1.0366 | 0.6101 | 0.1444 | -166.4089 | -148.2315 | -2.4661 | -2.4746 |
| 0.6211 | 1.6540 | 9600 | 0.6526 | -0.7875 | -0.9248 | 0.6094 | 0.1373 | -155.2340 | -137.7698 | -2.4725 | -2.4804 |
| 0.6011 | 1.7229 | 10000 | 0.6517 | -0.8912 | -1.0379 | 0.6099 | 0.1467 | -166.5410 | -148.1359 | -2.4396 | -2.4489 |
| 0.571 | 1.7919 | 10400 | 0.6514 | -0.8234 | -0.9653 | 0.6122 | 0.1419 | -159.2782 | -141.3557 | -2.4401 | -2.4489 |
| 0.5889 | 1.8608 | 10800 | 0.6506 | -1.0172 | -1.1751 | 0.6055 | 0.1579 | -180.2568 | -160.7332 | -2.3932 | -2.4039 |
| 0.5685 | 1.9297 | 11200 | 0.6486 | -1.0256 | -1.1907 | 0.5992 | 0.1651 | -181.8200 | -161.5783 | -2.3887 | -2.3992 |
| 0.63 | 1.9986 | 11600 | 0.6502 | -0.8869 | -1.0380 | 0.6004 | 0.1511 | -166.5461 | -147.7054 | -2.4012 | -2.4108 |
| 0.5891 | 2.0675 | 12000 | 0.6490 | -1.0453 | -1.2122 | 0.6046 | 0.1670 | -183.9714 | -163.5418 | -2.3713 | -2.3825 |
| 0.5808 | 2.1365 | 12400 | 0.6490 | -1.1906 | -1.3718 | 0.6039 | 0.1811 | -199.9255 | -178.0778 | -2.3382 | -2.3508 |
| 0.6051 | 2.2054 | 12800 | 0.6496 | -1.0959 | -1.2648 | 0.6053 | 0.1689 | -189.2301 | -168.6040 | -2.3542 | -2.3658 |
| 0.6223 | 2.2743 | 13200 | 0.6502 | -1.0865 | -1.2588 | 0.6069 | 0.1723 | -188.6267 | -167.6660 | -2.3460 | -2.3579 |
| 0.6245 | 2.3432 | 13600 | 0.6506 | -1.0806 | -1.2530 | 0.5983 | 0.1724 | -188.0497 | -167.0715 | -2.3462 | -2.3583 |
| 0.5716 | 2.4121 | 14000 | 0.6511 | -1.0306 | -1.1979 | 0.5941 | 0.1672 | -182.5368 | -162.0786 | -2.3533 | -2.3651 |
| 0.6078 | 2.4810 | 14400 | 0.6506 | -1.0889 | -1.2642 | 0.6004 | 0.1753 | -189.1684 | -167.9059 | -2.3417 | -2.3540 |
| 0.6112 | 2.5500 | 14800 | 0.6500 | -1.1067 | -1.2865 | 0.5971 | 0.1798 | -191.4036 | -169.6898 | -2.3390 | -2.3514 |
| 0.5773 | 2.6189 | 15200 | 0.6508 | -1.0435 | -1.2146 | 0.6025 | 0.1712 | -184.2123 | -163.3605 | -2.3468 | -2.3588 |
| 0.5983 | 2.6878 | 15600 | 0.6505 | -1.0660 | -1.2397 | 0.6018 | 0.1737 | -186.7185 | -165.6157 | -2.3419 | -2.3540 |
| 0.5983 | 2.7567 | 16000 | 0.6501 | -1.0707 | -1.2465 | 0.6029 | 0.1758 | -187.3989 | -166.0839 | -2.3408 | -2.3530 |
| 0.5956 | 2.8256 | 16400 | 0.6500 | -1.0594 | -1.2333 | 0.6008 | 0.1739 | -186.0803 | -164.9520 | -2.3429 | -2.3550 |
| 0.6221 | 2.8946 | 16800 | 0.6499 | -1.0592 | -1.2333 | 0.6041 | 0.1742 | -186.0846 | -164.9336 | -2.3430 | -2.3551 |
| 0.6096 | 2.9635 | 17200 | 0.6500 | -1.0595 | -1.2334 | 0.6046 | 0.1739 | -186.0905 | -164.9614 | -2.3429 | -2.3549 |
### Framework versions
- Transformers 4.41.2
- Pytorch 2.1.2
- Datasets 2.19.2
- Tokenizers 0.19.1
|
cs552-mlp/phi3-arc
|
cs552-mlp
| 2024-06-10T13:38:56Z | 1 | 0 |
peft
|
[
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:unsloth/Phi-3-mini-4k-instruct-bnb-4bit",
"base_model:adapter:unsloth/Phi-3-mini-4k-instruct-bnb-4bit",
"region:us"
] | null | 2024-06-10T13:38:40Z |
---
library_name: peft
base_model: unsloth/Phi-3-mini-4k-instruct-bnb-4bit
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
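Pending the official snippet, a hedged sketch that attaches this PEFT adapter to its documented base model (device placement is an assumption):
```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer
base_id = "unsloth/Phi-3-mini-4k-instruct-bnb-4bit"  # from this card's metadata
adapter_id = "cs552-mlp/phi3-arc"
tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id, device_map="auto")
model = PeftModel.from_pretrained(base, adapter_id)  # attach the LoRA adapter
```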
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.11.1
|
Kubermatic/DeepCNCF
|
Kubermatic
| 2024-06-10T13:37:52Z | 4 | 0 |
peft
|
[
"peft",
"safetensors",
"base_model:google/gemma-1.1-7b-it",
"base_model:adapter:google/gemma-1.1-7b-it",
"license:mit",
"region:us"
] | null | 2024-06-03T22:07:22Z |
---
license: mit
base_model: google/gemma-1.1-7b-it
library_name: peft
---
|
qubvel-hf/ahnet
|
qubvel-hf
| 2024-06-10T13:37:04Z | 52 | 0 |
transformers
|
[
"transformers",
"safetensors",
"pytorch_model_hub_mixin",
"model_hub_mixin",
"endpoints_compatible",
"region:us"
] | null | 2024-06-10T13:22:19Z |
---
tags:
- pytorch_model_hub_mixin
- model_hub_mixin
---
This model has been pushed to the Hub using the [PytorchModelHubMixin](https://huggingface.co/docs/huggingface_hub/package_reference/mixins#huggingface_hub.PyTorchModelHubMixin) integration (a minimal usage sketch follows the links below):
- Library: [More Information Needed]
- Docs: [More Information Needed]
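As a hedged illustration of the mixin pattern, the `AHNet` class below is hypothetical; the real architecture and constructor arguments are not documented in this card:
```python
import torch
from huggingface_hub import PyTorchModelHubMixin
class AHNet(torch.nn.Module, PyTorchModelHubMixin):
    """Hypothetical stand-in for the undocumented architecture."""
    def __init__(self, in_features: int = 3, num_classes: int = 2):
        super().__init__()
        self.head = torch.nn.Linear(in_features, num_classes)
    def forward(self, x):
        return self.head(x)
# The mixin adds from_pretrained/save_pretrained/push_to_hub to the module;
# weights and config are fetched directly from the Hub repository.
model = AHNet.from_pretrained("qubvel-hf/ahnet")
```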
|
cs552-mlp/phi3-openbookqa
|
cs552-mlp
| 2024-06-10T13:31:07Z | 1 | 0 |
peft
|
[
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:unsloth/Phi-3-mini-4k-instruct-bnb-4bit",
"base_model:adapter:unsloth/Phi-3-mini-4k-instruct-bnb-4bit",
"region:us"
] | null | 2024-06-10T13:30:44Z |
---
library_name: peft
base_model: unsloth/Phi-3-mini-4k-instruct-bnb-4bit
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
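Pending the official snippet, a hedged loading sketch based on the adapter metadata above (device placement is an assumption):
```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer
base_id = "unsloth/Phi-3-mini-4k-instruct-bnb-4bit"  # from this card's metadata
adapter_id = "cs552-mlp/phi3-openbookqa"
tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id, device_map="auto")
model = PeftModel.from_pretrained(base, adapter_id)  # attach the LoRA adapter
```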
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.11.1
|
bakanaims/bert-uncased-AG-News
|
bakanaims
| 2024-06-10T13:30:09Z | 106 | 0 |
transformers
|
[
"transformers",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:google-bert/bert-base-uncased",
"base_model:finetune:google-bert/bert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-06-10T13:29:21Z |
---
license: apache-2.0
base_model: bert-base-uncased
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: bert-uncased-AG-News
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-uncased-AG-News
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7208
- Balanced Accuracy: 0.8720
- Accuracy: 0.8667
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (a hedged reconstruction sketch follows the list):
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
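A hedged reconstruction of these settings (the output directory is an assumption; Adam betas and epsilon match the `transformers` defaults):
```python
from transformers import TrainingArguments
training_args = TrainingArguments(
    output_dir="bert-uncased-AG-News",  # assumption
    learning_rate=1e-4,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=5,
)
```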
### Training results
| Training Loss | Epoch | Step | Validation Loss | Balanced Accuracy | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:-----------------:|:--------:|
| 1.3219 | 1.0 | 25 | 0.8636 | 0.7889 | 0.79 |
| 0.6342 | 2.0 | 50 | 0.5691 | 0.8689 | 0.86 |
| 0.2991 | 3.0 | 75 | 0.5546 | 0.8602 | 0.86 |
| 0.1403 | 4.0 | 100 | 0.6923 | 0.8719 | 0.8667 |
| 0.0561 | 5.0 | 125 | 0.7208 | 0.8720 | 0.8667 |
### Framework versions
- Transformers 4.40.2
- Pytorch 2.3.0+cu121
- Datasets 2.19.1
- Tokenizers 0.19.1
|
vicky4s4s/danube2-1.8b-chat
|
vicky4s4s
| 2024-06-10T13:28:56Z | 129 | 0 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"gpt",
"llm",
"large language model",
"h2o-llmstudio",
"conversational",
"en",
"arxiv:2401.16818",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-06-10T13:14:16Z |
---
language:
- en
library_name: transformers
license: apache-2.0
tags:
- gpt
- llm
- large language model
- h2o-llmstudio
thumbnail: >-
https://h2o.ai/etc.clientlibs/h2o/clientlibs/clientlib-site/resources/images/favicon.ico
pipeline_tag: text-generation
---
# Model Card
## Summary
h2o-danube2-1.8b-chat is a chat fine-tuned model by H2O.ai with 1.8 billion parameters. We release three versions of this model:
| Model Name | Description |
|:-----------------------------------------------------------------------------------|:----------------|
| [h2oai/h2o-danube2-1.8b-base](https://huggingface.co/h2oai/h2o-danube2-1.8b-base) | Base model |
| [h2oai/h2o-danube2-1.8b-sft](https://huggingface.co/h2oai/h2o-danube2-1.8b-sft) | SFT tuned |
| [h2oai/h2o-danube2-1.8b-chat](https://huggingface.co/h2oai/h2o-danube2-1.8b-chat) | SFT + DPO tuned |
This model was trained using [H2O LLM Studio](https://github.com/h2oai/h2o-llmstudio).
## Model Architecture
We adjust the Llama 2 architecture for a total of around 1.8b parameters. For details, please refer to our [Technical Report](https://arxiv.org/abs/2401.16818). We use the Mistral tokenizer with a vocabulary size of 32,000 and train our model up to a context length of 8,192.
The details of the model architecture are listed below (a hedged `MistralConfig` sketch follows the table):
| Hyperparameter | Value |
|:----------------|:-------|
| n_layers | 24 |
| n_heads | 32 |
| n_query_groups | 8 |
| n_embd | 2560 |
| vocab size | 32000 |
| sequence length | 8192 |
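A hedged mapping of this table onto a `transformers` configuration; `intermediate_size` is taken from the module printout later in this card, and unlisted fields are left at library defaults, so the values should be checked against the released `config.json`:
```python
from transformers import MistralConfig
config = MistralConfig(
    num_hidden_layers=24,          # n_layers
    num_attention_heads=32,        # n_heads
    num_key_value_heads=8,         # n_query_groups
    hidden_size=2560,              # n_embd
    vocab_size=32000,
    max_position_embeddings=8192,  # sequence length
    intermediate_size=6912,
)
```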
## Usage
To use the model with the `transformers` library on a machine with GPUs, first make sure a recent enough version is installed (quoting the requirement keeps the shell from treating `>` as a redirect):
```bash
pip install "transformers>=4.39.3"
```
```python
import torch
from transformers import pipeline
pipe = pipeline(
"text-generation",
model="vicky4s4s/danube2-1.8b-chat",
torch_dtype=torch.bfloat16,
device_map="auto",
)
# We use the HF Tokenizer chat template to format each message
# https://huggingface.co/docs/transformers/main/en/chat_templating
messages = [
{"role": "user", "content": "Why is drinking water so healthy?"},
]
prompt = pipe.tokenizer.apply_chat_template(
messages,
tokenize=False,
add_generation_prompt=True,
)
res = pipe(
prompt,
max_new_tokens=256,
)
print(res[0]["generated_text"])
```
This applies the correct prompt format out of the box:
```
<|prompt|>Why is drinking water so healthy?</s><|answer|>
```
## Quantization and sharding
You can load the model with quantization by specifying `load_in_8bit=True` or `load_in_4bit=True`, and shard it across multiple GPUs by setting `device_map="auto"`.
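A minimal hedged sketch (the dtype choice is an assumption, not specified by the card):
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
# Hedged sketch: 4-bit loading with automatic multi-GPU sharding.
model = AutoModelForCausalLM.from_pretrained(
    "vicky4s4s/danube2-1.8b-chat",
    load_in_4bit=True,   # or load_in_8bit=True
    device_map="auto",   # shards layers across available GPUs
    torch_dtype=torch.float16,  # assumption
)
tokenizer = AutoTokenizer.from_pretrained("vicky4s4s/danube2-1.8b-chat")
```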
## Model Architecture
```
MistralForCausalLM(
(model): MistralModel(
(embed_tokens): Embedding(32000, 2560, padding_idx=0)
(layers): ModuleList(
(0-23): 24 x MistralDecoderLayer(
(self_attn): MistralAttention(
(q_proj): Linear(in_features=2560, out_features=2560, bias=False)
(k_proj): Linear(in_features=2560, out_features=640, bias=False)
(v_proj): Linear(in_features=2560, out_features=640, bias=False)
(o_proj): Linear(in_features=2560, out_features=2560, bias=False)
(rotary_emb): MistralRotaryEmbedding()
)
(mlp): MistralMLP(
(gate_proj): Linear(in_features=2560, out_features=6912, bias=False)
(up_proj): Linear(in_features=2560, out_features=6912, bias=False)
(down_proj): Linear(in_features=6912, out_features=2560, bias=False)
(act_fn): SiLU()
)
(input_layernorm): MistralRMSNorm()
(post_attention_layernorm): MistralRMSNorm()
)
)
(norm): MistralRMSNorm()
)
(lm_head): Linear(in_features=2560, out_features=32000, bias=False)
)
```
## Benchmarks
### 🤗 Open LLM Leaderboard
| Benchmark | acc_n |
|:--------------|:--------:|
| Average | 48.44 |
| ARC-challenge | 43.43 |
| Hellaswag | 73.54 |
| MMLU | 37.77 |
| TruthfulQA | 39.96 |
| Winogrande | 69.77 |
| GSM8K | 26.16 |
### MT-Bench
```
First Turn: 6.23
Second Turn: 5.34
Average: 5.79
```

## Disclaimer
Please read this disclaimer carefully before using the large language model provided in this repository. Your use of the model signifies your agreement to the following terms and conditions.
- Biases and Offensiveness: The large language model is trained on a diverse range of internet text data, which may contain biased, racist, offensive, or otherwise inappropriate content. By using this model, you acknowledge and accept that the generated content may sometimes exhibit biases or produce content that is offensive or inappropriate. The developers of this repository do not endorse, support, or promote any such content or viewpoints.
- Limitations: The large language model is an AI-based tool and not a human. It may produce incorrect, nonsensical, or irrelevant responses. It is the user's responsibility to critically evaluate the generated content and use it at their discretion.
- Use at Your Own Risk: Users of this large language model must assume full responsibility for any consequences that may arise from their use of the tool. The developers and contributors of this repository shall not be held liable for any damages, losses, or harm resulting from the use or misuse of the provided model.
- Ethical Considerations: Users are encouraged to use the large language model responsibly and ethically. By using this model, you agree not to use it for purposes that promote hate speech, discrimination, harassment, or any form of illegal or harmful activities.
- Reporting Issues: If you encounter any biased, offensive, or otherwise inappropriate content generated by the large language model, please report it to the repository maintainers through the provided channels. Your feedback will help improve the model and mitigate potential issues.
- Changes to this Disclaimer: The developers of this repository reserve the right to modify or update this disclaimer at any time without prior notice. It is the user's responsibility to periodically review the disclaimer to stay informed about any changes.
By using the large language model provided in this repository, you agree to accept and comply with the terms and conditions outlined in this disclaimer. If you do not agree with any part of this disclaimer, you should refrain from using the model and any content generated by it.
|
iloncka/exp_5_old_bg_raw-subs_1_v_5_convnext_nano.in12k_ft_in1k_ep_60
|
iloncka
| 2024-06-10T13:26:57Z | 0 | 0 |
fastai
|
[
"fastai",
"region:us"
] | null | 2024-06-10T13:25:41Z |
---
tags:
- fastai
---
# Amazing!
🥳 Congratulations on hosting your fastai model on the Hugging Face Hub!
# Some next steps
1. Fill out this model card with more information (see the template below and the [documentation here](https://huggingface.co/docs/hub/model-repos))!
2. Create a demo in Gradio or Streamlit using 🤗 Spaces ([documentation here](https://huggingface.co/docs/hub/spaces)).
3. Join the fastai community on the [Fastai Discord](https://discord.com/invite/YKrxeNn)!
Greetings fellow fastlearner 🤝! Don't forget to delete this content from your model card.
---
# Model card
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
|
datek/Qwen-Qwen1.5-0.5B-1718025779
|
datek
| 2024-06-10T13:24:07Z | 129 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-06-10T13:23:28Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
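Pending the official snippet, a hedged sketch based on this card's metadata (the prompt and generation length are illustrative assumptions):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
repo = "datek/Qwen-Qwen1.5-0.5B-1718025779"  # from this card's header
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(repo, device_map="auto")
messages = [{"role": "user", "content": "Hello! What can you do?"}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
output = model.generate(inputs, max_new_tokens=64)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```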
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Nick20241/1
|
Nick20241
| 2024-06-10T13:23:24Z | 0 | 0 | null |
[
"license:apache-2.0",
"region:us"
] | null | 2024-06-10T13:23:24Z |
---
license: apache-2.0
---
|