| Column | Type | Range |
|---|---|---|
| modelId | string | lengths 5 to 139 |
| author | string | lengths 2 to 42 |
| last_modified | timestamp[us, tz=UTC] | 2020-02-15 11:33:14 to 2025-09-04 00:37:20 |
| downloads | int64 | 0 to 223M |
| likes | int64 | 0 to 11.7k |
| library_name | string | 537 classes |
| tags | list | lengths 1 to 4.05k |
| pipeline_tag | string | 55 classes |
| createdAt | timestamp[us, tz=UTC] | 2022-03-02 23:29:04 to 2025-09-04 00:37:04 |
| card | string | lengths 11 to 1.01M |
**luna-code/sqlmodel-codeparrot-small-prefix**
- author: luna-code | last_modified: 2024-02-16T19:57:47Z | downloads: 3 | likes: 0
- library_name: peft | pipeline_tag: null | createdAt: 2024-02-14T21:24:06Z
- tags: peft, safetensors, arxiv:1910.09700, base_model:codeparrot/codeparrot-small, base_model:adapter:codeparrot/codeparrot-small, region:us
---
library_name: peft
base_model: codeparrot/codeparrot-small
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
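A minimal sketch, assuming the adapter loads through the standard PEFT API; the repository and base-model IDs come from the card metadata, and the prompt is an arbitrary illustration:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

# IDs taken from the card metadata: a PEFT adapter on top of codeparrot/codeparrot-small.
base_model_id = "codeparrot/codeparrot-small"
adapter_id = "luna-code/sqlmodel-codeparrot-small-prefix"

tokenizer = AutoTokenizer.from_pretrained(base_model_id)
base_model = AutoModelForCausalLM.from_pretrained(base_model_id)
model = PeftModel.from_pretrained(base_model, adapter_id)  # attach the adapter

# Arbitrary prompt for illustration only.
prompt = "from sqlmodel import Field, SQLModel\n"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```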
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.7.1
**luna-code/sqlmodel-CodeGPT-small-py-prefix**
- author: luna-code | last_modified: 2024-02-16T19:57:13Z | downloads: 0 | likes: 0
- library_name: peft | pipeline_tag: null | createdAt: 2024-02-14T20:35:47Z
- tags: peft, safetensors, arxiv:1910.09700, base_model:microsoft/CodeGPT-small-py, base_model:adapter:microsoft/CodeGPT-small-py, region:us
---
library_name: peft
base_model: microsoft/CodeGPT-small-py
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
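A minimal sketch, assuming the adapter loads through the standard PEFT API; the repository and base-model IDs come from the card metadata, and the prompt is an arbitrary illustration:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

# IDs taken from the card metadata: a PEFT adapter on top of microsoft/CodeGPT-small-py.
base_model_id = "microsoft/CodeGPT-small-py"
adapter_id = "luna-code/sqlmodel-CodeGPT-small-py-prefix"

tokenizer = AutoTokenizer.from_pretrained(base_model_id)
base_model = AutoModelForCausalLM.from_pretrained(base_model_id)
model = PeftModel.from_pretrained(base_model, adapter_id)  # attach the adapter

# Arbitrary prompt for illustration only.
prompt = "from sqlmodel import Field, SQLModel\n"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```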
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.7.1
**pruhtopia/bert-toc-classification**
- author: pruhtopia | last_modified: 2024-02-16T19:54:00Z | downloads: 94 | likes: 0
- library_name: transformers | pipeline_tag: text-classification | createdAt: 2024-02-16T19:53:49Z
- tags: transformers, safetensors, bert, text-classification, autotrain, dataset:autotrain-rqsu1-nelrs/autotrain-data, autotrain_compatible, endpoints_compatible, region:us
---
tags:
- autotrain
- text-classification
widget:
- text: "I love AutoTrain"
datasets:
- autotrain-rqsu1-nelrs/autotrain-data
---
# Model Trained Using AutoTrain
- Problem type: Text Classification
## Validation Metrics
- loss: 0.8727797865867615
- f1_macro: 0.4743843561639563
- f1_micro: 0.7148333333333333
- f1_weighted: 0.6973841930196186
- precision_macro: 0.689196184834718
- precision_micro: 0.7148333333333333
- precision_weighted: 0.7078896728033317
- recall_macro: 0.41641019062275997
- recall_micro: 0.7148333333333333
- recall_weighted: 0.7148333333333333
- accuracy: 0.7148333333333333
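As an illustrative sketch, assuming the checkpoint works with the standard transformers text-classification pipeline (the input sentence is the widget example from the card, and the label set depends on the AutoTrain dataset):

```python
from transformers import pipeline

# Illustrative only: the labels come from the AutoTrain dataset this model was trained on.
classifier = pipeline("text-classification", model="pruhtopia/bert-toc-classification")
print(classifier("I love AutoTrain"))
```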
**haochenhe/lab1_random**
- author: haochenhe | last_modified: 2024-02-16T19:53:49Z | downloads: 119 | likes: 0
- library_name: transformers | pipeline_tag: text2text-generation | createdAt: 2024-02-16T04:26:50Z
- tags: transformers, safetensors, marian, text2text-generation, generated_from_trainer, dataset:kde4, base_model:Helsinki-NLP/opus-mt-en-fr, base_model:finetune:Helsinki-NLP/opus-mt-en-fr, license:apache-2.0, autotrain_compatible, endpoints_compatible, region:us
---
license: apache-2.0
base_model: Helsinki-NLP/opus-mt-en-fr
tags:
- generated_from_trainer
datasets:
- kde4
model-index:
- name: lab1_random
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# lab1_random
This model is a fine-tuned version of [Helsinki-NLP/opus-mt-en-fr](https://huggingface.co/Helsinki-NLP/opus-mt-en-fr) on the kde4 dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
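As an illustrative sketch, assuming the checkpoint keeps the English-to-French direction of its base model Helsinki-NLP/opus-mt-en-fr and works with the standard transformers translation pipeline:

```python
from transformers import pipeline

# Assumption: same translation direction (en -> fr) as the base model.
translator = pipeline("translation", model="haochenhe/lab1_random")
print(translator("The file could not be opened.")[0]["translation_text"])
```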
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (see the sketch after this list for how they map onto the transformers API):
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
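A rough sketch of the settings above expressed as `Seq2SeqTrainingArguments`; the original training script is not included in the card, and `output_dir` is a placeholder:

```python
from transformers import Seq2SeqTrainingArguments

# Mirrors the hyperparameters listed above; Adam betas/epsilon are the library defaults.
training_args = Seq2SeqTrainingArguments(
    output_dir="lab1_random",        # placeholder
    learning_rate=2e-5,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=64,
    seed=42,
    num_train_epochs=3,
    lr_scheduler_type="linear",
    fp16=True,                       # "Native AMP" mixed precision
)
```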
### Training results
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu121
- Datasets 2.17.0
- Tokenizers 0.15.2
**longluu/Medical-QA-deberta-MRQA-COVID-QA**
- author: longluu | last_modified: 2024-02-16T19:51:14Z | downloads: 101 | likes: 2
- library_name: transformers | pipeline_tag: question-answering | createdAt: 2024-02-16T19:45:35Z
- tags: transformers, safetensors, deberta-v2, question-answering, license:mit, endpoints_compatible, region:us
---
license: mit
pipeline_tag: question-answering
widget:
- text: "How many children were infected by HIV-1 in 2008-2009, worldwide?"
context: "Functional Genetic Variants in DC-SIGNR Are Associated with Mother-to-Child Transmission of HIV-1 https://www.ncbi.nlm.nih.gov/pmc/articles/PMC2752805/ Boily-Larouche, Geneviève; Iscache, Anne-Laure; Zijenah, Lynn S.; Humphrey, Jean H.; Mouland, Andrew J.; Ward, Brian J.; Roger, Michel 2009-10-07 DOI:10.1371/journal.pone.0007211 License:cc-by Abstract: BACKGROUND: Mother-to-child transmission (MTCT) is the main cause of HIV-1 infection in children worldwide. Given that the C-type lectin receptor, dendritic cell-specific ICAM-grabbing non-integrin-related (DC-SIGNR, also known as CD209L or liver/lymph node–specific ICAM-grabbing non-integrin (L-SIGN)), can interact with pathogens including HIV-1 and is expressed at the maternal-fetal interface, we hypothesized that it could influence MTCT of HIV-1. METHODS AND FINDINGS: To investigate the potential role of DC-SIGNR in MTCT of HIV-1, we carried out a genetic association study of DC-SIGNR in a well-characterized cohort of 197 HIV-infected mothers and their infants recruited in Harare, Zimbabwe. Infants harbouring two copies of DC-SIGNR H1 and/or H3 haplotypes (H1-H1, H1-H3, H3-H3) had a 3.6-fold increased risk of in utero (IU) (P = 0.013) HIV-1 infection and a 5.7-fold increased risk of intrapartum (IP) (P = 0.025) HIV-1 infection after adjusting for a number of maternal factors. The implicated H1 and H3 haplotypes share two single nucleotide polymorphisms (SNPs) in promoter region (p-198A) and intron 2 (int2-180A) that were associated with increased risk of both IU (P = 0.045 and P = 0.003, respectively) and IP (P = 0.025, for int2-180A) HIV-1 infection. The promoter variant reduced transcriptional activity in vitro. In homozygous H1 infants bearing both the p-198A and int2-180A mutations, we observed a 4-fold decrease in the level of placental DC-SIGNR transcripts, disproportionately affecting the expression of membrane-bound isoforms compared to infant noncarriers (P = 0.011). CONCLUSION: These results suggest that DC-SIGNR plays a crucial role in MTCT of HIV-1 and that impaired placental DC-SIGNR expression increases risk of transmission. Text: Without specific interventions, the rate of HIV-1 mother-tochild transmission (MTCT) is approximately 15-45% [1] . UNAIDS estimates that last year alone, more than 400,000 children were infected worldwide, mostly through MTCT and 90% of them lived in sub-Saharan Africa. In the most heavilyaffected countries, such as Zimbabwe, HIV-1 is responsible for one third of all deaths among children under the age of five. MTCT of HIV-1 can occur during pregnancy (in utero, IU), delivery (intrapartum, IP) or breastfeeding (postpartum, PP). High maternal viral load, low CD4 cells count, vaginal delivery, low gestational age have all been identified as independent factors associated with MTCT of HIV-1 [1] . Although antiretrovirals can reduce MTCT to 2%, limited access to timely diagnostics and drugs in many developing world countries limits the potential impact of this strategy. A better understanding of the mechanisms acting at the maternal-fetal interface is crucial for the design of alternative interventions to antiretroviral therapy for transmission prevention. Dendritic cell-specific ICAM-grabbing non-integrin-related (DC-SIGNR, also known as CD209L or liver/lymph node-specific ICAM-grabbing non-integrin (L-SIGN)) can interact with a plethora of pathogens including HIV-1 and is expressed in placental capillary endothelial cells [2] . 
DC-SIGNR is organized in three distinct domains, an N-terminal cytoplasmic tail, a repeat region containing seven repeat of 23 amino acids and a C-terminal domain implicated in pathogen binding. Alternative splicing of DC-SIGNR gene leads to the production of a highly diversify isoforms repertoire which includes membrane-bound and soluble isoforms [3] . It has been proposed that interaction between DC-SIGNR and HIV-1 might enhance viral transfer to other susceptible cell types [2] but DC-SIGNR can also internalize and mediate proteasome-dependant degradation of viruses [4] that may differently affect the outcome of infection. Given the presence of DC-SIGNR at the maternal-fetal interface and its interaction with HIV-1, we hypothesized that it could influence MTCT of HIV-1. To investigate the potential role of DC-SIGNR in MTCT of HIV-1, we carried out a genetic association study of DC-SIGNR in a well-characterized cohort of HIV-infected mothers and their infants recruited in Zimbabwe, and identified specific DC-SIGNR variants associated with increased risks of HIV transmission. We further characterized the functional impact of these genetic variants on DC-SIGNR expression and show that they affect both the level and type of DC-SIGNR transcripts produced in the placenta. Samples consisted of stored DNA extracts obtained from 197 mother-child pairs co-enrolled immediately postpartum in the ZVITAMBO Vitamin A supplementation trial (Harare, Zimbabwe) and followed at 6 weeks, and 3-monthly intervals up to 24 months. The ZVITAMBO project was a randomized placebocontrolled clinical trial that enrolled 14,110 mother-child pairs, between November 1997 and January 2000, with the main objective of investigating the impact of immediate postpartum vitamin A supplementation on MTCT of HIV-1. The samples used in the present study were from mother-child pairs randomly assigned to the placebo group of the ZVITAMBO project. Antiretroviral prophylaxis for HIV-1-positive antenatal women was not available in the Harare public-sector during ZVITAMBO patient recruitment. The samples were consecutively drawn from two groups: 97 HIV-1-positive mother/HIV-1-positive child pairs and 100 HIV-1-positive mother/HIV-negative child pairs. Mother's serological status was determined by ELISA and confirmed by Western Blot. Infants were considered to be infected if they were HIV-1 seropositive at 18 months or older and had two or more positive HIV-1-DNA polymerase chain reaction (PCR) results at earlier ages. 100 infants were considered to be uninfected as they were ELISA negative at 18 months or older and had two DNA PCR negative results from samples collected at a younger age. Of the 97 HIV-1-infected infants, 57 were infected IU, 11 were infected IP, and 17 were infected PP as determined by PCR analyses of blood samples collected at birth, 6 weeks, 3 and 6 months of age and according to the following definitions adapted from Bryson and colleagues [5] . Briefly, infants who were DNA PCR positive at birth were infected IU. Infants with negative PCR results from sample obtained at birth but who become positive by 6 weeks of age were infected IP. Infants with negative PCR results at birth and 6 weeks of age but who subsequently became DNA PCR positive were considered to be infected during the PP period. In the analysis comparing the 3 different modes of MTCT, 12 HIV-1-infected infants were excluded because the PCR results were not available at 6 weeks of age. 
Full methods for recruitment, baseline characteristics collection, laboratory procedures have been described elsewhere [6] . The nucleotide sequence variation of the entire promoter, coding and part of 39-UTR regions of DC-SIGNR gene in the study population was determined previously [7] . Haplotype reconstruction was performed using Bayesian statistical method implemented in PHASE [8] , version 2.1.1, using single nucleotide polymorphism (SNP) with a minimum allele frequency (MAF) of 2%. We applied the algorithm five times, using different randomly generated seeds, and consistent results were obtained across runs ( Figure 1 ). Fifteen haplotype-tagged SNPs (htSNPs) were identified by the HaploBlockFinder software [9] with a MAF $5%. These htSNPs were genotyped in the 197 infants by direct PCR sequencing analysis as we have described previously [7] . The DC-SIGNR exon 4 repeat region genotype was determined by PCR amplification followed by migration in 1.5% agarose gels [10] . DNA sequences in the promoter region were analysed with the TESS interface (http//:www.cbil.upenn.edu/tess) for putative transcription factors binding sites using the TRANSFAC database. Luciferase reporter assays using pGL2-Basic vector were performed in order to investigate the functional effect of mutations on DC-SIGNR promoter activity. Genomic DNA from subjects homozygous for the promoter variants and WT was amplified from nucleotide position 2715 to 21 and cloned between the BglII and HindIII multiple cloning sites in the pGL2-Basic vector which harbours a reporter firefly luciferase gene downstream (Invitrogen Canada inc, Burlington, Canada). All recombinants clones were verified by DNA sequencing. The firefly luciferase test reporter vector was co-transfected at a ratio of 10:1 with the constitutive expressor of Renilla luciferase, phRL-CMV (Promega, Madison, WI, USA). We cultured HeLa cells in 6 wells plates (2610 5 cells) and transfected them the following day using lipofectamine (Invitrogen) according to the manufacturer. Cells were lysed and luciferase assays were performed using 20 mg of protein extract according to the manufacturer (Promega) at 44 h post-transfection. Firefly luciferase activity was normalized to Renilla luciferase activity. 0 mg, 0,5 mg or 1 mg CMV-Tat vector was transfected with LTR-Luc as a positive control in these experiments. We carried out lucierase assays in triplicate in three independent experiments. Results are expressed as mean6 standard error of the mean (S.E.M). First-term placental tissues were obtained from abortions following voluntary interruption of pregnancy at CHUM Hôpital Saint-Luc (Montreal, Canada). Tissues from 3 H1 (associated with MTCT of HIV-1) and 3 H15 (wild-type) homozygous haplotypes were used to analyse possible differences in isoform expression. Total placental RNAs were extracted by MasterPure DNA and RNA Extraction Kit (Epicentre Biotechnologies, Madison, WI, USA) according to the manufacturer. Fragments corresponding to the DC-SIGNR coding region were reversed transcribed (RT) and then amplified by nested PCR with the following primers; RT primers RR, first PCR RF and RR and second PCR RcF and RcR according to Liu and colleagues [11] . 1 mg of total RNA was reverse transcribed with Expand RT (Roche Applied Science, Indianapolis, IN, USA) according to the manufacturer and were PCR-amplified with DNA Platinum Taq Polymerase (Invitrogen). 
Major PCR products from the second PCR reaction were gel extracted with the Qiagen Gel Extraction Kit (Qiagen Canada inc, Mississauga, ON, Canada) and cloned using the TOPO TA Cloning Kit for sequencing (Invitrogen). For each placenta, 15 different clones were randomly selected and amplified with M13 primers and sequenced with ABI PRISM 3100 capillary automated sequencer (Applied Biosystems, Foster City, CA, USA). Sequences were analysed and aligned with GeneBank reference sequence NM_014257 using Lasergene software (DNA Stars, Madison, WI, USA). Quantitative expression of DC-SIGNR isoforms 1,5 mg of placental RNA was reverse transcribed using 2.5 mM of Oligo dT 20 and Expand RT in 20 ml volume according to the manufacturer (Roche Applied Science). 15 ng of total cDNA in a final volume of 20 ml was used to perform quantitative real-time PCR using Universal Express SYBR GreenER qPCR Supermix (Invitrogen) on a Rotor Gene Realtime Rotary Analyser (Corbett Life Science, Sydney, Australia). Samples from 2 subjects in each group were used because RNA quality of others was not suitable for a qRT-PCR analysis. Amplification of all DC-SIGNR isoforms was performed using an exon 5 specific primer pair (Table S1 ). Membrane-bound isoforms were amplified using primers specific for exon 3, corresponding to the common trans-membrane domain of DC-SIGNR. Primers were targeted to the exon-exon junction and RNA extracts were treated with DNase (Fermantas International inc, Burlington, ON, Canada) to avoid amplification of contaminant DNA. Standard curves (50-500 000 copies per reaction) were generated using serial dilution of a full-length DC-SIGNR or commercial GAPDH (Invitrogen) plasmid DNA. All qPCR reactions had efficiencies ranging from 99% to 100%, even in the presence of 20 ng of non-specific nucleic acids, and therefore could be compared. The copy number of unknown samples was estimated by placing the measured PCR cycle number (crossing threshold) on the standard curve. To correct for differences in both RNA quality and quantity between samples, the expression levels of transcripts were normalised to the reference GAPDH gene transcripts. GAPDH primer sequences were kindly provided by A. Mes-Masson at the CHUM. The results are presented as target gene copy number per 10 5 copies of GAPDH. The ratio of membrane-bound isoforms was calculated as E3/E5. Soluble isoforms were calculated by subtracting membrane-bound from total isoforms. We carried out qPCR assays in triplicate in three independent experiments. Results are expressed as mean6S.E.M. Statistical analysis was performed using the GraphPad PRISM 5.0 for Windows (GraphPad Software inc, San Diego, CA, USA). Differences in baseline characteristics and genotypic frequencies of haplotypes or htSNPs were compared between groups using the x 2 analysis or Fisher's exact test. Logistic regression analysis was used to estimate odds ratios (OR) for each genotype and baseline risk factors. Multiple logistic regression was used to define independent predictors identified as significant in the crude analysis. ORs and 95% confidence interval were calculated with the exact method. Comparisons of continuous variables between groups were assessed with the unpaired two-tailed Student's t test when variables were normally distributed and with the Mann-Whitney U test when otherwise. Differences were considered significant at P,0.05. 
Written informed consent was obtained from all mothers who participated in the study and the ZVITAMBO trial and the investigation reported in this paper were approved by The We carried out an association study of DC-SIGNR polymorphism in 197 infants born to untreated HIV-1-infected mothers recruited in Harare, Zimbabwe. Among them, 97 infants were HIV-1-infected and 100 infants remained uninfected. Of the 97 HIV-1-infected infants, 57 were infected IU, 11 were infected IP, and 17 were infected PP. Timing of infection was not determined for 12 HIV-1-infected infants. Baseline characteristics of mothers and infants are presented in Table 1 . Maternal age and CD4 cell count, child sex, mode of delivery, duration of membrane rupture and gestational age were similar among all groups. However, maternal viral load .29 000 copies/ml was associated with increased risk in both IU and PP with odds ratios (OR) of 3.64 (95% CI = 1.82-7.31, P = 0.0002) and 4.45 (95% CI = 1.50-13.2, P = 0.0045) for HIV-1 transmission, respectively. Fifteen haplotype-tagged SNPs (htSNPs) corresponding to the 15 major DC-SIGNR haplotypes ( Figure 1 ) described among Zimbabweans [7] were genotyped in our study samples (Tables S2 and S3 ). H1 (31%) and H3 (11%) were the most frequent haplotypes observed (Figure 1 ). Being homozygous for the H1 haplotype was associated with increased risk of both IU (OR: 4.42, P = 0.022) and PP (OR: 7.31, P = 0.016) HIV-1 transmission ( Table 2) . Infants harbouring two copy combinations of H1 and/ or H3 haplotypes (H1-H1, H1-H3 or H3-H3) had increased risk of IU (OR: 3.42, P = 0.007) and IP (OR: 5.71, P = 0.025) but not PP (P = 0.098) HIV-1 infection compared to infant noncarriers ( Table 2 ). The latter associations remained significant after adjustment was made for the maternal viral load for both IU (OR: 3.57, 95% CI = 1.30-9.82, P = 0.013) and IP (OR: 5.71, 95% CI = 1.40-23.3, P = 0.025) HIV-1 transmission. The H1 and H3 haplotypes share a cluster of mutations (p-198A, int2-391C, int2-180A, ex4RPT, int5+7C) ( Figure 1 ). Of these, the p-198A and int2-180A variants were significantly associated with MTCT of HIV-1 (Table S2 ). In the unadjusted regression analysis, homozygous infants for the p-198A and int2-180A variants had increased risk of IU (OR: 2.07 P = 0.045, OR: 3.78, P = 0.003, respectively) and IP (OR: 2.47, P = 0.17, O.R: 5.71, P = 0.025, respectively) HIV-1 infection compared to heterozygote infants or noncarriers (Table 3) . When adjustment was made for maternal factors, only the association with the int2-180A variant remained significant for IU (OR: 3.83, 95% CI = 1.42-10.4, P = 0.008) and IP (O.R: 5.71, 95% CI = 1.40-23.3, P = 0.025) HIV-1 transmission. Thus, infants homozygous for DC-SIGNR variant int2-180A contained in H1 and H3 haplotypes were 4-fold to 6-fold more likely to be infected by HIV-1 during pregnancy or at delivery, respectively. Alternative splicing of the DC-SIGNR gene in the placenta produces both membrane-bound and soluble isoform repertoires [3] . The relative proportion of membrane bound and soluble DC-SIGNR could plausibly influence the susceptibility to HIV-1 infection [11] . We therefore hypothesized that the DC-SIGNR mutations associated with MTCT of HIV-1 would have an impact on both the level of DC-SIGNR expression and in the isoform repertoire produced. We investigated DC-SIGNR transcript expression in first-term placentas obtained after elective abortion. 
We cloned DC-SIGNR from placental tissues by RT-PCR from 3 homozygous H1 samples containing both the DC-SIGNR p-198AA and int2-180AA variants associated with HIV-1 transmission and 3 homozygous wild-type (WT) (p-198CC, int2-180GG) samples. Fifteen clones per sample were randomly selected for sequencing. As expected, we found an extensive repertoire of DC-SIGNR transcripts in all samples with 9 to 16 different isoforms per individual. A total of 65 distinct transcripts were identified ( Figure S1 ), of which 3 were full-length transcripts. 64 of the sequenced clones contained a total of 69 amino acid substitutions with 3 new C termini and 2 premature stop codons. However, the diversity was mostly attributable to the entire deletion of exon 2 or exon 3 or to variations in the length of the neck region (exon 4) of DC-SIGNR. The deletion of exon 3 eliminates the trans-membrane domain of the protein and leads to the expression of soluble DC-SIGNR isoforms [3] . Interestingly, the abundance of membrane-bound isoforms in placental tissues of the H1 homozygotes appears to be lower than that observed in samples from WT individuals ( Figure S1 ). The deletion of exon 3 was confirmed by sequencing and we hypothesize that the skipping of exon 3, could be due to the presence of the int2-180A mutation observed in infants with the H1 haplotype. In fact, this intron mutation is located 180 bp downstream from exon 3 and potentially modifies splicing events (Figure 2A ). We confirmed that the variation in transcript proportions seen between the two groups was also reflected at the level of mRNA expression in the placenta. To quantify membrane-bound vs soluble isoforms in placental samples from homozygous H1 and WT infants, we amplified the exon 5 (E5) sequence present in all DC-SIGNR isoforms (total transcripts). We then amplified exon 3 (E3) which is deleted in the soluble forms and then calculated the E3:E5 ratio. We found that placental tissues from homozygous H1 infants express a significantly lower proportion of membrane-bound DC-SIGNR (18%) compared to that in WT individuals (36%) (P = 0.004) ( Figure 2B ) suggesting that exon 3 skipping happens more frequently in presence of the DC-SIGNR int2-180A variant associated with MTCT of HIV-1. The DC-SIGNR int2-180A variant is always transmitted with the promoter mutation p-198A (Figure 1 ). In the unadjusted regression analysis, the p-198A variant was significantly associated with IU but not with IP and PP HIV-1 transmission (Table 3) . Computational transcription factor binding site analysis predicts Table 1 . Baseline characteristics of mother and infants risk factors for intrauterine (IU), intrapartum (IP) and postpartum (PP) mother-to-child HIV-1 transmission. Figure 3A ). The luciferase activity of the p-198A variant construct was significantly lower than that of the WT p-198C promoter construct (p-198C/A ratio = 2, P = 0.006) ( Figure 3B ) suggesting that DC-SIGNR p-198A affects promoter activity. The other promoter mutants (p-577C and p-323A) observed in the Zimbabwean population did not affect DC-SIGNR transcription in this assay ( Figure S2 ). To determine the net impact of the DC-SIGNR p-198A mutation on DC-SIGNR expression in the placenta, we quantitated the absolute number of total and membrane-bound DC-SIGNR transcripts in the H1 homozygote and wild-type placental samples as described earlier. 
The total number of DC-SIGNR transcripts was determined to be 6856213 (DC-SIGNR copies6S.E.M per 10 5 GAPDH copies) in the placental samples from homozygous H1 infants and was 4-fold lower compared to that found in placentas from WT individuals (27816638, P = 0.011) ( Figure 3C ). As suggested earlier, the int2-180A mutation might induce exon 3 skipping leading to a lower production of membrane-bound DC-SIGNR. Although, the decrease in the total number of DC-SIGNR transcripts in H1 homozygous placental samples containing both the p-198AA and int2-180AA variants affected the proportion of membrane-bound and soluble isoforms, the effect of these mutations was more pronounced on the membrane-bound isoforms with an 8-fold decrease (H1 = 117636.2 vs WT = 9906220.6, P = 0.003) compared to a 3-fold decrease in total soluble isoforms (H1 = 5686181.9 vs WT = 19256495.3, P = 0.03) ( Figure 3C ). Therefore, DC-SIGNR p-198A and int2-180A mutations associated with MTCT of HIV-1 significantly decreased the level of total placental DC-SIGNR transcripts, disproportionately affecting the membrane-bound isoform production. Table 3 . Associations between infant DC-SIGNR promoter p-198 and intron 2 (int2)-180 variants and intrauterine (IU), intrapartum (IP) and postpartum (PP) mother-to-child HIV-1 transmission. Our genetic results, supported by expression assay in placenta, suggest the involvement of DC-SIGNR in MTCT of HIV-1. Homozygosity for the haplotype H1 was associated with IU transmission in the unadjusted regression analysis. However, the association disappeared after adjustment was made for the maternal factors presumably because of the small number of H1 homozygote infants analysed in each groups. H1 and H3 were the most frequent haplotypes observed in the study population and they share a cluster of mutations (Figure 1 ). Grouping haplotypes H1 and H3 increased the power of the study and permitted the identification of specific DC-SIGNR mutations associated with MTCT of HIV-1. Indeed, two mutations shared by haplotypes H1 and H3 were associated with vertical transmission of HIV-1. The int2-180A was associated with a 4-fold increased risk of IU and 6fold increased risk of IP after adjustment for the maternal factors. Although the p-198A variant was associated with IU transmission, the association disappeared after adjustment was made for the maternal viral load. Nevertheless, we showed that this mutation reduces DC-SIGNR transcriptional activity in vitro and produces lower level of DC-SIGNR transcripts in placental tissues in combination with the int2-180A variant. Since int2-180A is always transmitted with p-198A on the MTCT associated combined haplotypes H1/H3, whereas p-198A is carried on other nonassociated haplotypes (Figure 1) , we can speculate that the p-198A mutation alone may have a minor effect in vivo whereas in combination with the int2-180A variant, they both act to reduce the level of placental DC-SIGNR expression resulting in an increased risk of MTCT of HIV-1. The majority of IU transmission occurs during the last trimester of pregnancy (reviewed in [12] ). Full-term placenta samples were not available for the current study and the expression assays were performed on first-term placental tissues. A previous study looking at DC-SIGNR placental isoforms repertoire in full-term placenta samples demonstrated similar diversity of DC-SIGNR transcripts as in the first-term placental tissues studied herein [3] . 
However, since levels of DC-SIGNR expression have never been compared between the different terms of pregnancy, it is not known whether DC-SIGNR expression varies during the course of pregnancy. Nevertheless, it is reasonable to assume that the inter-individual differences in both DC-SIGNR isoform repertoire and transcript levels observed between the H1 and WT homozygous infants would be reflected throughout the pregnancy. To date, most studies have focused on the potential role of DC-SIGNR in trans infection of HIV-1 in vitro [2, 10] . However, the multiple mechanisms involved in trans infection and redundancy among C-type lectin functions make it difficult to determine the actual participation of DC-SIGNR in this mode of infection in vivo [13, 14] . The strong correlation we observed between MTCT of HIV-1 and DC-SIGNR genetic variants producing low levels of DC-SIGNR in the placenta suggested that mechanisms other than DC-SIGNR-mediated trans infection might operate during vertical transmission of HIV-1. For example, DC-SIGNR has also been shown to function as a HIV-1 antigen-capturing receptor [15] . Chan and colleagues recently demonstrated that DC-SIGNR transfected CHO cells diminish SARS-CoV titers by enhanced capture and degradation of the virus in a proteasome-dependent manner [4] . Since endothelial cells express MHC-I and II, degraded viral antigens could then be presented to immune cells to elicit an adaptive immune response [16, 17] . The HIV-1 coreceptor CCR5, but not CD4, is co-expressed with DC-SIGNR on placental and blood-brain barrier (BBB) endothelial cells [18, 19] . HIV-1 gp120 binding to CCR5 receptor on endothelial cells compromises BBB integrity and enhances monocytes adhesion and transmigration across the BBB [20, 21] . It is thus possible that reduced expression of DC-SIGNR, particularly the membranebound isoforms, on placental capillary endothelial cells might favour HIV-1 binding to CCR5 receptor, instead of DC-SIGNR receptor, facilitating the migration of maternal HIV-1-infected cells across the placental barrier resulting in IU transmission of HIV-1. The int2-180A variant contained in the H1 and H3 haplotypes was associated with IP transmission suggesting that DC-SIGNR also affect transmission of HIV-1 during delivery. Little is known about the mechanisms underlying transmission of HIV-1 during delivery. Passage through the birth canal could potentially expose infants through a mucosal portal entry (presumably ophthalmic, skin, or gastrointestinal), whereas placental insult during delivery (physical or inflammatory) may enhance transplacental passage of maternal HIV-1-infected cells into foetal circulation [22, 23] . Such process called microtransfusion has been proposed in regards to the results obtain in a Malawian cohort. Kweik and colleagues found a significant association between levels of maternal DNA in umbilical cord blood and IP transmission of HIV-1 suggesting that passage of maternal infected cells through the placenta is likely to occur during delivery [22] . Thus, in a similar fashion as suggested earlier for IU transmission, the relatively lower level of DC-SIGNR in the placenta of homozygous infants harbouring the int2-180A variant could promote HIV-1 binding to CCR5 receptor on endothelial cells affecting the placental barrier integrity and facilitating the passage of maternal infected cells in foetal circulation during delivery. Beside DC-SIGNR, other HIV-1 receptors are known to influence MTCT of HIV-1 (reviewed in [24] ). 
Genetic variants in CCR5 have been shown to influence vertical transmission of HIV-1. CCR5 promoter variants resulting in higher expression of the receptor were associated with increased risk of MTCT of HIV-1 among sub-Saharan Africans [25, 26] . The 32-pb deletion polymorphism in CCR5 has be shown to protect from vertical transmission of HIV-1 [27] , but this variant is virtually absent among African populations [28] . High copy numbers of CCL3L1, a potent HIV-1 suppressive ligand for CCR5, are associated with higher chemokine production and lower risk of MTCT of HIV-1 among South African infants [29, 30] . Mannose-binding lectin (MBL) is an innate immune receptor synthesised in the liver and secreted in the bloodstream in response to inflammation signal. MBL promotes pathogen elimination by opsonization and phagocytosis, and reduced expression of MBL resulting from polymorphism in coding and non-coding regions has been associated with an increased risk of MTCT of HIV-1 [31, 32] . In this study, we demonstrate for the first time, the potential functional impact of DC-SIGNR mutations on its expression in the placenta and in vertical transmission of HIV-1. We believe that the presence of DC-SIGNR at the placental endothelial cell surface may protect infants from HIV-1 infection by capturing virus and promoting its degradation/presentation. However, in placenta containing low levels of DC-SIGNR, HIV-1 would preferentially binds CCR5 on endothelial cells resulting in a loss of placental barrier integrity and enhanced passage of maternal HIV-1-infected cells in foetal circulation leading to MTCT of HIV-1. This mechanism may also apply to other vertically-transmitted pathogens known to interact with DC-SIGNR such as HIV-2, hepatitis C and dengue viruses and warrant further investigation. Associations between child DC-SIGNR exon 4 repeated region genotypes and mother-to-child HIV-1 transmission.CI, Confidence interval; N, number; NA; not applicable; OR, odds ratio a P-value as determined by the Chi-square test. b Comparison between genotype and all others. Found at: doi:10.1371/journal.pone.0007211.s003 (0.05 MB DOC) Figure S1 DC-SIGNR transcripts repertoire in placenta. Major RT-PCR products from RNA extract from 3 homozygous H1 and 3 homozygous WT placenta samples were purified, cloned and sequenced. Sequenced were analysed according to NCBI reference sequence NM_014257. CT; cytoplasmic tail, TM; trans-membrane domain; WT; wild-type Found at: doi:10.1371/journal.pone.0007211.s004 (0.11 MB DOC) Figure S2 Effect of DC-SIGNR promoter variant on transcriptional activity in luciferase reporter assay in vitro in transfected HeLa cells. Relative luciferase expression from pGL2-Basic, parental vector without promoter. Expression DC-SIGNR promoter constructs, spanning p-577C variant or p-323A variant were calculated relatively to this value. Data are presented in mean values6S.E.M of three independent experiments performed in triplicate. One-way ANOVA test followed by the Dunnett test for multiple comparison was used to compare the relative luciferase expression of the p-557C and p-323A variant reporters against the wild-type (WT) construct (not significant). 0 mg, 0,5 mg or 1 mg CMV-Tat vector was transfected with LTR-Luc as a positive control in these experiments."
- text: "Approximately how many people died during the 1918-1919 influenza pandemic?"
context: "It is Unlikely That Influenza Viruses Will Cause a Pandemic Again Like What Happened in 1918 and 1919 https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4019839/ Song, Liting 2014-05-07 DOI:10.3389/fpubh.2014.00039 License:cc-by Abstract: nan Text: Influenza and influenza viruses are wellknown popular topics to medical professionals and the general public. Influenza viruses had caused a pandemic globally during 1918 and 1919, and that influenza pandemic had taken away more than 20 million people's lives in the world. However, in my opinion, it is unlikely that influenza viruses will again cause a pandemic on a level (both of the morbidity rate and the mortality rate) comparable to what happened in 1918 and 1919. Influenza viruses very easily reassort, recombine, and point mutate in nature due to their segmented RNA genome structures, however, unlike highly pathogenic (virulent) viruses like rabies virus, Lassa fever virus, smallpox virus, eastern equine encephalitis virus, Ebola virus, Marburg virus, and human immunodeficiency virus 1 (HIV-1); most influenza viruses (wild types and mutants) are moderately pathogenic. The case fatality rates of some highly virulent viruses and related references are listed in Table 1 . On November 11, 1918 , the fighting of World War I was stopped, and World War I was officially ended on June 28, 1919 with the signing of the Versailles Treaty. It is estimated that around 8.5-10 million soldiers lost their lives in World War I due to battle. The war also directly caused more than 6 million civilian deaths. Millions of people suffered from hunger and malnutrition during the war. Malnutrition weakened the human immune system and made a person more vulnerable to infectious diseases like tuberculosis and influenza, therefore, hunger and malnutrition were indirectly responsible for millions of deaths in the world in that period of time. For example, about 700,000 Germans died from malnutrition-related diseases in the years of 1914-1918. During the 1918-1919 influenza pandemic, between 21 and 25 million people died of influenza worldwide. Those people were killed both directly and indirectly by influenza virus infections. Many families were too poor to buy food and coal, and to afford health care expenses when their family members were ill. Influenza virus could infect all members of a family, and this could result in no one left to feed the fires, and to prepare food for the whole family, even if they had firewood, coal, and food left in their homes. Sadly, a large number of people died of influenza virus infections along with starvation, cold, and poor living conditions (8) . In recent years, while hunger and malnutrition are not major and serious problems in some developed countries anymore, they are still very difficult to overcome in many developing countries. In these less-developed countries, there were approximately 925 million people who suffered from hunger; 125 million children were underweight; and 195 million children were stunted each year (9) . Nevertheless, in comparison to 1918 and 1919, currently, we have much better social and economic conditions and public health systems globally; and generally speaking, the majority of people in the world have better nutritional and educational statuses; better living and working conditions; therefore, better general health and immunity. Furthermore, in 1918 and 1919, physicians and nurses almost had nothing in their hands to help individuals who were infected by influenza viruses. 
Today, although we still do not have very effective, powerful, and practical anti-influenza drugs available, we at least have some improved, useful, and helpful anti-viral drugs like zanamivir, and effective, convenient anti-cold medicines like Tylenol or Advil. We do not have a universal vaccine to prevent all influenza virus infections, but we can make effective vaccines to a specific influenza virus strain in a short time. Actually, in the United States of America, the influenza classed mortality rate declined from 10.2/100,000 in the 1940s to 0.56/100,000 in the 1990s; and the classed mortality rates of 1957-1958 and 1968-1969 influenza pandemics were not remarkably different from the non-pandemic seasons (10) . Because of the above reasons, we can optimistically assume that even the same strain of influenza virus, which caused pandemic in 1918 and 1919, would not be able to kill millions of people and cause a pandemic comparable to the 1918-1919 pandemic again in the future. Additionally, a significant number of viruses can cause influenza-like syndromes, such as rhinovirus, parainfluenza virus, adenovirus, coronavirus, respiratory syncytial virus, Coxsackie B virus, echovirus, and metapneumovirus (11, 12) . Some of the above-mentioned viruses like adenovirus and mutated coronavirus could cause problems that are comparable to influenza viruses (13, 14) . The World Health Organization (WHO) mistakenly raised the level of influenza pandemic alert from phase 5 to the highest phase 6 on June 11, 2009 (15) . However, the truth was that most cases of H1N1 influenza A virus infections were mild, the symptomatic case fatality rate was only 0.005% in New Zealand (16) ; and in New York City, the case fatality rate was 0.0094-0.0147% for persons ≥65 years old, and for those of 0-17 years old, the case fatality rate was 0.0008-0.0012% (17) . Some researchers argued that it should not have been called an influenza pandemic in the first place if the clinical severity was considered (15, (18) (19) (20) . I believe it was unwise that we had paid too much www.frontiersin.org 23) . Not surprisingly, every year there would be some influenza patients and a few of them would die from the infections, as it is almost impossible to eliminate influenza viruses from the natural environment in many years. The severity of a viral infection is determined by both of the viral virulence (pathogenicity) and the host immunity. Some researchers' opinions on H7N9 avian influenza virus were incorrect and/or inadequate. They mainly focused on influenza viruses and worried about viral mutations, viral pathogenicity, viral adaptation, and transmission. They overestimated the negative part of socio-economic factors of the present east China: overcrowded population in the epidemic region; very busy national and international transportation and travel; a large number of live poultry markets . . . but they underestimated the currently changed, developed, and improved positive part of socio-economic factors in China. The following factors might be used to explain why that H7N9 influenza A virus epidemic was limited and controlled in China, and only a few immunocompromised patients were killed by H7N9 influenza A virus. First, China has a relatively organized and effective public health system, there are four levels of (national, provincial, prefectural-level city, and county) centers for disease control and prevention all over China (24) . 
Second, physicians and nurses in China were prepared and knowledgeable of influenza virus infections. Third, samples from patients with suspected influenza virus infections were collected and sent to the local and national centers for disease control and prevention promptly. H7N9 influenza A viruses were isolated and identified very quickly. Thereby, they were able to diagnose, confirm, and report three cases of H7N9 influenza patients in the early stage of the epidemic (24, 25) . Fourth, health care and public health workers were protected properly. Consequently, none of the health professionals was infected by H7N9 influenza A virus in 2013. However, a surgeon died of H7N9 influenza in Shanghai, China in January of 2014 (26) . Fifth, they detected H7N9 influenza A viruses from the samples of chickens, pigeons, and the environment of live poultry markets in Shanghai (27) ; and closed the live poultry markets of the involved epidemic region quickly. Sixth, patients were isolated and treated timely in hospitals, 74% (1251/1689) of those close contacts of H7N9 influenza patients were monitored and observed. Thus, H7N9 influenza A virus could not spread to a bigger population (24) . Last but not least, we are connected to the Internet now, and it seems that our planet is much smaller today than the earlier days when we did not have the Internet, because communication and information exchange have become so fast, easy, and convenient presently. During that avian influenza epidemic, some influenza experts in the world shared/exchanged H7N9 influenza A virus information and provided professional consultations and suggestions efficiently and rapidly. All these public health routine practices and measures resulted in that H7N9 influenza epidemic being controlled and stopped in China (24) . I have to point out that the cases of diagnosed H7N9 avian influenza A virus infection might only be the tip of the iceberg. Aside from one laboratory confirmed asymptotic case of H7N9 influenza A virus infection in Beijing (22), there were probably many undetected mild or asymptotic cases of influenza A H7N9 infection. The reason is that most people usually think a common cold is a very common and normal occurrence, and they don't take flu-like illnesses seriously. In most situations, they would just stay home and take some medicines. Only those who have very severe flu-like symptoms would see doctors, and thereby be detected and diagnosed, accordingly the real case fatality rate should be much lower than the detected 32.14% (45/140, one case from Taiwan, and one case from Hong Kong) (22, 23). Nowadays, we travel faster, and we travel more frequently and globally, and we have more complicated social activities and lifestyles, thereby increasing the chances of viral mutation; and we realize that influenza viruses are even easier to reassort, recombine, and mutate in nature than many other RNA viruses. However, we are now living in a technologically, economically, and socially much better and advanced society. I believe influenza virus infections are controllable and preventable, with the increased population health and immunity, with the WHO Global Influenza Surveillance and Response System, and with standard/routine epidemiological practices, and with new effective anti-viral agents and vaccines in production in the future. Now, I first predict that influenza viruses will unlikely again cause a pandemic on a level comparable to what happened in 1918 and 1919. 
Hopefully, one day we could consider a strategy to produce a universal vaccine that can prevent people from infections of all influenza virus strains, or we could produce some very effective anti-influenza virus drugs; then influenza would not be a problem anymore. We should learn lessons from the mistakes we made in the past. It is reasonable and necessary to be cautious about influenza viruses, but overreactions or catastrophic reactions should be avoided in the future. My opinion is anti-traditional; the purpose of this article is to influence public health policy, and to save some of the limited resources and money for more important diseases like heart diseases, cancer, diabetes, AIDS, hepatitises, and tuberculosis (15) . Liting Song: conception of manuscript, drafting of manuscript, critical revision of manuscript, and final approval of manuscript. The author would like to recognize the contributions of the reviewers and editors of this manuscript for their corrections and editing, and Dr. Emanuel Goldman for correcting errors related to grammar and syntax of the final manuscript."
---
# Model Card for Model longluu/Medical-QA-deberta-MRQA-COVID-QA
The model is an extractive question-answering model: given a question and a passage of text, it returns the answer as a span of that passage.
## Model Details
### Model Description
The base pretrained model is DeBERTa-v3-Large-MRQA (https://huggingface.co/VMware/deberta-v3-large-mrqa), which was fine-tuned on the large MRQA question-answering dataset (https://huggingface.co/datasets/mrqa).
I then fine-tuned it on the COVID-QA dataset (https://huggingface.co/datasets/covid_qa_deepset) to obtain an extractive question-answering model that answers a question by locating the answer span within a text.
### Model Sources [optional]
The github code associated with the model can be found here: https://github.com/longluu/Medical-QA-extractive.
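For a quick test, here is a minimal usage sketch with the 🤗 `transformers` question-answering pipeline; it assumes the checkpoint in this repository loads with the standard pipeline API, and the question/context strings are only illustrative.
```python
from transformers import pipeline

qa = pipeline("question-answering", model="longluu/Medical-QA-deberta-MRQA-COVID-QA")

# Illustrative inputs; replace with your own question and context passage.
result = qa(
    question="What virus causes COVID-19?",
    context="COVID-19 is caused by severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2).",
)
print(result)  # {'score': ..., 'start': ..., 'end': ..., 'answer': ...}
```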
## Training Details
### Training Data
This dataset contains 2,019 question/answer pairs annotated by volunteer biomedical experts on scientific articles regarding COVID-19 and other medical issues.
The dataset can be found here: https://github.com/deepset-ai/COVID-QA. The preprocessed data can be found here https://huggingface.co/datasets/covid_qa_deepset.
#### Training Hyperparameters
The hyperparameters are:
```
--per_device_train_batch_size 2
--learning_rate 3e-5
--num_train_epochs 2
--max_seq_length 512
--doc_stride 250
--max_answer_length 200
```
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
The model was trained on the training split and evaluated on the validation split of the COVID-QA dataset.
#### Metrics
Here we use two standard metrics for QA tasks: exact match and F1.
### Results
{'exact_match': 34.653465, 'f1': 58.858354}
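For reference, one common way to compute these two metrics is the `squad` metric from the 🤗 `evaluate` library; this is a sketch of the metric computation, not the exact evaluation script used for the numbers above.
```python
import evaluate

squad_metric = evaluate.load("squad")

# Toy prediction/reference pair in SQuAD format (illustrative only).
predictions = [{"id": "0", "prediction_text": "SARS-CoV-2"}]
references = [{"id": "0", "answers": {"text": ["SARS-CoV-2"], "answer_start": [25]}}]

print(squad_metric.compute(predictions=predictions, references=references))
# -> {'exact_match': 100.0, 'f1': 100.0}
```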
## Model Card Contact
Feel free to reach out to me at thelong20.4@gmail.com if you have any questions or suggestions.
|
Exidna/my-dialoGPT
|
Exidna
| 2024-02-16T19:50:25Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-02-16T19:50:23Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
sayhan/Trendyol-LLM-7b-base-v0.1-GGUF
|
sayhan
| 2024-02-16T19:49:57Z | 76 | 3 |
transformers
|
[
"transformers",
"gguf",
"llama",
"text-generation",
"tr",
"en",
"base_model:Trendyol/Trendyol-LLM-7b-base-v0.1",
"base_model:quantized:Trendyol/Trendyol-LLM-7b-base-v0.1",
"license:apache-2.0",
"region:us"
] |
text-generation
| 2024-02-07T10:33:05Z |
---
base_model: Trendyol/Trendyol-LLM-7b-base-v0.1
language:
- tr
- en
pipeline_tag: text-generation
license: apache-2.0
model_type: llama
library_name: transformers
inference: false
---
<img src="https://huggingface.co/Trendyol/Trendyol-LLM-7b-base-v0.1/resolve/main/llama-tr-image.jpeg"
alt="drawing" width="400"/>
## Trendyol LLM 7b base v0.1
- **Model creator:** [Trendyol](https://huggingface.co/Trendyol)
- **Original model:** [Trendyol-LLM-7b-base-v0.1](https://huggingface.co/Trendyol/Trendyol-LLM-7b-base-v0.1)
<!-- description start -->
## Description
This repo contains GGUF format model files for [Trendyol's Trendyol LLM 7b base v0.1](https://huggingface.co/Trendyol/Trendyol-LLM-7b-base-v0.1)
<!-- description end -->
# Quantization methods
| quantization method | bits | size | use case | recommended |
|---------------------|------|----------|-----------------------------------------------------|-------------|
| Q2_K | 2 | 2.59 GB | smallest, significant quality loss - not recommended for most purposes | ❌ |
| Q3_K_S | 3 | 3.01 GB | very small, high quality loss | ❌ |
| Q3_K_M | 3 | 3.36 GB | very small, high quality loss | ❌ |
| Q3_K_L | 3 | 3.66 GB | small, substantial quality loss | ❌ |
| Q4_0 | 4 | 3.9 GB | legacy; small, very high quality loss - prefer using Q3_K_M | ❌ |
| Q4_K_M | 4 | 4.15 GB | medium, balanced quality - recommended | ✅ |
| Q5_0 | 5 | 4.73 GB | legacy; medium, balanced quality - prefer using Q4_K_M | ❌ |
| Q5_K_S | 5 | 4.73 GB | large, low quality loss - recommended | ✅ |
| Q5_K_M | 5 | 4.86 GB | large, very low quality loss - recommended | ✅ |
| Q6_K | 6 | 5.61 GB | very large, extremely low quality loss | ❌ |
| Q8_0 | 8 | 13.7 GB | very large, extremely low quality loss - not recommended | ❌ |
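A minimal sketch for running one of these quantizations locally with `llama-cpp-python`; the filename below is hypothetical, so download the quant you want (e.g. Q4_K_M) from this repository first and point `model_path` at it.
```python
# pip install llama-cpp-python
from llama_cpp import Llama

# Hypothetical local filename for the Q4_K_M quantization downloaded from this repo.
llm = Llama(model_path="trendyol-llm-7b-base-v0.1.Q4_K_M.gguf", n_ctx=4096)

output = llm("Türkiye'nin başkenti", max_tokens=64)
print(output["choices"][0]["text"])
```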
|
LoneStriker/LWM-Text-Chat-512K-6.0bpw-h6-exl2
|
LoneStriker
| 2024-02-16T19:44:46Z | 3 | 1 |
transformers
|
[
"transformers",
"pytorch",
"llama",
"text-generation",
"autotrain_compatible",
"text-generation-inference",
"region:us"
] |
text-generation
| 2024-02-16T19:42:33Z |
---
inference: false
---
<br>
<br>
# LWM-Text-Chat-512K Model Card
## Model details
**Model type:**
LWM-Text-Chat-512K is an open-source model trained from LLaMA-2 on a subset of Books3 filtered data. It is an auto-regressive language model, based on the transformer architecture.
**Model date:**
LWM-Text-Chat-512K was trained in December 2023.
**Paper or resources for more information:**
https://largeworldmodel.github.io/
## License
Llama 2 is licensed under the LLAMA 2 Community License,
Copyright (c) Meta Platforms, Inc. All Rights Reserved.
**Where to send questions or comments about the model:**
https://github.com/LargeWorldModel/lwm/issues
## Training dataset
- 3500 subset of Books3 documents with 500K to 1M tokens
|
sanchit-gandhi/mms-lid-ft-mix
|
sanchit-gandhi
| 2024-02-16T19:41:04Z | 153 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"wav2vec2",
"audio-classification",
"generated_from_trainer",
"base_model:facebook/mms-lid-126",
"base_model:finetune:facebook/mms-lid-126",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"region:us"
] |
audio-classification
| 2024-02-16T18:15:36Z |
---
license: cc-by-nc-4.0
base_model: facebook/mms-lid-126
tags:
- audio-classification
- generated_from_trainer
model-index:
- name: mms-lid-ft-mix
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mms-lid-ft-mix
This model is a fine-tuned version of [facebook/mms-lid-126](https://huggingface.co/facebook/mms-lid-126) on the sanchit-gandhi/vctk dataset.
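As a quick usage sketch (not part of the original card), the fine-tuned checkpoint should load with the standard audio-classification pipeline:
```python
from transformers import pipeline

classifier = pipeline("audio-classification", model="sanchit-gandhi/mms-lid-ft-mix")

# "speech.wav" is a placeholder path to a local audio file.
print(classifier("speech.wav", top_k=5))
```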
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant_with_warmup
- lr_scheduler_warmup_steps: 50
- training_steps: 1000
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.38.0.dev0
- Pytorch 2.2.0+cu121
- Datasets 2.16.2.dev0
- Tokenizers 0.15.1
|
LoneStriker/LWM-Text-Chat-512K-4.0bpw-h6-exl2
|
LoneStriker
| 2024-02-16T19:40:39Z | 1 | 0 |
transformers
|
[
"transformers",
"pytorch",
"llama",
"text-generation",
"autotrain_compatible",
"text-generation-inference",
"region:us"
] |
text-generation
| 2024-02-16T19:39:05Z |
---
inference: false
---
<br>
<br>
# LWM-Text-Chat-512K Model Card
## Model details
**Model type:**
LWM-Text-Chat-512K is an open-source model trained from LLaMA-2 on a subset of Books3 filtered data. It is an auto-regressive language model, based on the transformer architecture.
**Model date:**
LWM-Text-Chat-512K was trained in December 2023.
**Paper or resources for more information:**
https://largeworldmodel.github.io/
## License
Llama 2 is licensed under the LLAMA 2 Community License,
Copyright (c) Meta Platforms, Inc. All Rights Reserved.
**Where to send questions or comments about the model:**
https://github.com/LargeWorldModel/lwm/issues
## Training dataset
- 3500 subset of Books3 documents with 500K to 1M tokens
|
LoneStriker/LWM-Text-Chat-512K-3.0bpw-h6-exl2
|
LoneStriker
| 2024-02-16T19:39:04Z | 1 | 0 |
transformers
|
[
"transformers",
"pytorch",
"llama",
"text-generation",
"autotrain_compatible",
"text-generation-inference",
"region:us"
] |
text-generation
| 2024-02-16T19:37:44Z |
---
inference: false
---
<br>
<br>
# LWM-Text-Chat-512K Model Card
## Model details
**Model type:**
LWM-Text-Chat-512K is an open-source model trained from LLaMA-2 on a subset of Books3 filtered data. It is an auto-regressive language model, based on the transformer architecture.
**Model date:**
LWM-Text-Chat-512K was trained in December 2023.
**Paper or resources for more information:**
https://largeworldmodel.github.io/
## License
Llama 2 is licensed under the LLAMA 2 Community License,
Copyright (c) Meta Platforms, Inc. All Rights Reserved.
**Where to send questions or comments about the model:**
https://github.com/LargeWorldModel/lwm/issues
## Training dataset
- 3500 subset of Books3 documents with 500K to 1M tokens
|
LoneStriker/LWM-Text-Chat-512K-GGUF
|
LoneStriker
| 2024-02-16T19:37:43Z | 0 | 1 | null |
[
"gguf",
"region:us"
] | null | 2024-02-16T19:21:19Z |
---
inference: false
---
<br>
<br>
# LWM-Text-Chat-512K Model Card
## Model details
**Model type:**
LWM-Text-Chat-512K is an open-source model trained from LLaMA-2 on a subset of Books3 filtered data. It is an auto-regressive language model, based on the transformer architecture.
**Model date:**
LWM-Text-Chat-512K was trained in December 2023.
**Paper or resources for more information:**
https://largeworldmodel.github.io/
## License
Llama 2 is licensed under the LLAMA 2 Community License,
Copyright (c) Meta Platforms, Inc. All Rights Reserved.
**Where to send questions or comments about the model:**
https://github.com/LargeWorldModel/lwm/issues
## Training dataset
- 3500 subset of Books3 documents with 500K to 1M tokens
|
espoir/BioGPT-Large-QA-PubMedQA
|
espoir
| 2024-02-16T19:36:39Z | 150 | 0 |
transformers
|
[
"transformers",
"pytorch",
"biogpt",
"text-generation",
"en",
"dataset:pubmed_qa",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-02-15T22:03:29Z |
---
datasets:
- pubmed_qa
language:
- en
pipeline_tag: text-generation
---
Fine-tuned BioGPT-Large for the question-answering task on PubMedQA!
The base model is [here](https://github.com/microsoft/BioGPT)
I will add an example later; this is still a work in progress as of now!
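In the meantime, a minimal, unofficial sketch using the standard text-generation pipeline is shown below; the prompt format is an assumption and is not documented in this card.
```python
from transformers import pipeline

generator = pipeline("text-generation", model="espoir/BioGPT-Large-QA-PubMedQA")

# Assumed PubMedQA-style prompt; adjust to whatever format the fine-tuning actually used.
prompt = "question: Does regular exercise reduce the risk of type 2 diabetes? context: Several cohort studies report lower incidence among physically active adults. answer:"
print(generator(prompt, max_new_tokens=64)[0]["generated_text"])
```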
|
vapogore/es_neutral
|
vapogore
| 2024-02-16T19:32:27Z | 92 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"t5",
"text2text-generation",
"transformation",
"generated_from_trainer",
"base_model:somosnlp-hackathon-2022/es_text_neutralizer",
"base_model:finetune:somosnlp-hackathon-2022/es_text_neutralizer",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2024-02-16T19:30:10Z |
---
license: apache-2.0
base_model: hackathon-pln-es/es_text_neutralizer
tags:
- transformation
- generated_from_trainer
metrics:
- bleu
model-index:
- name: es_neutral
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# es_neutral
This model is a fine-tuned version of [hackathon-pln-es/es_text_neutralizer](https://huggingface.co/hackathon-pln-es/es_text_neutralizer) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0665
- Bleu: 87.5423
- Gen Len: 18.5417
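A minimal usage sketch (assuming the checkpoint works with the standard text2text-generation pipeline; the example sentence is illustrative):
```python
from transformers import pipeline

neutralizer = pipeline("text2text-generation", model="vapogore/es_neutral")

# Illustrative gendered Spanish sentence to be neutralized.
print(neutralizer("Los profesores y los alumnos asistieron a la reunión.", max_new_tokens=64))
```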
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5.6e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|
| No log | 1.0 | 440 | 0.0719 | 87.4629 | 18.5417 |
| 0.0256 | 2.0 | 880 | 0.0665 | 87.5423 | 18.5417 |
### Framework versions
- Transformers 4.37.2
- Pytorch 2.1.0+cu121
- Datasets 2.17.0
- Tokenizers 0.15.2
|
Oysiyl/w2v-bert-2.0-dutch-colab-CV16.0
|
Oysiyl
| 2024-02-16T19:30:14Z | 4 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"wav2vec2-bert",
"automatic-speech-recognition",
"generated_from_trainer",
"base_model:ylacombe/w2v-bert-2.0",
"base_model:finetune:ylacombe/w2v-bert-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2024-02-15T17:09:24Z |
---
base_model: ylacombe/w2v-bert-2.0
tags:
- generated_from_trainer
metrics:
- wer
model-index:
- name: w2v-bert-2.0-dutch-colab-CV16.0
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# w2v-bert-2.0-dutch-colab-CV16.0
This model is a fine-tuned version of [ylacombe/w2v-bert-2.0](https://huggingface.co/ylacombe/w2v-bert-2.0) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0920
- Wer: 0.0573
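A minimal usage sketch (assuming the checkpoint works with the standard automatic-speech-recognition pipeline):
```python
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="Oysiyl/w2v-bert-2.0-dutch-colab-CV16.0")

# "dutch_speech.wav" is a placeholder path to a local audio recording.
print(asr("dutch_speech.wav"))
```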
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 64
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 5
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 1.3106 | 1.0 | 358 | 0.1833 | 0.1350 |
| 0.0924 | 2.0 | 716 | 0.1365 | 0.0932 |
| 0.0521 | 3.0 | 1074 | 0.1121 | 0.0732 |
| 0.033 | 3.99 | 1432 | 0.0957 | 0.0619 |
| 0.0221 | 4.99 | 1790 | 0.0920 | 0.0573 |
### Framework versions
- Transformers 4.37.0.dev0
- Pytorch 1.12.1+cu116
- Datasets 2.4.0
- Tokenizers 0.15.2
|
unalignment/weeeeee.0
|
unalignment
| 2024-02-16T19:29:55Z | 8 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-02-10T16:02:46Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
MonkeyDdonut/dqn-SpaceInvadersNoFrameskip-v4-tv2
|
MonkeyDdonut
| 2024-02-16T19:24:14Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"SpaceInvadersNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2024-02-16T19:23:42Z |
---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
metrics:
- type: mean_reward
value: 514.00 +/- 224.22
name: mean_reward
verified: false
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga MonkeyDdonut -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga MonkeyDdonut -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga MonkeyDdonut
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 1000000.0),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
# Environment Arguments
```python
{'render_mode': 'rgb_array'}
```
|
Lienid/nous-finetune-model
|
Lienid
| 2024-02-16T19:20:10Z | 3 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:2307.09288",
"arxiv:2306.05685",
"license:llama2",
"autotrain_compatible",
"text-generation-inference",
"region:us"
] |
text-generation
| 2024-02-16T04:49:13Z |
---
inference: false
license: llama2
---
# Vicuna Model Card
## Model Details
Vicuna is a chat assistant trained by fine-tuning Llama 2 on user-shared conversations collected from ShareGPT.
- **Developed by:** [LMSYS](https://lmsys.org/)
- **Model type:** An auto-regressive language model based on the transformer architecture
- **License:** Llama 2 Community License Agreement
- **Finetuned from model:** [Llama 2](https://arxiv.org/abs/2307.09288)
### Model Sources
- **Repository:** https://github.com/lm-sys/FastChat
- **Blog:** https://lmsys.org/blog/2023-03-30-vicuna/
- **Paper:** https://arxiv.org/abs/2306.05685
- **Demo:** https://chat.lmsys.org/
## Uses
The primary use of Vicuna is research on large language models and chatbots.
The primary intended users of the model are researchers and hobbyists in natural language processing, machine learning, and artificial intelligence.
## How to Get Started with the Model
- Command line interface: https://github.com/lm-sys/FastChat#vicuna-weights
- APIs (OpenAI API, Huggingface API): https://github.com/lm-sys/FastChat/tree/main#api
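For a quick local test outside FastChat, a minimal `transformers` sketch is shown below; it assumes the weights in this repository follow the Vicuna v1.5 chat format described in this card, which has not been verified here.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Lienid/nous-finetune-model"  # this repository
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto", torch_dtype="auto")

# Vicuna-style prompt (assumed, not verified for this checkpoint).
prompt = "USER: What is the capital of France? ASSISTANT:"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```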
## Training Details
Vicuna v1.5 is fine-tuned from Llama 2 with supervised instruction fine-tuning.
The training data is around 125K conversations collected from ShareGPT.com.
See more details in the "Training Details of Vicuna Models" section in the appendix of this [paper](https://arxiv.org/pdf/2306.05685.pdf).
## Evaluation

Vicuna is evaluated with standard benchmarks, human preference, and LLM-as-a-judge. See more details in this [paper](https://arxiv.org/pdf/2306.05685.pdf) and [leaderboard](https://huggingface.co/spaces/lmsys/chatbot-arena-leaderboard).
## Difference between different versions of Vicuna
See [vicuna_weights_version.md](https://github.com/lm-sys/FastChat/blob/main/docs/vicuna_weights_version.md)
|
AntoineGourru/Mistral_qlora_telecom_R256A512BS2E2
|
AntoineGourru
| 2024-02-16T19:14:20Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:mistralai/Mistral-7B-Instruct-v0.2",
"base_model:adapter:mistralai/Mistral-7B-Instruct-v0.2",
"region:us"
] | null | 2024-02-16T19:13:36Z |
---
library_name: peft
base_model: mistralai/Mistral-7B-Instruct-v0.2
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- quant_method: bitsandbytes
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: bfloat16
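A minimal sketch for reloading the adapter on top of its base model with the configuration above (assumes `transformers`, `peft`, `bitsandbytes`, and a CUDA-capable GPU):
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import PeftModel

base_id = "mistralai/Mistral-7B-Instruct-v0.2"
adapter_id = "AntoineGourru/Mistral_qlora_telecom_R256A512BS2E2"

# Mirrors the bitsandbytes settings listed above.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=False,
    bnb_4bit_compute_dtype=torch.bfloat16,
)

tokenizer = AutoTokenizer.from_pretrained(base_id)
base_model = AutoModelForCausalLM.from_pretrained(base_id, quantization_config=bnb_config, device_map="auto")
model = PeftModel.from_pretrained(base_model, adapter_id)
```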
### Framework versions
- PEFT 0.7.0
|
LoneStriker/LWM-Text-Chat-256K-6.0bpw-h6-exl2
|
LoneStriker
| 2024-02-16T19:13:32Z | 2 | 0 |
transformers
|
[
"transformers",
"pytorch",
"llama",
"text-generation",
"autotrain_compatible",
"text-generation-inference",
"region:us"
] |
text-generation
| 2024-02-16T19:11:21Z |
---
inference: false
---
<br>
<br>
# LWM-Text-Chat-256K Model Card
## Model details
**Model type:**
LWM-Text-Chat-256K is an open-source model trained from LLaMA-2 on a subset of Books3 filtered data. It is an auto-regressive language model, based on the transformer architecture.
**Model date:**
LWM-Text-Chat-256K was trained in December 2023.
**Paper or resources for more information:**
https://largeworldmodel.github.io/
## License
Llama 2 is licensed under the LLAMA 2 Community License,
Copyright (c) Meta Platforms, Inc. All Rights Reserved.
**Where to send questions or comments about the model:**
https://github.com/LargeWorldModel/lwm/issues
## Training dataset
- 37K subset of Books3 documents with 200K to 500K tokens
|
LoneStriker/LWM-Text-Chat-256K-4.0bpw-h6-exl2
|
LoneStriker
| 2024-02-16T19:09:28Z | 2 | 0 |
transformers
|
[
"transformers",
"pytorch",
"llama",
"text-generation",
"autotrain_compatible",
"text-generation-inference",
"region:us"
] |
text-generation
| 2024-02-16T19:07:37Z |
---
inference: false
---
<br>
<br>
# LWM-Text-Chat-256K Model Card
## Model details
**Model type:**
LWM-Text-Chat-256K is an open-source model trained from LLaMA-2 on a subset of Books3 filtered data. It is an auto-regressive language model, based on the transformer architecture.
**Model date:**
LWM-Text-Chat-256K was trained in December 2023.
**Paper or resources for more information:**
https://largeworldmodel.github.io/
## License
Llama 2 is licensed under the LLAMA 2 Community License,
Copyright (c) Meta Platforms, Inc. All Rights Reserved.
**Where to send questions or comments about the model:**
https://github.com/LargeWorldModel/lwm/issues
## Training dataset
- 37K subset of Books3 documents with 200K to 500K tokens
|
lvcalucioli/llamantino7b_2_10_question-answering
|
lvcalucioli
| 2024-02-16T18:58:46Z | 1 | 0 |
peft
|
[
"peft",
"safetensors",
"llama",
"trl",
"sft",
"generated_from_trainer",
"base_model:swap-uniba/LLaMAntino-2-7b-hf-ITA",
"base_model:adapter:swap-uniba/LLaMAntino-2-7b-hf-ITA",
"license:llama2",
"4-bit",
"bitsandbytes",
"region:us"
] | null | 2024-02-16T18:54:45Z |
---
license: llama2
library_name: peft
tags:
- trl
- sft
- generated_from_trainer
metrics:
- rouge
base_model: swap-uniba/LLaMAntino-2-7b-hf-ITA
model-index:
- name: llamantino7b_2_10_question-answering
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# llamantino7b_2_10_question-answering
This model is a fine-tuned version of [swap-uniba/LLaMAntino-2-7b-hf-ITA](https://huggingface.co/swap-uniba/LLaMAntino-2-7b-hf-ITA) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.9768
- Rouge1: 19.35
- Rouge2: 10.89
- Rougel: 18.01
- Rougelsum: 18.19
- R: 16.06
- Gen Len: 1.0
- R@1: 0.0
- R@3: 0.0
- R@5: 0.0
- R@10: 0.0
- R@20: 0.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.03
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | R | Gen Len | R@1 | R@3 | R@5 | R@10 | R@20 |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:---------:|:-----:|:-------:|:---:|:---:|:---:|:----:|:----:|
| 1.7082 | 1.0 | 23 | 1.4263 | 16.0 | 5.83 | 14.16 | 14.7 | 11.97 | 1.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 1.1279 | 2.0 | 46 | 1.3286 | 17.65 | 8.25 | 16.23 | 16.85 | 14.02 | 1.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 0.6508 | 3.0 | 69 | 1.3710 | 18.49 | 9.03 | 16.79 | 17.44 | 14.74 | 1.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 0.3175 | 4.0 | 92 | 1.4644 | 19.18 | 9.83 | 17.33 | 17.52 | 15.42 | 1.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 0.1619 | 5.0 | 115 | 1.5859 | 18.93 | 10.18 | 17.36 | 17.62 | 15.47 | 1.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 0.0925 | 6.0 | 138 | 1.6658 | 19.06 | 10.78 | 17.65 | 17.86 | 15.81 | 1.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 0.0571 | 7.0 | 161 | 1.7225 | 18.85 | 10.28 | 17.28 | 17.54 | 15.45 | 1.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 0.0279 | 8.0 | 184 | 1.8907 | 18.94 | 10.96 | 17.61 | 17.74 | 15.82 | 1.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 0.0172 | 9.0 | 207 | 1.9558 | 19.32 | 10.79 | 18.01 | 18.18 | 16.02 | 1.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 0.0127 | 10.0 | 230 | 1.9768 | 19.35 | 10.89 | 18.01 | 18.19 | 16.06 | 1.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
### Framework versions
- PEFT 0.8.2
- Transformers 4.38.0.dev0
- Pytorch 2.0.1+cu117
- Datasets 2.16.1
- Tokenizers 0.15.2
|
sleepyallover/image_classification
|
sleepyallover
| 2024-02-16T18:57:02Z | 20 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"vit",
"image-classification",
"generated_from_trainer",
"dataset:imagefolder",
"base_model:google/vit-base-patch16-224-in21k",
"base_model:finetune:google/vit-base-patch16-224-in21k",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2024-02-16T15:38:10Z |
---
license: apache-2.0
base_model: google/vit-base-patch16-224-in21k
tags:
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
model-index:
- name: image_classification
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: imagefolder
type: imagefolder
config: default
split: train
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.5125
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# image_classification
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 1.4938
- Accuracy: 0.5125
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 40 | 1.5004 | 0.4313 |
| No log | 2.0 | 80 | 1.6401 | 0.3937 |
| No log | 3.0 | 120 | 1.4170 | 0.4813 |
| No log | 4.0 | 160 | 1.5242 | 0.4813 |
| No log | 5.0 | 200 | 1.5319 | 0.5062 |
| No log | 6.0 | 240 | 1.5648 | 0.5125 |
| No log | 7.0 | 280 | 1.3638 | 0.5687 |
| No log | 8.0 | 320 | 1.7237 | 0.4875 |
| No log | 9.0 | 360 | 1.5765 | 0.5188 |
| No log | 10.0 | 400 | 1.5778 | 0.475 |
| No log | 11.0 | 440 | 1.6630 | 0.5062 |
| No log | 12.0 | 480 | 1.7094 | 0.525 |
| 0.5436 | 13.0 | 520 | 1.5787 | 0.55 |
| 0.5436 | 14.0 | 560 | 1.7870 | 0.5188 |
| 0.5436 | 15.0 | 600 | 1.5583 | 0.5563 |
| 0.5436 | 16.0 | 640 | 1.7809 | 0.525 |
| 0.5436 | 17.0 | 680 | 1.7417 | 0.4875 |
| 0.5436 | 18.0 | 720 | 1.6902 | 0.5375 |
| 0.5436 | 19.0 | 760 | 1.6704 | 0.55 |
| 0.5436 | 20.0 | 800 | 1.6843 | 0.5625 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu121
- Datasets 2.17.0
- Tokenizers 0.15.2
|
magnolia-psychometrics/bandura-v1
|
magnolia-psychometrics
| 2024-02-16T18:56:35Z | 129 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gpt2",
"feature-extraction",
"arxiv:1910.09700",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
feature-extraction
| 2024-02-16T18:46:19Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
RozaBella/Milesa
|
RozaBella
| 2024-02-16T18:54:17Z | 3 | 0 |
diffusers
|
[
"diffusers",
"text-to-image",
"stable-diffusion",
"lora",
"template:sd-lora",
"base_model:SG161222/Realistic_Vision_V1.4",
"base_model:adapter:SG161222/Realistic_Vision_V1.4",
"region:us"
] |
text-to-image
| 2024-02-16T18:53:42Z |
---
tags:
- text-to-image
- stable-diffusion
- lora
- diffusers
- template:sd-lora
widget:
- text: >-
a girl wearing a winter coat, scarf, eyeglasses, and smile, in Zermatt,
Switzerland, photorealistic:1.4, masterpiece, 4k high res, UHD
parameters:
negative_prompt: >-
verybadimagenegative_v1.3, ng_deepnegative_v1_75t, (ugly
face:0.8),cross-eyed,sketches, (worst quality:2), (low quality:2), (normal
quality:2), lowres, normal quality, ((monochrome)), ((grayscale)), skin
spots, acnes, skin blemishes, bad anatomy, DeepNegative, facing away,
tilted head, {Multiple people}, lowres, bad anatomy, bad hands, text,
error, missing fingers, extra digit, fewer digits, cropped, worstquality,
low quality, normal quality, jpegartifacts, signature, watermark,
username, blurry, bad feet, cropped, poorly drawn hands, poorly drawn
face, mutation, deformed, worst quality, low quality, normal quality, jpeg
artifacts, signature, watermark, extra fingers, fewer digits, extra limbs,
extra arms,extra legs, malformed limbs, fused fingers, too many fingers,
long neck, cross-eyed,mutated hands, polar lowres, bad body, bad
proportions, gross proportions, text, error, missing fingers, missing
arms, missing legs, extra digit, extra arms, extra leg, extra foot,
((repeating hair)), extra fingers, bad fingers,many models,
output:
url: images/_image (18).png
base_model: SG161222/Realistic_Vision_V1.4
instance_prompt: Milesa
---
# Milesa
<Gallery />
## Trigger words
You should use `Milesa` to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](/RozaBella/Milesa/tree/main) them in the Files & versions tab.
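A minimal generation sketch with 🧨 diffusers (assumes the LoRA weights in this repository load directly via `load_lora_weights` and that a CUDA GPU is available):
```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "SG161222/Realistic_Vision_V1.4", torch_dtype=torch.float16
).to("cuda")

# Load the LoRA weights from this repository.
pipe.load_lora_weights("RozaBella/Milesa")

# Include the trigger word "Milesa" in the prompt.
image = pipe("Milesa, a girl wearing a winter coat and scarf, photorealistic").images[0]
image.save("milesa.png")
```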
|
NilanE/karasu-web-2
|
NilanE
| 2024-02-16T18:48:23Z | 92 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"sft",
"en",
"base_model:lightblue/karasu-1.1B",
"base_model:finetune:lightblue/karasu-1.1B",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-02-16T18:47:00Z |
---
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
- sft
base_model: lightblue/karasu-1.1B
---
# Uploaded model
- **Developed by:** NilanE
- **License:** apache-2.0
- **Finetuned from model :** lightblue/karasu-1.1B
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
AlisaMenekse/BCPErrorCategoriesModel25k
|
AlisaMenekse
| 2024-02-16T18:47:41Z | 0 | 0 | null |
[
"safetensors",
"autotrain",
"text-generation",
"conversational",
"license:other",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-02-16T18:47:37Z |
---
tags:
- autotrain
- text-generation
widget:
- text: "I love AutoTrain because "
license: other
---
# Model Trained Using AutoTrain
This model was trained using AutoTrain. For more information, please visit [AutoTrain](https://hf.co/docs/autotrain).
# Usage
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model_path = "PATH_TO_THIS_REPO"
tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModelForCausalLM.from_pretrained(
model_path,
device_map="auto",
torch_dtype='auto'
).eval()
# Prompt content: "hi"
messages = [
{"role": "user", "content": "hi"}
]
input_ids = tokenizer.apply_chat_template(conversation=messages, tokenize=True, add_generation_prompt=True, return_tensors='pt')
output_ids = model.generate(input_ids.to('cuda'))
response = tokenizer.decode(output_ids[0][input_ids.shape[1]:], skip_special_tokens=True)
# Model response: "Hello! How can I assist you today?"
print(response)
```
|
lrzjason/anime_portrait_vit
|
lrzjason
| 2024-02-16T18:40:38Z | 178 | 0 |
transformers
|
[
"transformers",
"safetensors",
"vit",
"image-classification",
"anime",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2024-02-16T18:15:46Z |
---
license: apache-2.0
tags:
- anime
---
Trained a ViT model for classification on an anime dataset.
Images are divided into four categories: head_only, upperbody, knee_level, fullbody
+ head_only

+ upperbody

+ knee_level

+ fullbody

```python
from PIL import Image
from transformers import ViTImageProcessor, ViTForImageClassification
import torch
import os

model_name_or_path = 'lrzjason/anime_portrait_vit'
image_processor = ViTImageProcessor.from_pretrained(model_name_or_path)
model = ViTForImageClassification.from_pretrained(model_name_or_path)

input_dir = '/path/to/dir'
file = 'example.jpg'
image = Image.open(os.path.join(input_dir, file))

inputs = image_processor(image, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

# The model predicts one of the four portrait categories (head_only, upperbody, knee_level, fullbody)
predicted_label = logits.argmax(-1).item()
print(f'predicted_label: {model.config.id2label[predicted_label]}')
```
Using this dataset:
https://huggingface.co/datasets/animelover/genshin-impact-images
|
LoneStriker/LWM-Text-Chat-128K-4.0bpw-h6-exl2
|
LoneStriker
| 2024-02-16T18:38:11Z | 2 | 0 |
transformers
|
[
"transformers",
"pytorch",
"llama",
"text-generation",
"autotrain_compatible",
"text-generation-inference",
"region:us"
] |
text-generation
| 2024-02-16T18:36:37Z |
---
inference: false
---
<br>
<br>
# LWM-Text-Chat-128K Model Card
## Model details
**Model type:**
LWM-Text-Chat-128K is an open-source model trained from LLaMA-2 on a subset of Books3 filtered data. It is an auto-regressive language model, based on the transformer architecture.
**Model date:**
LWM-Text-Chat-128K was trained in December 2023.
**Paper or resources for more information:**
https://largeworldmodel.github.io/
## License
Llama 2 is licensed under the LLAMA 2 Community License,
Copyright (c) Meta Platforms, Inc. All Rights Reserved.
**Where to send questions or comments about the model:**
https://github.com/LargeWorldModel/lwm/issues
## Training dataset
- 92K subset of Books3 documents with 100K to 200K tokens
|
songbo/rg_model
|
songbo
| 2024-02-16T18:37:56Z | 94 | 0 |
transformers
|
[
"transformers",
"pytorch",
"t5",
"text2text-generation",
"generated_from_trainer",
"base_model:google-t5/t5-small",
"base_model:finetune:google-t5/t5-small",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2024-02-16T18:28:34Z |
---
license: apache-2.0
base_model: google-t5/t5-small
tags:
- generated_from_trainer
model-index:
- name: rg_model
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# rg_model
This model is a fine-tuned version of [google-t5/t5-small](https://huggingface.co/google-t5/t5-small) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.001
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 5000
### Training results
### Framework versions
- Transformers 4.34.1
- Pytorch 1.13.1+cu117
- Datasets 2.10.0
- Tokenizers 0.14.1
|
LoneStriker/LWM-Text-Chat-128K-3.0bpw-h6-exl2
|
LoneStriker
| 2024-02-16T18:36:37Z | 2 | 0 |
transformers
|
[
"transformers",
"pytorch",
"llama",
"text-generation",
"autotrain_compatible",
"text-generation-inference",
"region:us"
] |
text-generation
| 2024-02-16T18:35:14Z |
---
inference: false
---
<br>
<br>
# LWM-Text-Chat-128K Model Card
## Model details
**Model type:**
LWM-Text-Chat-128K is an open-source model trained from LLaMA-2 on a subset of Books3 filtered data. It is an auto-regressive language model, based on the transformer architecture.
**Model date:**
LWM-Text-Chat-128K was trained in December 2023.
**Paper or resources for more information:**
https://largeworldmodel.github.io/
## License
Llama 2 is licensed under the LLAMA 2 Community License,
Copyright (c) Meta Platforms, Inc. All Rights Reserved.
**Where to send questions or comments about the model:**
https://github.com/LargeWorldModel/lwm/issues
## Training dataset
- A 92K-document subset of Books3, each document containing 100K to 200K tokens
|
LoneStriker/LWM-Text-Chat-128K-GGUF
|
LoneStriker
| 2024-02-16T18:35:11Z | 7 | 2 | null |
[
"gguf",
"region:us"
] | null | 2024-02-16T18:18:43Z |
---
inference: false
---
<br>
<br>
# LWM-Text-Chat-128K Model Card
## Model details
**Model type:**
LWM-Text-Chat-128K is an open-source model trained from LLaMA-2 on a subset of Books3 filtered data. It is an auto-regressive language model, based on the transformer architecture.
**Model date:**
LWM-Text-Chat-128K was trained in December 2023.
**Paper or resources for more information:**
https://largeworldmodel.github.io/
## License
Llama 2 is licensed under the LLAMA 2 Community License,
Copyright (c) Meta Platforms, Inc. All Rights Reserved.
**Where to send questions or comments about the model:**
https://github.com/LargeWorldModel/lwm/issues
## Training dataset
- A 92K-document subset of Books3, each document containing 100K to 200K tokens
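The card does not include an inference snippet. A minimal sketch using llama-cpp-python is shown below; the GGUF filename and the chat prompt template are assumptions, not details taken from this repository.

```python
# pip install llama-cpp-python
from llama_cpp import Llama

# The filename below is a placeholder; point it at whichever quantization you downloaded from this repo.
llm = Llama(model_path="LWM-Text-Chat-128K-Q4_K_M.gguf", n_ctx=8192)

prompt = "USER: What is a large world model?\nASSISTANT:"  # chat template is an assumption
out = llm(prompt, max_tokens=128)
print(out["choices"][0]["text"])
```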
|
carlosug/facebook-7b-text-to-sql
|
carlosug
| 2024-02-16T18:29:43Z | 0 | 0 |
peft
|
[
"peft",
"tensorboard",
"safetensors",
"opt",
"trl",
"sft",
"generated_from_trainer",
"dataset:generator",
"base_model:facebook/opt-350m",
"base_model:adapter:facebook/opt-350m",
"license:other",
"region:us"
] | null | 2024-02-16T16:49:49Z |
---
license: other
library_name: peft
tags:
- trl
- sft
- generated_from_trainer
datasets:
- generator
base_model: facebook/opt-350m
model-index:
- name: facebook-7b-text-to-sql
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# facebook-7b-text-to-sql
This model is a fine-tuned version of [facebook/opt-350m](https://huggingface.co/facebook/opt-350m) on the generator dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 3
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 6
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- lr_scheduler_warmup_ratio: 0.03
- num_epochs: 3
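For reference, a sketch of how these settings could be expressed as `TrainingArguments`; the real training script (including the TRL `SFTTrainer` setup and PEFT config) is not part of this card, so the output directory and omitted options are assumptions.

```python
from transformers import TrainingArguments

# Rough reconstruction of the configuration above; gradient accumulation
# yields an effective train batch size of 3 * 2 = 6.
training_args = TrainingArguments(
    output_dir="facebook-7b-text-to-sql",
    learning_rate=2e-4,
    per_device_train_batch_size=3,
    per_device_eval_batch_size=8,
    gradient_accumulation_steps=2,
    lr_scheduler_type="constant",
    warmup_ratio=0.03,
    num_train_epochs=3,
    seed=42,
)
```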
### Training results
### Framework versions
- PEFT 0.7.2.dev0
- Transformers 4.36.2
- Pytorch 2.1.0
- Datasets 2.16.1
- Tokenizers 0.15.2
|
songbo/dst_model
|
songbo
| 2024-02-16T18:27:44Z | 92 | 0 |
transformers
|
[
"transformers",
"pytorch",
"t5",
"text2text-generation",
"generated_from_trainer",
"base_model:google-t5/t5-small",
"base_model:finetune:google-t5/t5-small",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2024-02-16T12:06:18Z |
---
license: apache-2.0
base_model: google-t5/t5-small
tags:
- generated_from_trainer
model-index:
- name: dst_model
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# dst_model
This model is a fine-tuned version of [google-t5/t5-small](https://huggingface.co/google-t5/t5-small) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.001
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 5000
### Training results
### Framework versions
- Transformers 4.34.1
- Pytorch 1.13.1+cu117
- Datasets 2.10.0
- Tokenizers 0.14.1
|
OsherElhadad/ppo-Pyramids
|
OsherElhadad
| 2024-02-16T18:26:17Z | 0 | 0 |
ml-agents
|
[
"ml-agents",
"Pyramids",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Pyramids",
"region:us"
] |
reinforcement-learning
| 2024-02-16T18:25:13Z |
---
library_name: ml-agents
tags:
- Pyramids
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Pyramids
---
# **ppo** Agent playing **Pyramids**
This is a trained model of a **ppo** agent playing **Pyramids**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. If the environment is part of the ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: OsherElhadad/ppo-Pyramids
3. Select your *.nn / *.onnx file
4. Click on Watch the agent play 👀
|
chasedreaminf/Dream-7B-slerp
|
chasedreaminf
| 2024-02-16T18:19:41Z | 49 | 0 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"merge",
"mergekit",
"lazymergekit",
"ignos/Mistral-T5-7B-v1",
"Toten5/Marcoroni-neural-chat-7B-v2",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-02-16T18:17:42Z |
---
license: apache-2.0
tags:
- merge
- mergekit
- lazymergekit
- ignos/Mistral-T5-7B-v1
- Toten5/Marcoroni-neural-chat-7B-v2
---
# Dream-7B-slerp
Dream-7B-slerp is a merge of the following models using [mergekit](https://github.com/cg123/mergekit):
* [ignos/Mistral-T5-7B-v1](https://huggingface.co/ignos/Mistral-T5-7B-v1)
* [Toten5/Marcoroni-neural-chat-7B-v2](https://huggingface.co/Toten5/Marcoroni-neural-chat-7B-v2)
## 🧩 Configuration
```yaml
# The merge configuration was written to ./merge/mergekit_config.yml, but its contents were not embedded when this card was generated.
```
|
LoneStriker/LWM-Text-Chat-1M-GPTQ
|
LoneStriker
| 2024-02-16T18:11:02Z | 4 | 1 |
transformers
|
[
"transformers",
"pytorch",
"llama",
"text-generation",
"autotrain_compatible",
"text-generation-inference",
"4-bit",
"gptq",
"region:us"
] |
text-generation
| 2024-02-16T18:09:13Z |
---
inference: false
---
<br>
<br>
# LWM-Text-1M-Chat Model Card
## Model details
**Model type:**
LWM-Text-1M-Chat is an open-source model trained from LLaMA-2 on a subset of Books3 filtered data. It is an auto-regressive language model, based on the transformer architecture.
**Model date:**
LWM-Text-1M-Chat was trained in December 2023.
**Paper or resources for more information:**
https://largeworldmodel.github.io/
## License
Llama 2 is licensed under the LLAMA 2 Community License,
Copyright (c) Meta Platforms, Inc. All Rights Reserved.
**Where to send questions or comments about the model:**
https://github.com/LargeWorldModel/lwm/issues
## Training dataset
- An 800-document subset of Books3, each document containing over 1M tokens
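The card itself provides no loading example. A minimal sketch using transformers with a GPTQ backend (e.g. accelerate plus optimum/auto-gptq installed) is shown below; the prompt format is an assumption.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "LoneStriker/LWM-Text-Chat-1M-GPTQ"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto", torch_dtype=torch.float16)

prompt = "USER: Summarize the idea of a large world model in one sentence.\nASSISTANT:"  # template is an assumption
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output_ids = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```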
|
LoneStriker/LWM-Text-Chat-1M-4.0bpw-h6-exl2
|
LoneStriker
| 2024-02-16T17:53:35Z | 1 | 0 |
transformers
|
[
"transformers",
"pytorch",
"llama",
"text-generation",
"autotrain_compatible",
"text-generation-inference",
"region:us"
] |
text-generation
| 2024-02-16T17:51:58Z |
---
inference: false
---
<br>
<br>
# LWM-Text-1M-Chat Model Card
## Model details
**Model type:**
LWM-Text-1M-Chat is an open-source model trained from LLaMA-2 on a subset of Books3 filtered data. It is an auto-regressive language model, based on the transformer architecture.
**Model date:**
LWM-Text-1M-Chat was trained in December 2023.
**Paper or resources for more information:**
https://largeworldmodel.github.io/
## License
Llama 2 is licensed under the LLAMA 2 Community License,
Copyright (c) Meta Platforms, Inc. All Rights Reserved.
**Where to send questions or comments about the model:**
https://github.com/LargeWorldModel/lwm/issues
## Training dataset
- An 800-document subset of Books3, each document containing over 1M tokens
|
eurekalabdawara/image_classification
|
eurekalabdawara
| 2024-02-16T17:52:27Z | 6 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"vit",
"image-classification",
"generated_from_trainer",
"dataset:imagefolder",
"base_model:google/vit-base-patch16-224-in21k",
"base_model:finetune:google/vit-base-patch16-224-in21k",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2024-02-16T16:05:47Z |
---
license: apache-2.0
base_model: google/vit-base-patch16-224-in21k
tags:
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
model-index:
- name: image_classification
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: imagefolder
type: imagefolder
config: default
split: train
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.5375
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# image_classification
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2530
- Accuracy: 0.5375
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 40 | 1.9208 | 0.25 |
| No log | 2.0 | 80 | 1.5773 | 0.425 |
| No log | 3.0 | 120 | 1.4861 | 0.4188 |
| No log | 4.0 | 160 | 1.4287 | 0.4813 |
| No log | 5.0 | 200 | 1.3897 | 0.5 |
| No log | 6.0 | 240 | 1.3243 | 0.525 |
| No log | 7.0 | 280 | 1.3144 | 0.5125 |
| No log | 8.0 | 320 | 1.3149 | 0.4688 |
| No log | 9.0 | 360 | 1.3041 | 0.475 |
| No log | 10.0 | 400 | 1.2425 | 0.55 |
| No log | 11.0 | 440 | 1.3743 | 0.4813 |
| No log | 12.0 | 480 | 1.3849 | 0.4688 |
| 1.0637 | 13.0 | 520 | 1.2804 | 0.5437 |
| 1.0637 | 14.0 | 560 | 1.3975 | 0.4875 |
| 1.0637 | 15.0 | 600 | 1.3569 | 0.525 |
| 1.0637 | 16.0 | 640 | 1.3928 | 0.5 |
| 1.0637 | 17.0 | 680 | 1.3665 | 0.5 |
| 1.0637 | 18.0 | 720 | 1.3320 | 0.5188 |
| 1.0637 | 19.0 | 760 | 1.3358 | 0.5 |
| 1.0637 | 20.0 | 800 | 1.3064 | 0.5312 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu121
- Datasets 2.17.0
- Tokenizers 0.15.2
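A minimal inference sketch with the transformers pipeline (the image path is a placeholder; the class labels come from the imagefolder dataset used for fine-tuning):

```python
from transformers import pipeline

classifier = pipeline("image-classification", model="eurekalabdawara/image_classification")
print(classifier("example.jpg"))  # returns the top predicted labels with scores
```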
|
vapogore/mbart-spa-miso
|
vapogore
| 2024-02-16T17:48:10Z | 90 | 0 |
transformers
|
[
"transformers",
"safetensors",
"mbart",
"text2text-generation",
"simplification",
"generated_from_trainer",
"base_model:facebook/mbart-large-50",
"base_model:finetune:facebook/mbart-large-50",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2024-02-15T18:01:10Z |
---
license: mit
base_model: facebook/mbart-large-50
tags:
- simplification
- generated_from_trainer
metrics:
- bleu
model-index:
- name: mbart-spa-miso
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mbart-spa-miso
This model is a fine-tuned version of [facebook/mbart-large-50](https://huggingface.co/facebook/mbart-large-50) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 3.5140
- Bleu: 10.6012
- Gen Len: 21.5854
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5.6e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|
| No log | 1.0 | 16 | 3.7443 | 7.8006 | 18.3171 |
| No log | 2.0 | 32 | 3.5140 | 10.6012 | 21.5854 |
### Framework versions
- Transformers 4.37.2
- Pytorch 2.1.0+cu121
- Datasets 2.17.0
- Tokenizers 0.15.2
|
kavyapatchikora02/my-flowers-xzg
|
kavyapatchikora02
| 2024-02-16T17:43:53Z | 2 | 0 |
diffusers
|
[
"diffusers",
"safetensors",
"NxtWave-GenAI-Webinar",
"text-to-image",
"stable-diffusion",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2024-02-16T17:39:48Z |
---
license: creativeml-openrail-m
tags:
- NxtWave-GenAI-Webinar
- text-to-image
- stable-diffusion
---
### My-Flowers-XZG Dreambooth model trained by kavyapatchikora02 following the "Build your own Gen AI model" session by NxtWave.
Project Submission Code: 21U41A0538
Sample pictures of this concept:
|
felixbrock/test_trainer
|
felixbrock
| 2024-02-16T17:39:56Z | 195 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:google-bert/bert-base-cased",
"base_model:finetune:google-bert/bert-base-cased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-02-16T17:39:13Z |
---
license: apache-2.0
base_model: bert-base-cased
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: test_trainer
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# test_trainer
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0632
- Accuracy: 0.992
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 125 | 0.0628 | 0.992 |
| No log | 2.0 | 250 | 0.0632 | 0.992 |
| No log | 3.0 | 375 | 0.0632 | 0.992 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu121
- Datasets 2.17.0
- Tokenizers 0.15.2
|
alitolga/electra-base-generator-QuestionAns
|
alitolga
| 2024-02-16T17:39:45Z | 0 | 0 | null |
[
"safetensors",
"generated_from_trainer",
"base_model:google/electra-base-generator",
"base_model:finetune:google/electra-base-generator",
"license:apache-2.0",
"region:us"
] | null | 2024-02-16T17:35:31Z |
---
license: apache-2.0
base_model: google/electra-base-generator
tags:
- generated_from_trainer
model-index:
- name: electra-base-generator-QuestionAns
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# electra-base-generator-QuestionAns
This model is a fine-tuned version of [google/electra-base-generator](https://huggingface.co/google/electra-base-generator) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 6.2043
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 31 | 6.2511 |
| No log | 2.0 | 62 | 6.2165 |
| No log | 3.0 | 93 | 6.2043 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.1+cu118
- Datasets 2.15.0
- Tokenizers 0.15.0
|
romain22222/sd-pokemon-model-lora
|
romain22222
| 2024-02-16T17:38:55Z | 1 | 0 |
diffusers
|
[
"diffusers",
"tensorboard",
"safetensors",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"lora",
"base_model:CompVis/stable-diffusion-v1-4",
"base_model:adapter:CompVis/stable-diffusion-v1-4",
"license:creativeml-openrail-m",
"region:us"
] |
text-to-image
| 2024-02-16T11:52:04Z |
---
license: creativeml-openrail-m
library_name: diffusers
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- lora
inference: true
base_model: CompVis/stable-diffusion-v1-4
---
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# LoRA text2image fine-tuning - romain22222/sd-pokemon-model-lora
These are LoRA adaptation weights for CompVis/stable-diffusion-v1-4. The weights were fine-tuned on the tungdop2/pokemon dataset. You can find some example images below.




## Intended uses & limitations
#### How to use
```python
# TODO: add an example code snippet for running this diffusion pipeline
```
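Until the card's own snippet above is filled in, here is a hedged sketch of the usual diffusers workflow for LoRA weights produced by the text-to-image LoRA training script; the prompt, dtype, and exact API call are assumptions, not details taken from this card.

```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "CompVis/stable-diffusion-v1-4", torch_dtype=torch.float16
).to("cuda")
pipe.load_lora_weights("romain22222/sd-pokemon-model-lora")  # attaches the LoRA adaptation weights

image = pipe("a small green pokemon with large eyes", num_inference_steps=30).images[0]
image.save("pokemon.png")
```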
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training details
[TODO: describe the data used to train the model]
|
zzzzzioni/patent-word-recog-tokenizer
|
zzzzzioni
| 2024-02-16T17:38:28Z | 0 | 0 | null |
[
"sentence-similarity",
"ko",
"region:us"
] |
sentence-similarity
| 2024-02-16T17:22:14Z |
---
language:
- ko
pipeline_tag: sentence-similarity
---
|
maxfrax/distilbert-base-uncased-finetuned-emotion
|
maxfrax
| 2024-02-16T17:32:59Z | 93 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-02-16T17:20:28Z |
---
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
config: split
split: validation
args: split
metrics:
- name: Accuracy
type: accuracy
value: 0.926
- name: F1
type: f1
value: 0.9258243133918047
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2134
- Accuracy: 0.926
- F1: 0.9258
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| No log | 1.0 | 250 | 0.3212 | 0.906 | 0.9047 |
| No log | 2.0 | 500 | 0.2134 | 0.926 | 0.9258 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu121
- Datasets 2.17.0
- Tokenizers 0.15.2
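A minimal sketch for querying the classifier directly with transformers (the example sentence is illustrative):

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model_id = "maxfrax/distilbert-base-uncased-finetuned-emotion"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

inputs = tokenizer("I can't wait to see you again!", return_tensors="pt")
probs = torch.softmax(model(**inputs).logits, dim=-1)[0]
print(model.config.id2label[int(probs.argmax())])  # one of the emotion labels
```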
|
shahzebnaveed/NeuralHermes-2.5-Mistral-7B
|
shahzebnaveed
| 2024-02-16T17:32:36Z | 52 | 0 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"conversational",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-02-15T14:52:31Z |
---
library_name: transformers
license: apache-2.0
---
# Model Card for NeuralHermes 2.5 - Mistral 7B
NeuralHermes is based on the teknium/OpenHermes-2.5-Mistral-7B model that has been further fine-tuned with Direct Preference Optimization (DPO) using the Intel/orca_dpo_pairs dataset, reformatted with the ChatML template.
It is directly inspired by the RLHF process described by Intel/neural-chat-7b-v3-1's authors to improve performance.
**IMPORTANT**
- This model was run for only 2 steps before the GPU ran out of memory, so it is not fully fine-tuned with DPO.
- To make it fit on a small GPU, I deliberately reduced the training parameters (number of LoRA adapters, alpha, etc.), so the values used are not ideal.
## Uses
You can use the following code to run this model:
```python
import transformers
from transformers import AutoTokenizer

model_id = "shahzebnaveed/NeuralHermes-2.5-Mistral-7B"

# Format prompt
message = [
    {"role": "system", "content": "You are a helpful assistant chatbot."},
    {"role": "user", "content": "What is a Large Language Model?"}
]
tokenizer = AutoTokenizer.from_pretrained(model_id)
prompt = tokenizer.apply_chat_template(message, add_generation_prompt=True, tokenize=False)

# Create pipeline
pipeline = transformers.pipeline(
    "text-generation",
    model=model_id,
    tokenizer=tokenizer
)

# Generate text
sequences = pipeline(
    prompt,
    do_sample=True,
    temperature=0.7,
    top_p=0.9,
    num_return_sequences=1,
    max_length=200,
)
print(sequences[0]['generated_text'])
```
|
daswer123/sentence2danboorutags-qlora-11k
|
daswer123
| 2024-02-16T17:24:38Z | 0 | 0 | null |
[
"dataset:daswer123/sentence-danbooru-tags-dataset",
"license:mit",
"region:us"
] | null | 2024-02-16T17:17:05Z |
---
license: mit
datasets:
- daswer123/sentence-danbooru-tags-dataset
---
A QLoRA adapter for openchat-3.5-1210 that converts a given sentence into a set of danbooru tags.
Format: OpenChat
Training format: Alpaca
This is the first LoRA I have trained, and I am quite happy with the result. The QLoRA was trained on the full 4-bit openchat-3.5-1210 model with a rank of 64 for 3 epochs.
The dataset contained 11k instructions with examples of how to convert sentences into tags.
I use the following prompt; I'm not sure it's optimal, so you may be able to do better:
```
You must follow these rules:
- If a character is alone, add the "solo" tag.
- Describe physical attributes such as hair color or clothing (e.g., "pink hair," "gothic lolita fashion").
- Describe actions of characters if any (e.g., “playing violin,” “studying”).
- Mention setting details like location and background objects (e.g., ”music room”, “classical instruments”).
- Keep each tag separated by a comma.
- All words should be tags that can be used on danbooru; they should be as informative as possible and represent a single entity or action.
- From the verbs, select words that can denote a place.
- Always start with the number of characters and their gender (e.g., "1girl," "3boys").
- If a person is described by a noun, e.g. "Beautiful girl", then it should be split as follows: 1girl, beautiful.
- The output should only include girl(s) or boy(s) if a human is present; pure scenery can leave them out.
- Answer only in English.
You have to convert the user input into danbooru tags
```
I use it with the [openchat-3.5-1210-4.0bpw model](https://huggingface.co/LoneStriker/openchat-3.5-1210-4.0bpw-h6-exl2-2).
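The adapter can also be attached to the 4-bit base model with transformers and peft; a minimal sketch is below (generation and prompt assembly are left out, and the quantization setup is an assumption rather than the exact training configuration):

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

base_id = "openchat/openchat-3.5-1210"
tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(
    base_id,
    quantization_config=BitsAndBytesConfig(load_in_4bit=True),
    device_map="auto",
)
model = PeftModel.from_pretrained(base, "daswer123/sentence2danboorutags-qlora-11k")
```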
|
laureanadcastro/mbart-neutralization
|
laureanadcastro
| 2024-02-16T17:15:56Z | 91 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"mbart",
"text2text-generation",
"simplification",
"generated_from_trainer",
"base_model:facebook/mbart-large-50",
"base_model:finetune:facebook/mbart-large-50",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2024-02-16T15:44:20Z |
---
license: mit
base_model: facebook/mbart-large-50
tags:
- simplification
- generated_from_trainer
metrics:
- bleu
model-index:
- name: mbart-neutralization
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mbart-neutralization
This model is a fine-tuned version of [facebook/mbart-large-50](https://huggingface.co/facebook/mbart-large-50) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0181
- Bleu: 98.7341
- Gen Len: 18.4896
## Model description
El modelo "laureanadcastro/mbart-neutralization" es un modelo de traducción desarrollado para llevar a cabo la neutralización de género en textos en español.
Utiliza la arquitectura seq2seq y está entrenado para traducir texto en español "normal" a español "inclusivo", donde se neutraliza el género gramatical para hacerlo más inclusivo.
## Intended uses & limitations
This model is well suited to use cases that require translating Spanish text into a more inclusive, gender-neutral form.
It is appropriate for applications that aim to promote diversity and inclusion in written language.
However, keep in mind that although the model was trained on data specifically curated for gender neutralization, there may be cases where the neutralization is imperfect or does not fit the intended context.
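A minimal sketch of how the model could be called with the transformers pipeline; the example sentence is illustrative, and mBART-50 language codes or forced BOS tokens may need to be set explicitly depending on how the checkpoint was saved.

```python
from transformers import pipeline

neutralizer = pipeline("text2text-generation", model="laureanadcastro/mbart-neutralization")
print(neutralizer("Los profesores y los alumnos asistieron a la reunión."))
```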
## Training and evaluation data
The model was trained on the "hackathon-pln-es/neutral-es" dataset, also known as the "Spanish Gender Neutralization dataset", which contains examples of Spanish text that has been gender-neutralized.
The dataset was tokenized with the "mbart-large-50" model.
During training, the sacrebleu metric, widely used in machine translation, was used to evaluate the model's performance.
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5.6e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|
| No log | 1.0 | 440 | 0.0307 | 91.2911 | 18.25 |
| 0.2343 | 2.0 | 880 | 0.0181 | 98.7341 | 18.4896 |
### Framework versions
- Transformers 4.37.2
- Pytorch 2.1.0+cu121
- Datasets 2.17.0
- Tokenizers 0.15.2
|
alitolga/deberta-v3-base-QuestionAns
|
alitolga
| 2024-02-16T17:04:33Z | 0 | 0 | null |
[
"safetensors",
"generated_from_trainer",
"base_model:microsoft/deberta-v3-base",
"base_model:finetune:microsoft/deberta-v3-base",
"license:mit",
"region:us"
] | null | 2024-02-13T19:34:30Z |
---
license: mit
base_model: microsoft/deberta-v3-base
tags:
- generated_from_trainer
model-index:
- name: deberta-v3-base-QuestionAns
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# deberta-v3-base-QuestionAns
This model is a fine-tuned version of [microsoft/deberta-v3-base](https://huggingface.co/microsoft/deberta-v3-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 5.0560
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 30 | 5.2473 |
| No log | 2.0 | 60 | 5.1047 |
| No log | 3.0 | 90 | 5.0560 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.1+cu118
- Datasets 2.15.0
- Tokenizers 0.15.0
|
AlexanderHolmes0/Llama-2-7b-hf-sentiment-3
|
AlexanderHolmes0
| 2024-02-16T17:00:10Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"generated_from_trainer",
"base_model:meta-llama/Llama-2-7b-hf",
"base_model:adapter:meta-llama/Llama-2-7b-hf",
"region:us"
] | null | 2024-02-16T17:00:08Z |
---
library_name: peft
tags:
- generated_from_trainer
base_model: meta-llama/Llama-2-7b-hf
model-index:
- name: Llama-2-7b-hf-sentiment-3
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Llama-2-7b-hf-sentiment-3
This model is a fine-tuned version of [meta-llama/Llama-2-7b-hf](https://huggingface.co/meta-llama/Llama-2-7b-hf) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- distributed_type: multi-GPU
- num_devices: 2
- total_train_batch_size: 256
- total_eval_batch_size: 256
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 2 | 3.1564 | 0.2 |
### Framework versions
- PEFT 0.8.2
- Transformers 4.37.2
- Pytorch 2.2.0+cu121
- Datasets 2.16.1
- Tokenizers 0.15.1
|
laureanadcastro/mbart-neutralization-2
|
laureanadcastro
| 2024-02-16T16:53:38Z | 92 | 0 |
transformers
|
[
"transformers",
"safetensors",
"mbart",
"text2text-generation",
"simplification",
"generated_from_trainer",
"base_model:facebook/mbart-large-50",
"base_model:finetune:facebook/mbart-large-50",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2024-02-16T16:48:24Z |
---
license: mit
base_model: facebook/mbart-large-50
tags:
- simplification
- generated_from_trainer
metrics:
- bleu
model-index:
- name: mbart-neutralization-2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mbart-neutralization-2
This model is a fine-tuned version of [facebook/mbart-large-50](https://huggingface.co/facebook/mbart-large-50) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.6087
- Bleu: 19.7978
- Gen Len: 12.3415
## Model description
El modelo "glombardo/misogynistic-statements-and-their-potential-restructuring" es un modelo de traducción desarrollado para reestructurar frases misóginas en frases no misóginas.
Utiliza la arquitectura seq2seq y está entrenado para traducir texto que contiene expresiones misóginas a texto que presenta una formulación no misógina.
## Intended uses & limitations
This model is useful in use cases where the goal is to address and mitigate misogyny in written language.
It is appropriate for applications that seek to promote gender equity and combat hate speech.
However, keep in mind that the model may not be able to identify and restructure every form of misogynistic expression, and its effectiveness may vary with the context and complexity of the text.
## Training and evaluation data
The model was trained on the "glombardo/misogynistic-statements-and-their-potential-restructuring" dataset, which contains examples of misogynistic sentences and their rewrites as non-misogynistic sentences.
The dataset was tokenized with the "mbart-large-50" model.
During training, the sacrebleu metric, commonly used to evaluate machine translation systems, was used to measure the model's performance.
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5.6e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:--------:|
| No log | 1.0 | 16 | 4.0861 | 0.5502 | 131.1463 |
| No log | 2.0 | 32 | 1.6087 | 19.7978 | 12.3415 |
### Framework versions
- Transformers 4.37.2
- Pytorch 2.1.0+cu121
- Datasets 2.17.0
- Tokenizers 0.15.2
|
kar-saaragh/Reinforce-PixelCopter-V4
|
kar-saaragh
| 2024-02-16T16:43:02Z | 0 | 0 | null |
[
"Pixelcopter-PLE-v0",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] |
reinforcement-learning
| 2024-02-16T16:29:45Z |
---
tags:
- Pixelcopter-PLE-v0
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-PixelCopter-V4
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Pixelcopter-PLE-v0
type: Pixelcopter-PLE-v0
metrics:
- type: mean_reward
value: 49.80 +/- 26.53
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **Pixelcopter-PLE-v0**
This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0** .
To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
llmware/slim-ner
|
llmware
| 2024-02-16T16:31:16Z | 147 | 16 |
transformers
|
[
"transformers",
"pytorch",
"llama",
"text-generation",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"region:us"
] |
text-generation
| 2024-01-19T09:50:05Z |
---
license: apache-2.0
inference: false
---
# SLIM-NER
<!-- Provide a quick summary of what the model is/does. -->
**slim-ner** is part of the SLIM ("**S**tructured **L**anguage **I**nstruction **M**odel") model series, consisting of small, specialized 1B-parameter decoder-based models fine-tuned for function calling.
slim-ner has been fine-tuned for **named entity extraction** function calls, generating output consisting of a python dictionary corresponding to specified keys, e.g.:
`{"people": ["..."], "organization":["..."], "location": ["..."]}`
SLIM models are designed to generate structured outputs that can be used programmatically as part of a multi-step, multi-model LLM-based automation workflow.
SLIM models can be used 'out of the box' for rapid prototyping in most general purpose use cases, and are designed to serve as a solid base that can be easily fine-tuned and adapted for specialized production use cases.
Each slim model has a 'quantized tool' version, e.g., [**'slim-ner-tool'**](https://huggingface.co/llmware/slim-ner-tool).
## Prompt format:
`function = "classify"`
`params = "people, organization, location"`
`prompt = "<human> " + {text} + "\n" + `
`"<{function}> " + {params} + "</{function}>" + "\n<bot>:"`
<details>
<summary>Transformers Script </summary>
```python
import ast
from transformers import AutoModelForCausalLM, AutoTokenizer

model = AutoModelForCausalLM.from_pretrained("llmware/slim-ner")
tokenizer = AutoTokenizer.from_pretrained("llmware/slim-ner")

function = "classify"
params = "people, organization, location"
text = "Yesterday, in Redmond, Satya Nadella announced that Microsoft would be launching a new AI strategy."

prompt = "<human>: " + text + "\n" + f"<{function}> {params} </{function}>\n<bot>:"

inputs = tokenizer(prompt, return_tensors="pt")
start_of_input = len(inputs.input_ids[0])

outputs = model.generate(
    inputs.input_ids.to('cpu'),
    eos_token_id=tokenizer.eos_token_id,
    pad_token_id=tokenizer.eos_token_id,
    do_sample=True,
    temperature=0.3,
    max_new_tokens=100
)

output_only = tokenizer.decode(outputs[0][start_of_input:], skip_special_tokens=True)
print("output only: ", output_only)

# here's the fun part: convert the generated string into a python dictionary
try:
    output_dict = ast.literal_eval(output_only)
    print("success - converted to python dictionary automatically")
except (ValueError, SyntaxError):
    print("fail - could not convert to python dictionary automatically - ", output_only)
```
</details>
<details>
<summary>Using as Function Call in LLMWare</summary>
```python
from llmware.models import ModelCatalog

# `text` is the same passage used in the transformers example above
slim_model = ModelCatalog().load_model("llmware/slim-ner")
response = slim_model.function_call(text, params=["people", "organization", "location"], function="classify")

print("llmware - llm_response: ", response)
```
</details>
## Model Card Contact
Darren Oberst & llmware team
[Join us on Discord](https://discord.gg/MhZn5Nc39h)
|
gabrielganan/vit-emotion_classification
|
gabrielganan
| 2024-02-16T16:23:32Z | 9 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"vit",
"image-classification",
"generated_from_trainer",
"dataset:imagefolder",
"base_model:google/vit-base-patch16-224-in21k",
"base_model:finetune:google/vit-base-patch16-224-in21k",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2024-02-16T11:28:04Z |
---
license: apache-2.0
base_model: google/vit-base-patch16-224-in21k
tags:
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
model-index:
- name: vit-emotion_classification
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: imagefolder
type: imagefolder
config: default
split: train
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.59375
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vit-emotion_classification
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2112
- Accuracy: 0.5938
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 15
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 40 | 1.8928 | 0.375 |
| No log | 2.0 | 80 | 1.5709 | 0.375 |
| No log | 3.0 | 120 | 1.4385 | 0.4938 |
| No log | 4.0 | 160 | 1.3183 | 0.5437 |
| No log | 5.0 | 200 | 1.2514 | 0.5813 |
| No log | 6.0 | 240 | 1.2412 | 0.5563 |
| No log | 7.0 | 280 | 1.2048 | 0.5875 |
| No log | 8.0 | 320 | 1.1530 | 0.6188 |
| No log | 9.0 | 360 | 1.1870 | 0.55 |
| No log | 10.0 | 400 | 1.2160 | 0.5563 |
| No log | 11.0 | 440 | 1.1182 | 0.5563 |
| No log | 12.0 | 480 | 1.1162 | 0.5938 |
| 1.0857 | 13.0 | 520 | 1.0960 | 0.6312 |
| 1.0857 | 14.0 | 560 | 1.1724 | 0.55 |
| 1.0857 | 15.0 | 600 | 1.1100 | 0.625 |
### Framework versions
- Transformers 4.37.2
- Pytorch 2.1.0+cu121
- Datasets 2.17.0
- Tokenizers 0.15.2
|
ashishkgpian/mixtral4bit
|
ashishkgpian
| 2024-02-16T16:20:13Z | 3 | 0 |
transformers
|
[
"transformers",
"safetensors",
"mixtral",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
] |
text-generation
| 2024-02-16T16:13:44Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
RickMartel/GPT2_FT_By_NT_RAND_v11
|
RickMartel
| 2024-02-16T16:13:34Z | 141 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"gpt2",
"text-generation",
"generated_from_trainer",
"base_model:distilbert/distilgpt2",
"base_model:finetune:distilbert/distilgpt2",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-02-16T15:23:21Z |
---
license: apache-2.0
base_model: distilgpt2
tags:
- generated_from_trainer
model-index:
- name: GPT2_FT_By_NT_RAND_v11
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# GPT2_FT_By_NT_RAND_v11
This model is a fine-tuned version of [distilgpt2](https://huggingface.co/distilgpt2) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- num_epochs: 5
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu121
- Datasets 2.17.0
- Tokenizers 0.15.2
|
chaouch/distilhubert-finetuned-gtzan
|
chaouch
| 2024-02-16T16:11:33Z | 148 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"hubert",
"audio-classification",
"generated_from_trainer",
"dataset:marsyas/gtzan",
"base_model:chaouch/distilhubert-finetuned-gtzan",
"base_model:finetune:chaouch/distilhubert-finetuned-gtzan",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
audio-classification
| 2024-02-15T14:44:09Z |
---
license: apache-2.0
base_model: chaouch/distilhubert-finetuned-gtzan
tags:
- generated_from_trainer
datasets:
- marsyas/gtzan
metrics:
- accuracy
model-index:
- name: distilhubert-finetuned-gtzan-finetuned-gtzan
results:
- task:
name: Audio Classification
type: audio-classification
dataset:
name: GTZAN
type: marsyas/gtzan
config: all
split: train
args: all
metrics:
- name: Accuracy
type: accuracy
value: 0.9666666666666667
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilhubert-finetuned-gtzan-finetuned-gtzan
This model is a fine-tuned version of [chaouch/distilhubert-finetuned-gtzan](https://huggingface.co/chaouch/distilhubert-finetuned-gtzan) on the GTZAN dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1733
- Accuracy: 0.9667
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 6
- eval_batch_size: 6
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 7
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.026 | 1.0 | 135 | 0.2289 | 0.9444 |
| 0.1351 | 2.0 | 270 | 0.1379 | 0.9778 |
| 0.01 | 3.0 | 405 | 0.2310 | 0.9667 |
| 0.0053 | 4.0 | 540 | 0.1727 | 0.9667 |
| 0.0002 | 5.0 | 675 | 0.1703 | 0.9667 |
| 0.0002 | 6.0 | 810 | 0.1722 | 0.9667 |
| 0.0002 | 7.0 | 945 | 0.1733 | 0.9667 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu121
- Datasets 2.17.0
- Tokenizers 0.15.2
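A minimal inference sketch with the transformers audio-classification pipeline (the file path is a placeholder):

```python
from transformers import pipeline

classifier = pipeline("audio-classification", model="chaouch/distilhubert-finetuned-gtzan")
print(classifier("track.wav"))  # returns genre labels with scores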
|
rparasa/segformer_400train80val_subset_10epochs
|
rparasa
| 2024-02-16T16:08:53Z | 16 | 0 |
transformers
|
[
"transformers",
"tf",
"segformer",
"generated_from_keras_callback",
"base_model:nvidia/mit-b0",
"base_model:finetune:nvidia/mit-b0",
"license:other",
"endpoints_compatible",
"region:us"
] | null | 2024-02-16T16:08:51Z |
---
license: other
base_model: nvidia/mit-b0
tags:
- generated_from_keras_callback
model-index:
- name: segformer_400train80val_subset_10epochs
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# segformer_400train80val_subset_10epochs
This model is a fine-tuned version of [nvidia/mit-b0](https://huggingface.co/nvidia/mit-b0) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.0838
- Validation Loss: 0.0902
- Epoch: 9
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': True, 'is_legacy_optimizer': False, 'learning_rate': 6e-05, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 0.3830 | 0.2178 | 0 |
| 0.1680 | 0.2520 | 1 |
| 0.1641 | 0.1378 | 2 |
| 0.1141 | 0.1192 | 3 |
| 0.1130 | 0.0994 | 4 |
| 0.0945 | 0.0984 | 5 |
| 0.0863 | 0.0950 | 6 |
| 0.0874 | 0.0905 | 7 |
| 0.0858 | 0.0918 | 8 |
| 0.0838 | 0.0902 | 9 |
### Framework versions
- Transformers 4.35.2
- TensorFlow 2.15.0
- Tokenizers 0.15.2
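The card does not show how to load the checkpoint. A hedged sketch is below, assuming the model was fine-tuned for semantic segmentation in TensorFlow/Keras and that preprocessing follows the base checkpoint; both are assumptions.

```python
from transformers import SegformerImageProcessor, TFSegformerForSemanticSegmentation

model = TFSegformerForSemanticSegmentation.from_pretrained("rparasa/segformer_400train80val_subset_10epochs")
processor = SegformerImageProcessor.from_pretrained("nvidia/mit-b0")  # assumed preprocessing from the base model
```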
|
fullstuck/image_classification
|
fullstuck
| 2024-02-16T16:08:15Z | 57 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"safetensors",
"vit",
"image-classification",
"generated_from_trainer",
"dataset:imagefolder",
"base_model:google/vit-base-patch16-224-in21k",
"base_model:finetune:google/vit-base-patch16-224-in21k",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2023-09-17T09:52:32Z |
---
license: apache-2.0
base_model: google/vit-base-patch16-224-in21k
tags:
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
model-index:
- name: image_classification
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: imagefolder
type: imagefolder
config: default
split: train
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.55625
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# image_classification
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 1.5284
- Accuracy: 0.5563
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 9
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 40 | 1.4223 | 0.525 |
| No log | 2.0 | 80 | 1.5923 | 0.4938 |
| No log | 3.0 | 120 | 1.4860 | 0.5563 |
| No log | 4.0 | 160 | 1.4983 | 0.5625 |
| No log | 5.0 | 200 | 1.5151 | 0.5938 |
| No log | 6.0 | 240 | 1.6818 | 0.5062 |
| No log | 7.0 | 280 | 1.6757 | 0.5125 |
| No log | 8.0 | 320 | 1.4647 | 0.5875 |
| No log | 9.0 | 360 | 1.4922 | 0.5875 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu121
- Datasets 2.17.0
- Tokenizers 0.15.2
|
MonkeyDdonut/dqn-SpaceInvadersNoFrameskip-v4
|
MonkeyDdonut
| 2024-02-16T16:05:25Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"SpaceInvadersNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2024-02-16T16:04:57Z |
---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
metrics:
- type: mean_reward
value: 29.00 +/- 64.30
name: mean_reward
verified: false
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga MonkeyDdonut -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```bash
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga MonkeyDdonut -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```bash
python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga MonkeyDdonut
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 100000.0),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
# Environment Arguments
```python
{'render_mode': 'rgb_array'}
```
|
Ripesh08/news_summarization
|
Ripesh08
| 2024-02-16T16:02:49Z | 90 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"t5",
"text2text-generation",
"generated_from_trainer",
"base_model:google-t5/t5-small",
"base_model:finetune:google-t5/t5-small",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2024-02-13T12:08:51Z |
---
license: apache-2.0
base_model: t5-small
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: news_summarization
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# news_summarization
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0010
- Rouge1: 0.9698
- Rouge2: 0.9659
- Rougel: 0.9698
- Rougelsum: 0.9699
- Gen Len: 16.9568
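As a usage sketch (the repo id is assumed from this card's name; the card itself documents no inference code), the checkpoint can be run through the standard summarization pipeline:
```python
from transformers import pipeline

# Load the fine-tuned T5 checkpoint from the Hub.
summarizer = pipeline("summarization", model="Ripesh08/news_summarization")

article = "Replace this placeholder with the news article you want to summarize."
print(summarizer(article, max_length=60, min_length=10, do_sample=False))
```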
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:---------:|:-------:|
| No log | 1.0 | 220 | 0.0024 | 0.9688 | 0.9647 | 0.9688 | 0.9688 | 16.9511 |
| No log | 2.0 | 440 | 0.0014 | 0.9694 | 0.9653 | 0.9694 | 0.9695 | 16.9591 |
| 0.114 | 3.0 | 660 | 0.0010 | 0.9698 | 0.9659 | 0.9698 | 0.9699 | 16.9568 |
| 0.114 | 4.0 | 880 | 0.0010 | 0.9698 | 0.9659 | 0.9698 | 0.9699 | 16.9568 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu121
- Datasets 2.17.0
- Tokenizers 0.15.2
|
llmixer/Midnight-Rose-103B-v2.0.3-3.5bpw-h6-exl2
|
llmixer
| 2024-02-16T15:50:15Z | 4 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"3.5bpw",
"h6",
"exl2",
"conversational",
"en",
"license:llama2",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-02-16T10:51:26Z |
---
license: llama2
language:
- en
pipeline_tag: conversational
tags:
- 3.5bpw
- h6
- exl2
---
Exllamav2 3.5bpw h6 quant for [Midnight-Rose-103B-v2.0.3](https://huggingface.co/sophosympatheia/Midnight-Rose-103B-v2.0.3).
Default calibration dataset.
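A minimal download sketch, assuming the standard huggingface_hub API (not part of the original card):
```python
from huggingface_hub import snapshot_download

# Fetch the ExLlamaV2 quant locally; the target directory name is arbitrary.
snapshot_download(
    repo_id="llmixer/Midnight-Rose-103B-v2.0.3-3.5bpw-h6-exl2",
    local_dir="Midnight-Rose-103B-v2.0.3-3.5bpw-h6-exl2",
)
```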
|
llmixer/Midnight-Rose-103B-v2.0.3-4.0bpw-h6-exl2
|
llmixer
| 2024-02-16T15:50:02Z | 3 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"4.0bpw",
"h6",
"exl2",
"conversational",
"en",
"license:llama2",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-02-16T10:36:43Z |
---
license: llama2
language:
- en
pipeline_tag: conversational
tags:
- 4.0bpw
- h6
- exl2
---
Exllamav2 4.0bpw h6 quant for [Midnight-Rose-103B-v2.0.3](https://huggingface.co/sophosympatheia/Midnight-Rose-103B-v2.0.3).
Default calibration dataset.
|
llmixer/Midnight-Rose-103B-v2.0.3-5.0bpw-h6-exl2
|
llmixer
| 2024-02-16T15:49:49Z | 8 | 1 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"5.0bpw",
"h6",
"exl2",
"conversational",
"en",
"license:llama2",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-02-16T10:41:56Z |
---
license: llama2
language:
- en
pipeline_tag: conversational
tags:
- 5.0bpw
- h6
- exl2
---
Exllamav2 5.0bpw h6 quant for [Midnight-Rose-103B-v2.0.3](https://huggingface.co/sophosympatheia/Midnight-Rose-103B-v2.0.3).
Default calibration dataset.
|
ChaitanyaPavani/life
|
ChaitanyaPavani
| 2024-02-16T15:46:34Z | 2 | 1 |
diffusers
|
[
"diffusers",
"safetensors",
"NxtWave-GenAI-Webinar",
"text-to-image",
"stable-diffusion",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2024-02-16T15:42:36Z |
---
license: creativeml-openrail-m
tags:
- NxtWave-GenAI-Webinar
- text-to-image
- stable-diffusion
---
### Life Dreambooth model trained by ChaitanyaPavani following the "Build your own Gen AI model" session by NxtWave.
Project Submission Code: 21U41A0535
Sample pictures of this concept:
|
lulygavri/t-pol-ActiveLearning
|
lulygavri
| 2024-02-16T15:45:41Z | 46 | 0 |
transformers
|
[
"transformers",
"tf",
"roberta",
"text-classification",
"generated_from_keras_callback",
"base_model:PlanTL-GOB-ES/roberta-base-bne",
"base_model:finetune:PlanTL-GOB-ES/roberta-base-bne",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-02-16T15:44:34Z |
---
license: apache-2.0
base_model: PlanTL-GOB-ES/roberta-base-bne
tags:
- generated_from_keras_callback
model-index:
- name: lulygavri/t-pol-ActiveLearning
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# lulygavri/t-pol-ActiveLearning
This model is a fine-tuned version of [PlanTL-GOB-ES/roberta-base-bne](https://huggingface.co/PlanTL-GOB-ES/roberta-base-bne) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 1.0921
- Validation Loss: 1.0869
- Train Accuracy: 0.2976
- Train Precision: [0. 0. 0.30120482]
- Train Precision W: 0.0896
- Train Recall: [0. 0. 1.]
- Train Recall W: 0.2976
- Train F1: [0. 0. 0.46296296]
- Train F1 W: 0.1378
- Epoch: 1
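A minimal inference sketch, assuming the repo loads as a standard TensorFlow sequence-classification checkpoint (label names are not documented in this card):
```python
import tensorflow as tf
from transformers import AutoTokenizer, TFAutoModelForSequenceClassification

repo = "lulygavri/t-pol-ActiveLearning"  # repo id taken from this card's name
tokenizer = AutoTokenizer.from_pretrained(repo)
model = TFAutoModelForSequenceClassification.from_pretrained(repo)

inputs = tokenizer("Texto de ejemplo para clasificar.", return_tensors="tf")
probs = tf.nn.softmax(model(**inputs).logits, axis=-1)
print(probs.numpy())
```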
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': True, 'is_legacy_optimizer': False, 'learning_rate': {'module': 'transformers.optimization_tf', 'class_name': 'WarmUp', 'config': {'initial_learning_rate': 2e-05, 'decay_schedule_fn': {'module': 'keras.optimizers.schedules', 'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': -428, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, 'registered_name': None}, 'warmup_steps': 500, 'power': 1.0, 'name': None}, 'registered_name': 'WarmUp'}, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Train Accuracy | Train Precision | Train Precision W | Train Recall | Train Recall W | Train F1 | Train F1 W | Epoch |
|:----------:|:---------------:|:--------------:|:----------------------------------:|:-----------------:|:------------:|:--------------:|:----------------------------------:|:----------:|:-----:|
| 1.0921 | 1.0869 | 0.2976 | [0. 0. 0.30120482] | 0.0896 | [0. 0. 1.] | 0.2976 | [0. 0. 0.46296296] | 0.1378 | 1 |
### Framework versions
- Transformers 4.35.2
- TensorFlow 2.15.0
- Datasets 2.17.0
- Tokenizers 0.15.2
|
bartowski/LWM-Text-Chat-1M-exl2
|
bartowski
| 2024-02-16T15:37:46Z | 0 | 0 | null |
[
"text-generation",
"region:us"
] |
text-generation
| 2024-02-16T15:23:14Z |
---
inference: false
quantized_by: bartowski
pipeline_tag: text-generation
---
## Exllama v2 Quantizations of LWM-Text-Chat-1M
Using <a href="https://github.com/turboderp/exllamav2/releases/tag/v0.0.13">turboderp's ExLlamaV2 v0.0.13</a> for quantization.
<b>The "main" branch only contains the measurement.json, download one of the other branches for the model (see below)</b>
Each branch contains an individual bits per weight, with the main one containing only the meaurement.json for further conversions.
Original model: https://huggingface.co/LargeWorldModel/LWM-Text-Chat-1M
No GQA - VRAM requirements will be higher
| Branch | Bits | lm_head bits | Size (4k) | Size (16k) | Description |
| -------------------------------------------------------------- | ---- | ------------ | --------- | ---------- | ----------- |
| [8_0](https://huggingface.co/bartowski/LWM-Text-Chat-1M-exl2/tree/8_0) | 8.0 | 8.0 | 9.4 GB | 15.6 GB | Maximum quality that ExLlamaV2 can produce, near unquantized performance. |
| [6_5](https://huggingface.co/bartowski/LWM-Text-Chat-1M-exl2/tree/6_5) | 6.5 | 8.0 | 8.6 GB | 14.8 GB | Near unquantized performance at vastly reduced size, **recommended**. |
| [5_0](https://huggingface.co/bartowski/LWM-Text-Chat-1M-exl2/tree/5_0) | 5.0 | 6.0 | 7.2 GB | 13.4 GB | Slightly lower quality vs 6.5, but usable on 8GB cards with 4k context. |
| [4_25](https://huggingface.co/bartowski/LWM-Text-Chat-1M-exl2/tree/4_25) | 4.25 | 6.0 | 6.5 GB | 12.7 GB | GPTQ equivalent bits per weight. |
| [3_5](https://huggingface.co/bartowski/LWM-Text-Chat-1M-exl2/tree/3_5) | 3.5 | 6.0 | 5.9 GB | 12.1 GB | Lower quality, not recommended. |
## Download instructions
With git:
```shell
git clone --single-branch --branch 6_5 https://huggingface.co/bartowski/LWM-Text-Chat-1M-exl2 LWM-Text-Chat-1M-exl2-6_5
```
With huggingface hub (credit to TheBloke for instructions):
```shell
pip3 install huggingface-hub
```
To download the `main` (only useful if you only care about measurement.json) branch to a folder called `LWM-Text-Chat-1M-exl2`:
```shell
mkdir LWM-Text-Chat-1M-exl2
huggingface-cli download bartowski/LWM-Text-Chat-1M-exl2 --local-dir LWM-Text-Chat-1M-exl2 --local-dir-use-symlinks False
```
To download from a different branch, add the `--revision` parameter:
Linux:
```shell
mkdir LWM-Text-Chat-1M-exl2-6_5
huggingface-cli download bartowski/LWM-Text-Chat-1M-exl2 --revision 6_5 --local-dir LWM-Text-Chat-1M-exl2-6_5 --local-dir-use-symlinks False
```
Windows (which apparently doesn't like _ in folders sometimes?):
```shell
mkdir LWM-Text-Chat-1M-exl2-6.5
huggingface-cli download bartowski/LWM-Text-Chat-1M-exl2 --revision 6_5 --local-dir LWM-Text-Chat-1M-exl2-6.5 --local-dir-use-symlinks False
```
Want to support my work? Visit my ko-fi page here: https://ko-fi.com/bartowski
|
gungbgs/image_classification
|
gungbgs
| 2024-02-16T15:31:22Z | 5 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"vit",
"image-classification",
"generated_from_trainer",
"dataset:imagefolder",
"base_model:google/vit-base-patch16-224-in21k",
"base_model:finetune:google/vit-base-patch16-224-in21k",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2024-02-16T08:18:24Z |
---
license: apache-2.0
base_model: google/vit-base-patch16-224-in21k
tags:
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
model-index:
- name: image_classification
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: imagefolder
type: imagefolder
config: default
split: train
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.45
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# image_classification
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 1.5137
- Accuracy: 0.45
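As a usage sketch (the repo id is assumed from this card's name), the fine-tuned ViT can be queried with the image-classification pipeline:
```python
from transformers import pipeline

# Load the fine-tuned ViT checkpoint from the Hub.
classifier = pipeline("image-classification", model="gungbgs/image_classification")

# Any local image path, URL, or PIL image works; the path below is a placeholder.
print(classifier("example.jpg"))
```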
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-06
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 40 | 1.5946 | 0.4062 |
| No log | 2.0 | 80 | 1.5868 | 0.4062 |
| No log | 3.0 | 120 | 1.5588 | 0.425 |
| No log | 4.0 | 160 | 1.5516 | 0.425 |
| No log | 5.0 | 200 | 1.5479 | 0.4313 |
| No log | 6.0 | 240 | 1.5150 | 0.4813 |
| No log | 7.0 | 280 | 1.5037 | 0.4625 |
| No log | 8.0 | 320 | 1.5131 | 0.475 |
| No log | 9.0 | 360 | 1.5091 | 0.425 |
| No log | 10.0 | 400 | 1.5117 | 0.4125 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu121
- Datasets 2.17.0
- Tokenizers 0.15.2
|
kazuma313/emotion_classification
|
kazuma313
| 2024-02-16T15:31:22Z | 14 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"vit",
"image-classification",
"generated_from_trainer",
"dataset:imagefolder",
"base_model:google/vit-base-patch16-224-in21k",
"base_model:finetune:google/vit-base-patch16-224-in21k",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2024-02-05T23:01:18Z |
---
license: apache-2.0
base_model: google/vit-base-patch16-224-in21k
tags:
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
model-index:
- name: emotion_classification
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: imagefolder
type: imagefolder
config: default
split: train
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.56875
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# emotion_classification
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1901
- Accuracy: 0.5687
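A lower-level usage sketch (repo id assumed from this card's name), loading the image processor and model directly instead of using the pipeline:
```python
import torch
from PIL import Image
from transformers import AutoImageProcessor, AutoModelForImageClassification

repo = "kazuma313/emotion_classification"  # assumed from this card's name
processor = AutoImageProcessor.from_pretrained(repo)
model = AutoModelForImageClassification.from_pretrained(repo)

image = Image.open("face.jpg")  # placeholder path
inputs = processor(images=image, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
print(model.config.id2label[logits.argmax(-1).item()])
```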
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 15
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 20 | 1.9937 | 0.225 |
| No log | 2.0 | 40 | 1.7466 | 0.4188 |
| No log | 3.0 | 60 | 1.5370 | 0.5375 |
| No log | 4.0 | 80 | 1.4797 | 0.5125 |
| No log | 5.0 | 100 | 1.3531 | 0.55 |
| No log | 6.0 | 120 | 1.3115 | 0.5687 |
| No log | 7.0 | 140 | 1.2982 | 0.5375 |
| No log | 8.0 | 160 | 1.2543 | 0.5437 |
| No log | 9.0 | 180 | 1.2666 | 0.525 |
| No log | 10.0 | 200 | 1.2427 | 0.5312 |
| No log | 11.0 | 220 | 1.2100 | 0.5687 |
| No log | 12.0 | 240 | 1.2494 | 0.5375 |
| No log | 13.0 | 260 | 1.2266 | 0.5625 |
| No log | 14.0 | 280 | 1.2360 | 0.5437 |
| No log | 15.0 | 300 | 1.1901 | 0.5687 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu121
- Datasets 2.17.0
- Tokenizers 0.15.2
|
UNAVS/image_classification
|
UNAVS
| 2024-02-16T15:25:06Z | 183 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"vit",
"image-classification",
"generated_from_trainer",
"dataset:imagefolder",
"base_model:google/vit-base-patch16-224-in21k",
"base_model:finetune:google/vit-base-patch16-224-in21k",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2024-02-16T06:06:30Z |
---
license: apache-2.0
base_model: google/vit-base-patch16-224-in21k
tags:
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
model-index:
- name: image_classification
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: imagefolder
type: imagefolder
config: default
split: train
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.4
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# image_classification
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 1.7071
- Accuracy: 0.4
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 10 | 1.8097 | 0.3438 |
| No log | 2.0 | 20 | 1.7289 | 0.3875 |
| No log | 3.0 | 30 | 1.7099 | 0.4 |
### Framework versions
- Transformers 4.36.2
- Pytorch 2.1.2+cpu
- Datasets 2.17.0
- Tokenizers 0.15.0
|
c-demartino/llama2-7b-chat-p-q4
|
c-demartino
| 2024-02-16T15:22:42Z | 4 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-02-15T16:21:40Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
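A generic loading sketch is shown below; nothing in it is confirmed by this card, and the repo id is taken from the card's name:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "c-demartino/llama2-7b-chat-p-q4"  # assumed repo id
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(repo, device_map="auto")

inputs = tokenizer("Hello, how are you?", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=50)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```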
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
brugmark/all-MiniLM-L6-v2-personal-project-default-2024-02-16
|
brugmark
| 2024-02-16T15:18:07Z | 113 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"bert",
"fill-mask",
"generated_from_trainer",
"base_model:sentence-transformers/all-MiniLM-L6-v2",
"base_model:finetune:sentence-transformers/all-MiniLM-L6-v2",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2024-02-16T12:33:37Z |
---
license: apache-2.0
base_model: sentence-transformers/all-MiniLM-L6-v2
tags:
- generated_from_trainer
model-index:
- name: all-MiniLM-L6-v2-personal-project-default-2024-02-16
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# all-MiniLM-L6-v2-personal-project-default-2024-02-16
This model is a fine-tuned version of [sentence-transformers/all-MiniLM-L6-v2](https://huggingface.co/sentence-transformers/all-MiniLM-L6-v2) on the None dataset.
It achieves the following results on the evaluation set:
- eval_loss: 11.1233
- eval_runtime: 6.7277
- eval_samples_per_second: 6.986
- eval_steps_per_second: 0.297
- step: 0
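As a usage sketch (the repo id is assumed from this card's name), the checkpoint can be queried through the fill-mask pipeline:
```python
from transformers import pipeline

# The base model is BERT-style, so [MASK] is the expected mask token.
unmasker = pipeline("fill-mask", model="brugmark/all-MiniLM-L6-v2-personal-project-default-2024-02-16")
print(unmasker("The quick brown fox jumps over the lazy [MASK]."))
```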
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu121
- Datasets 2.17.0
- Tokenizers 0.15.2
|
aldidwiputra9/emotion_classification
|
aldidwiputra9
| 2024-02-16T15:16:21Z | 178 | 0 |
transformers
|
[
"transformers",
"safetensors",
"vit",
"image-classification",
"generated_from_trainer",
"dataset:imagefolder",
"base_model:google/vit-base-patch16-224-in21k",
"base_model:finetune:google/vit-base-patch16-224-in21k",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2024-02-16T15:00:22Z |
---
license: apache-2.0
base_model: google/vit-base-patch16-224-in21k
tags:
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
model-index:
- name: emotion_classification
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: imagefolder
type: imagefolder
config: default
split: train
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.4625
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# emotion_classification
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 1.5853
- Accuracy: 0.4625
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 6
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 20 | 1.9252 | 0.2938 |
| No log | 2.0 | 40 | 1.7439 | 0.4562 |
| No log | 3.0 | 60 | 1.6389 | 0.425 |
| No log | 4.0 | 80 | 1.5862 | 0.475 |
| No log | 5.0 | 100 | 1.5477 | 0.4188 |
| No log | 6.0 | 120 | 1.5441 | 0.4437 |
### Framework versions
- Transformers 4.37.2
- Pytorch 2.2.0+cpu
- Datasets 2.17.0
- Tokenizers 0.15.2
|
Manojb/llama-finetune
|
Manojb
| 2024-02-16T15:13:42Z | 3 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"feature-extraction",
"arxiv:1910.09700",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
feature-extraction
| 2024-02-16T15:06:41Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
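A generic feature-extraction sketch is shown below; the card confirms none of it, and the repo id is taken from the card's name:
```python
import torch
from transformers import AutoModel, AutoTokenizer

repo = "Manojb/llama-finetune"  # assumed repo id
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModel.from_pretrained(repo)

inputs = tokenizer("Example sentence to embed.", return_tensors="pt")
with torch.no_grad():
    hidden = model(**inputs).last_hidden_state  # (batch, seq_len, hidden_size)
print(hidden.shape)
```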
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
jtatman/tinymistral-mediqa-248m
|
jtatman
| 2024-02-16T15:10:00Z | 10 | 0 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"medical",
"dataset:jtatman/medquad-medicalqa-wizdolalpaca-instruct",
"dataset:jtatman/medical_biological_instruction_format",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-12-13T09:29:56Z |
---
license: apache-2.0
library_name: transformers
tags:
- medical
datasets:
- jtatman/medquad-medicalqa-wizdolalpaca-instruct
- jtatman/medical_biological_instruction_format
---
This is an ongoing experiment in training and retraining boundaries.
The model is currently overtrained, deliberately so, in order to investigate paths out of overtraining.
This is purely an experiment in the depths and depravity of repetitive training. Don't bother messing around with it much.
|
jetaimejeteveux/vit-emotions-fp16
|
jetaimejeteveux
| 2024-02-16T15:08:47Z | 29 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"vit",
"image-classification",
"generated_from_trainer",
"dataset:imagefolder",
"base_model:google/vit-base-patch16-224-in21k",
"base_model:finetune:google/vit-base-patch16-224-in21k",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2024-02-16T07:09:53Z |
---
license: apache-2.0
base_model: google/vit-base-patch16-224-in21k
tags:
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
model-index:
- name: vit-emotions-fp16
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: imagefolder
type: imagefolder
config: default
split: train
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.9859375
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vit-emotions-fp16
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0725
- Accuracy: 0.9859
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 40 | 1.3965 | 0.4938 |
| No log | 2.0 | 80 | 1.4154 | 0.425 |
| No log | 3.0 | 120 | 1.3729 | 0.4562 |
| No log | 4.0 | 160 | 1.3532 | 0.4562 |
| No log | 5.0 | 200 | 1.2993 | 0.5062 |
| No log | 6.0 | 240 | 1.3438 | 0.4938 |
| No log | 7.0 | 280 | 1.3741 | 0.5 |
| No log | 8.0 | 320 | 1.5267 | 0.4313 |
| No log | 9.0 | 360 | 1.2778 | 0.5375 |
| No log | 10.0 | 400 | 1.3864 | 0.5062 |
| No log | 11.0 | 440 | 1.4221 | 0.4875 |
| No log | 12.0 | 480 | 1.5059 | 0.5062 |
| 0.7596 | 13.0 | 520 | 1.5004 | 0.5188 |
| 0.7596 | 14.0 | 560 | 1.4539 | 0.5125 |
| 0.7596 | 15.0 | 600 | 1.5219 | 0.5375 |
| 0.7596 | 16.0 | 640 | 1.6179 | 0.4813 |
| 0.7596 | 17.0 | 680 | 1.4562 | 0.55 |
| 0.7596 | 18.0 | 720 | 1.5473 | 0.4875 |
| 0.7596 | 19.0 | 760 | 1.5820 | 0.5188 |
| 0.7596 | 20.0 | 800 | 1.5877 | 0.5125 |
| 0.7596 | 21.0 | 840 | 1.4965 | 0.55 |
| 0.7596 | 22.0 | 880 | 1.5947 | 0.5375 |
| 0.7596 | 23.0 | 920 | 1.4672 | 0.5437 |
| 0.7596 | 24.0 | 960 | 1.7930 | 0.5 |
| 0.2328 | 25.0 | 1000 | 1.8033 | 0.4875 |
| 0.2328 | 26.0 | 1040 | 1.7193 | 0.5312 |
| 0.2328 | 27.0 | 1080 | 1.8072 | 0.4813 |
| 0.2328 | 28.0 | 1120 | 1.6767 | 0.5437 |
| 0.2328 | 29.0 | 1160 | 1.6138 | 0.5625 |
| 0.2328 | 30.0 | 1200 | 1.8484 | 0.4938 |
| 0.2328 | 31.0 | 1240 | 1.7691 | 0.5062 |
| 0.2328 | 32.0 | 1280 | 1.7797 | 0.5062 |
| 0.2328 | 33.0 | 1320 | 1.7575 | 0.5375 |
| 0.2328 | 34.0 | 1360 | 1.7550 | 0.5062 |
| 0.2328 | 35.0 | 1400 | 1.7933 | 0.5 |
| 0.2328 | 36.0 | 1440 | 1.7056 | 0.5563 |
| 0.2328 | 37.0 | 1480 | 1.8739 | 0.4938 |
| 0.1517 | 38.0 | 1520 | 1.7637 | 0.5188 |
| 0.1517 | 39.0 | 1560 | 1.7178 | 0.5563 |
| 0.1517 | 40.0 | 1600 | 1.9114 | 0.5 |
| 0.1517 | 41.0 | 1640 | 1.8453 | 0.5188 |
| 0.1517 | 42.0 | 1680 | 1.7571 | 0.5625 |
| 0.1517 | 43.0 | 1720 | 1.7757 | 0.5437 |
| 0.1517 | 44.0 | 1760 | 1.8389 | 0.5125 |
| 0.1517 | 45.0 | 1800 | 1.8109 | 0.5375 |
| 0.1517 | 46.0 | 1840 | 1.8537 | 0.4688 |
| 0.1517 | 47.0 | 1880 | 1.7422 | 0.5563 |
| 0.1517 | 48.0 | 1920 | 1.7807 | 0.5687 |
| 0.1517 | 49.0 | 1960 | 1.8111 | 0.525 |
| 0.1045 | 50.0 | 2000 | 1.9057 | 0.5125 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu121
- Datasets 2.17.0
- Tokenizers 0.15.2
|
kshantam9/lora-flan-t5-xxl
|
kshantam9
| 2024-02-16T15:08:24Z | 1 | 0 |
peft
|
[
"peft",
"tensorboard",
"safetensors",
"generated_from_trainer",
"base_model:google/flan-t5-large",
"base_model:adapter:google/flan-t5-large",
"license:apache-2.0",
"region:us"
] | null | 2024-02-16T15:08:22Z |
---
license: apache-2.0
library_name: peft
tags:
- generated_from_trainer
base_model: google/flan-t5-large
model-index:
- name: lora-flan-t5-xxl
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# lora-flan-t5-xxl
This model is a fine-tuned version of [google/flan-t5-large](https://huggingface.co/google/flan-t5-large) on an unknown dataset.
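A loading sketch for the adapter (assuming it applies on top of the base model listed above; the card documents no usage):
```python
from peft import PeftModel
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

base = AutoModelForSeq2SeqLM.from_pretrained("google/flan-t5-large")
model = PeftModel.from_pretrained(base, "kshantam9/lora-flan-t5-xxl")  # adapter repo from this card
tokenizer = AutoTokenizer.from_pretrained("google/flan-t5-large")

inputs = tokenizer("Summarize: the quick brown fox jumps over the lazy dog.", return_tensors="pt")
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=30)[0], skip_special_tokens=True))
```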
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
### Framework versions
- PEFT 0.8.2
- Transformers 4.37.2
- Pytorch 2.2.0+cu118
- Datasets 2.17.0
- Tokenizers 0.15.2
|
ishanarang/my_awesome_opus_books_model
|
ishanarang
| 2024-02-16T15:07:36Z | 90 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"t5",
"text2text-generation",
"generated_from_trainer",
"base_model:google-t5/t5-small",
"base_model:finetune:google-t5/t5-small",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2024-02-16T15:07:07Z |
---
license: apache-2.0
base_model: t5-small
tags:
- generated_from_trainer
metrics:
- bleu
model-index:
- name: my_awesome_opus_books_model
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# my_awesome_opus_books_model
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.6001
- Bleu: 5.7668
- Gen Len: 17.5492
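A usage sketch (repo id assumed from this card's name); the language pair and task prefix below follow the usual T5 opus_books recipe and are assumptions, since the card does not state them:
```python
from transformers import pipeline

translator = pipeline("text2text-generation", model="ishanarang/my_awesome_opus_books_model")
# Prefix assumed: English -> French translation in T5's prompt style.
print(translator("translate English to French: The weather is lovely today."))
```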
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len |
|:-------------:|:-----:|:-----:|:---------------:|:------:|:-------:|
| 1.8543 | 1.0 | 6355 | 1.6245 | 5.5907 | 17.5634 |
| 1.8391 | 2.0 | 12710 | 1.6001 | 5.7668 | 17.5492 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu121
- Datasets 2.17.0
- Tokenizers 0.15.2
|
Subhaaannn/image_classification
|
Subhaaannn
| 2024-02-16T15:03:59Z | 177 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"vit",
"image-classification",
"generated_from_trainer",
"dataset:imagefolder",
"base_model:google/vit-base-patch16-224-in21k",
"base_model:finetune:google/vit-base-patch16-224-in21k",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2024-02-16T12:13:33Z |
---
license: apache-2.0
base_model: google/vit-base-patch16-224-in21k
tags:
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
model-index:
- name: image_classification
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: imagefolder
type: imagefolder
config: default
split: train
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.575
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# image_classification
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 1.4622
- Accuracy: 0.575
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 40 | 1.5769 | 0.4625 |
| No log | 2.0 | 80 | 1.5299 | 0.525 |
| No log | 3.0 | 120 | 1.4961 | 0.55 |
| No log | 4.0 | 160 | 1.5013 | 0.5188 |
| No log | 5.0 | 200 | 1.4440 | 0.55 |
| No log | 6.0 | 240 | 1.4333 | 0.5687 |
| No log | 7.0 | 280 | 1.4314 | 0.5437 |
| No log | 8.0 | 320 | 1.4307 | 0.5437 |
| No log | 9.0 | 360 | 1.4264 | 0.5125 |
| No log | 10.0 | 400 | 1.4369 | 0.525 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu121
- Datasets 2.17.0
- Tokenizers 0.15.2
|
hfayuwardana/image_classification
|
hfayuwardana
| 2024-02-16T14:59:13Z | 6 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"vit",
"image-classification",
"generated_from_trainer",
"dataset:imagefolder",
"base_model:google/vit-base-patch16-224-in21k",
"base_model:finetune:google/vit-base-patch16-224-in21k",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2024-02-12T18:25:24Z |
---
license: apache-2.0
base_model: google/vit-base-patch16-224-in21k
tags:
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
model-index:
- name: image_classification
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: imagefolder
type: imagefolder
config: default
split: train
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.55
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# image_classification
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2333
- Accuracy: 0.55
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-06
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 15
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 40 | 1.3397 | 0.4938 |
| No log | 2.0 | 80 | 1.3036 | 0.5312 |
| No log | 3.0 | 120 | 1.3684 | 0.5125 |
| No log | 4.0 | 160 | 1.3877 | 0.5 |
| No log | 5.0 | 200 | 1.2441 | 0.5625 |
| No log | 6.0 | 240 | 1.3767 | 0.5 |
| No log | 7.0 | 280 | 1.2784 | 0.5437 |
| No log | 8.0 | 320 | 1.3191 | 0.5188 |
| No log | 9.0 | 360 | 1.3417 | 0.5062 |
| No log | 10.0 | 400 | 1.3411 | 0.5125 |
| No log | 11.0 | 440 | 1.3460 | 0.5062 |
| No log | 12.0 | 480 | 1.4155 | 0.5 |
| 0.483 | 13.0 | 520 | 1.2887 | 0.5375 |
| 0.483 | 14.0 | 560 | 1.3648 | 0.5 |
| 0.483 | 15.0 | 600 | 1.3337 | 0.5 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu121
- Datasets 2.17.0
- Tokenizers 0.15.1
|
ElderlyDed/MistPhotoCheck5K
|
ElderlyDed
| 2024-02-16T14:59:10Z | 4 | 0 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"autotrain",
"conversational",
"license:other",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-02-16T14:54:14Z |
---
tags:
- autotrain
- text-generation
widget:
- text: "I love AutoTrain because "
license: other
---
# Model Trained Using AutoTrain
This model was trained using AutoTrain. For more information, please visit [AutoTrain](https://hf.co/docs/autotrain).
# Usage
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model_path = "PATH_TO_THIS_REPO"
tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModelForCausalLM.from_pretrained(
model_path,
device_map="auto",
torch_dtype='auto'
).eval()
# Prompt content: "hi"
messages = [
{"role": "user", "content": "hi"}
]
input_ids = tokenizer.apply_chat_template(conversation=messages, tokenize=True, add_generation_prompt=True, return_tensors='pt')
output_ids = model.generate(input_ids.to(model.device))  # send inputs to the model's device (works with device_map="auto")
response = tokenizer.decode(output_ids[0][input_ids.shape[1]:], skip_special_tokens=True)
# Model response: "Hello! How can I assist you today?"
print(response)
```
|
DragosGorduza/FRPile_GPL_test_pipeline_all-mpnet-base-v2-MISTRAL-notrescaled_20000
|
DragosGorduza
| 2024-02-16T14:58:06Z | 53 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"safetensors",
"mpnet",
"feature-extraction",
"sentence-similarity",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2024-02-16T14:57:16Z |
---
library_name: sentence-transformers
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
---
# {MODEL_NAME}
This is a [sentence-transformers](https://www.SBERT.net) model: it maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('{MODEL_NAME}')
embeddings = model.encode(sentences)
print(embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME})
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 21625 with parameters:
```
{'batch_size': 48, 'sampler': 'torch.utils.data.sampler.SequentialSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`gpl.toolkit.loss.MarginDistillationLoss`
Parameters of the fit()-Method:
```
{
"epochs": 1,
"evaluation_steps": 0,
"evaluator": "NoneType",
"max_grad_norm": 1,
"optimizer_class": "<class 'torch.optim.adamw.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": 30000,
"warmup_steps": 1000,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 350, 'do_lower_case': False}) with Transformer model: MPNetModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False})
(2): Normalize()
)
```
## Citing & Authors
<!--- Describe where people can find more information -->
|
aljaziz/Reinforce-Pixelcopter-PLE-v0
|
aljaziz
| 2024-02-16T14:58:03Z | 0 | 0 | null |
[
"Pixelcopter-PLE-v0",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] |
reinforcement-learning
| 2024-02-16T14:30:06Z |
---
tags:
- Pixelcopter-PLE-v0
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-Pixelcopter-PLE-v0
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Pixelcopter-PLE-v0
type: Pixelcopter-PLE-v0
metrics:
- type: mean_reward
value: 22.40 +/- 19.01
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **Pixelcopter-PLE-v0**
This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0**.
To learn how to use this model and train your own, check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
Eyuel/rl_course_moonlander
|
Eyuel
| 2024-02-16T14:55:50Z | 2 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2024-02-16T14:55:29Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 253.04 +/- 13.23
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename below is an assumption, not stated in this card):
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# Download the trained agent from the Hub and load it with SB3.
checkpoint = load_from_hub("Eyuel/rl_course_moonlander", "ppo-LunarLander-v2.zip")  # filename assumed
model = PPO.load(checkpoint)
```
|
bastistrauss/t5-small-finetuned-DEPlain
|
bastistrauss
| 2024-02-16T14:52:37Z | 91 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"t5",
"text2text-generation",
"generated_from_trainer",
"base_model:google-t5/t5-small",
"base_model:finetune:google-t5/t5-small",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2024-02-16T14:52:03Z |
---
license: apache-2.0
base_model: t5-small
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: t5-small-finetuned-DEPlain
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-small-finetuned-DEPlain
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.4349
- Rouge1: 55.9974
- Rouge2: 33.5645
- Rougel: 49.3408
- Rougelsum: 50.3503
- Gen Len: 16.7644
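A generation sketch (repo id assumed from this card's name; DEPlain targets German sentence simplification, and the input format below is an assumption):
```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

repo = "bastistrauss/t5-small-finetuned-DEPlain"  # assumed repo id
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForSeq2SeqLM.from_pretrained(repo)

text = "Die Implementierung erfolgt unter Berücksichtigung sämtlicher regulatorischer Vorgaben."
ids = tokenizer(text, return_tensors="pt").input_ids
print(tokenizer.decode(model.generate(ids, num_beams=4, max_new_tokens=64)[0], skip_special_tokens=True))
```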
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 15
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:-----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:|
| 1.8141 | 1.0 | 667 | 1.5924 | 55.8422 | 33.3789 | 49.0964 | 50.0345 | 16.7644 |
| 1.7476 | 2.0 | 1334 | 1.5489 | 55.8013 | 33.356 | 48.9789 | 49.9383 | 16.8058 |
| 1.6973 | 3.0 | 2001 | 1.5193 | 55.7584 | 33.2723 | 48.9591 | 49.8935 | 16.7725 |
| 1.6513 | 4.0 | 2668 | 1.4988 | 55.9388 | 33.5848 | 49.2591 | 50.1911 | 16.7823 |
| 1.6271 | 5.0 | 3335 | 1.4846 | 55.8441 | 33.4064 | 49.2314 | 50.2123 | 16.7994 |
| 1.6048 | 6.0 | 4002 | 1.4735 | 55.9061 | 33.4165 | 49.207 | 50.1571 | 16.8107 |
| 1.5856 | 7.0 | 4669 | 1.4647 | 55.9145 | 33.4539 | 49.2251 | 50.1857 | 16.7953 |
| 1.5711 | 8.0 | 5336 | 1.4548 | 55.9216 | 33.4538 | 49.2822 | 50.2536 | 16.7628 |
| 1.5586 | 9.0 | 6003 | 1.4504 | 55.9937 | 33.5651 | 49.2948 | 50.2935 | 16.7807 |
| 1.548 | 10.0 | 6670 | 1.4442 | 55.9368 | 33.5696 | 49.2953 | 50.292 | 16.7506 |
| 1.5394 | 11.0 | 7337 | 1.4409 | 56.0439 | 33.6125 | 49.3406 | 50.3633 | 16.7628 |
| 1.5358 | 12.0 | 8004 | 1.4380 | 56.0279 | 33.6056 | 49.3376 | 50.3537 | 16.7579 |
| 1.5252 | 13.0 | 8671 | 1.4357 | 55.9468 | 33.4637 | 49.2525 | 50.2542 | 16.7571 |
| 1.5225 | 14.0 | 9338 | 1.4353 | 55.9919 | 33.5532 | 49.3214 | 50.3302 | 16.766 |
| 1.523 | 15.0 | 10005 | 1.4349 | 55.9974 | 33.5645 | 49.3408 | 50.3503 | 16.7644 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu121
- Datasets 2.17.0
- Tokenizers 0.15.2
|
ThomasEgense/andreas_model15_again
|
ThomasEgense
| 2024-02-16T14:51:13Z | 0 | 1 |
diffusers
|
[
"diffusers",
"tensorboard",
"safetensors",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"dreambooth",
"base_model:runwayml/stable-diffusion-v1-5",
"base_model:finetune:runwayml/stable-diffusion-v1-5",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2024-02-16T12:44:03Z |
---
license: creativeml-openrail-m
base_model: runwayml/stable-diffusion-v1-5
instance_prompt: a photo of andreasegense person
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- dreambooth
inference: true
---
# DreamBooth - ThomasEgense/andreas_model15_again
This is a DreamBooth model derived from runwayml/stable-diffusion-v1-5. The weights were trained on the instance prompt "a photo of andreasegense person" using [DreamBooth](https://dreambooth.github.io/).
You can find some example images in the following.
DreamBooth for the text encoder was enabled: False.
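No inference snippet is included in the card. A minimal sketch with 🤗 Diffusers, assuming a CUDA GPU is available and that fp16 weights fit in memory, might look like this:
```python
import torch
from diffusers import StableDiffusionPipeline

# Load the DreamBooth-tuned pipeline from the Hub (fp16 assumed, to fit consumer GPUs).
pipe = StableDiffusionPipeline.from_pretrained(
    "ThomasEgense/andreas_model15_again", torch_dtype=torch.float16
).to("cuda")

# Use the instance prompt the model was trained on.
image = pipe("a photo of andreasegense person").images[0]
image.save("andreasegense.png")
```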
|
ghozyulhaq/image_classification
|
ghozyulhaq
| 2024-02-16T14:47:42Z | 177 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"vit",
"image-classification",
"generated_from_trainer",
"dataset:imagefolder",
"base_model:google/vit-base-patch16-224-in21k",
"base_model:finetune:google/vit-base-patch16-224-in21k",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2024-02-16T13:29:01Z |
---
license: apache-2.0
base_model: google/vit-base-patch16-224-in21k
tags:
- generated_from_trainer
datasets:
- imagefolder
model-index:
- name: image_classification
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# image_classification
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- eval_loss: 1.6689
- eval_accuracy: 0.375
- eval_runtime: 118.7672
- eval_samples_per_second: 1.347
- eval_steps_per_second: 0.084
- step: 0
## Model description
More information needed
## Intended uses & limitations
More information needed
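Usage is not documented in this card. As a hedged sketch, the fine-tuned ViT checkpoint should load through the standard `image-classification` pipeline (the example image path is a placeholder):
```python
from transformers import pipeline

classifier = pipeline("image-classification", model="ghozyulhaq/image_classification")

# Pass a local file path, a URL, or a PIL.Image; "example.jpg" is just a placeholder.
for prediction in classifier("example.jpg"):
    print(prediction["label"], round(prediction["score"], 3))
```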
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu121
- Datasets 2.17.0
- Tokenizers 0.15.2
|
nrshoudi/Whisper-tiny-Jibbali_lang
|
nrshoudi
| 2024-02-16T14:46:40Z | 0 | 0 | null |
[
"tensorboard",
"safetensors",
"generated_from_trainer",
"base_model:openai/whisper-tiny",
"base_model:finetune:openai/whisper-tiny",
"license:apache-2.0",
"region:us"
] | null | 2024-02-16T14:46:39Z |
---
license: apache-2.0
base_model: openai/whisper-tiny
tags:
- generated_from_trainer
model-index:
- name: Whisper-tiny-Jibbali_lang
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Whisper-tiny-Jibbali_lang
This model is a fine-tuned version of [openai/whisper-tiny](https://huggingface.co/openai/whisper-tiny) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0122
## Model description
More information needed
## Intended uses & limitations
More information needed
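No inference example is given. Assuming the repository also contains the Whisper processor/tokenizer files, the standard `automatic-speech-recognition` pipeline should work (the audio path is a placeholder):
```python
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="nrshoudi/Whisper-tiny-Jibbali_lang")

# 16 kHz mono audio is the usual Whisper input; "sample.wav" is a placeholder path.
print(asr("sample.wav")["text"])
```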
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.001
- train_batch_size: 6
- eval_batch_size: 6
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 50
- num_epochs: 10
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.0229 | 1.0 | 300 | 0.0340 |
| 0.0152 | 2.0 | 600 | 0.0215 |
| 0.0052 | 3.0 | 900 | 0.0157 |
| 0.0103 | 4.0 | 1200 | 0.0144 |
| 0.0036 | 5.0 | 1500 | 0.0130 |
| 0.0035 | 6.0 | 1800 | 0.0132 |
| 0.0017 | 7.0 | 2100 | 0.0112 |
| 0.0009 | 8.0 | 2400 | 0.0126 |
| 0.0029 | 9.0 | 2700 | 0.0119 |
| 0.0016 | 10.0 | 3000 | 0.0122 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu121
- Datasets 2.17.0
- Tokenizers 0.15.2
|
AnranZZ/ppo-LunarLander-v2
|
AnranZZ
| 2024-02-16T14:37:55Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2024-02-16T14:37:36Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 255.72 +/- 24.33
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
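The snippet above is the library's template placeholder. A completed sketch might look as follows; the checkpoint filename is an assumption based on the usual course naming, and `gymnasium[box2d]` plus `huggingface_sb3` are assumed to be installed.
```python
import gymnasium as gym
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO
from stable_baselines3.common.evaluation import evaluate_policy

# Filename is assumed; check the repository's file list if loading fails.
checkpoint = load_from_hub(repo_id="AnranZZ/ppo-LunarLander-v2", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)

# On newer gymnasium releases the environment id may be "LunarLander-v3".
env = gym.make("LunarLander-v2")
mean_reward, std_reward = evaluate_policy(model, env, n_eval_episodes=10)
print(f"mean_reward={mean_reward:.2f} +/- {std_reward:.2f}")
```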
|
JediSim/mistral-lyrics-finetune
|
JediSim
| 2024-02-16T14:36:41Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"generated_from_trainer",
"base_model:mistralai/Mistral-7B-v0.1",
"base_model:adapter:mistralai/Mistral-7B-v0.1",
"license:apache-2.0",
"region:us"
] | null | 2024-02-16T11:44:25Z |
---
license: apache-2.0
library_name: peft
tags:
- generated_from_trainer
base_model: mistralai/Mistral-7B-v0.1
model-index:
- name: mistral-lyrics-finetune
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mistral-lyrics-finetune
This model is a fine-tuned version of [mistralai/Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 3.9357
## Model description
More information needed
## Intended uses & limitations
More information needed
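This repository holds a PEFT (LoRA) adapter rather than full model weights, so one plausible loading path (a sketch, not the author's documented procedure) is to attach the adapter to the base `mistralai/Mistral-7B-v0.1` checkpoint:
```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "mistralai/Mistral-7B-v0.1"
tokenizer = AutoTokenizer.from_pretrained(base_id)
base_model = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype=torch.bfloat16, device_map="auto")

# Attach the LoRA adapter published in this repository on top of the base model.
model = PeftModel.from_pretrained(base_model, "JediSim/mistral-lyrics-finetune")

inputs = tokenizer("Write a verse about the open road:", return_tensors="pt").to(base_model.device)
output = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```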
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2.5e-05
- train_batch_size: 2
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1
- training_steps: 500
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.8901 | 1.0 | 25 | 2.0092 |
| 1.5447 | 2.0 | 50 | 2.0093 |
| 1.2415 | 3.0 | 75 | 2.1479 |
| 0.8255 | 4.0 | 100 | 2.3764 |
| 0.4885 | 5.0 | 125 | 2.5386 |
| 0.2533 | 6.0 | 150 | 2.7441 |
| 0.1551 | 7.0 | 175 | 2.8726 |
| 0.0971 | 8.0 | 200 | 3.0833 |
| 0.0668 | 9.0 | 225 | 3.2275 |
| 0.047 | 10.0 | 250 | 3.3403 |
| 0.0353 | 11.0 | 275 | 3.3712 |
| 0.0332 | 12.0 | 300 | 3.4045 |
| 0.0256 | 13.0 | 325 | 3.5834 |
| 0.0231 | 14.0 | 350 | 3.5888 |
| 0.0204 | 15.0 | 375 | 3.6828 |
| 0.021 | 16.0 | 400 | 3.7834 |
| 0.0165 | 17.0 | 425 | 3.8463 |
| 0.0162 | 18.0 | 450 | 3.8980 |
| 0.0153 | 19.0 | 475 | 3.9315 |
| 0.0144 | 20.0 | 500 | 3.9357 |
### Framework versions
- PEFT 0.8.2
- Transformers 4.37.0
- Pytorch 2.1.2
- Datasets 2.1.0
- Tokenizers 0.15.1
|
hythyt/Taxi-v3
|
hythyt
| 2024-02-16T14:33:13Z | 0 | 0 | null |
[
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2024-02-16T14:33:12Z |
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: Taxi-v3
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.50 +/- 2.73
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.
## Usage
```python
import gymnasium as gym  # the course notebooks use gymnasium; classic gym also works here

# load_from_hub is the helper defined in the Deep RL Course notebook (it downloads and unpickles the Q-table dict).
model = load_from_hub(repo_id="hythyt/Taxi-v3", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
Tann-dev/sex-chat-zizi
|
Tann-dev
| 2024-02-16T14:19:05Z | 0 | 1 |
transformers
|
[
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-02-16T14:19:03Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
mathieu1256/layoutlmv3-test
|
mathieu1256
| 2024-02-16T14:17:27Z | 89 | 0 |
transformers
|
[
"transformers",
"safetensors",
"layoutlmv3",
"token-classification",
"generated_from_trainer",
"dataset:cord",
"base_model:microsoft/layoutlmv3-base",
"base_model:finetune:microsoft/layoutlmv3-base",
"license:cc-by-nc-sa-4.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2024-02-16T14:17:00Z |
---
license: cc-by-nc-sa-4.0
base_model: microsoft/layoutlmv3-base
tags:
- generated_from_trainer
datasets:
- cord
model-index:
- name: layoutlmv3-test
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# layoutlmv3-test
This model is a fine-tuned version of [microsoft/layoutlmv3-base](https://huggingface.co/microsoft/layoutlmv3-base) on the cord dataset.
It achieves the following results on the evaluation set:
- Loss: 1.3031
## Model description
More information needed
## Intended uses & limitations
More information needed
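No usage snippet is provided. A rough sketch follows, under two assumptions: the repository includes the LayoutLMv3 processor files, and `pytesseract`/Tesseract are installed, since the default processor runs OCR on the input image.
```python
from PIL import Image
from transformers import AutoModelForTokenClassification, AutoProcessor

repo_id = "mathieu1256/layoutlmv3-test"
processor = AutoProcessor.from_pretrained(repo_id)
model = AutoModelForTokenClassification.from_pretrained(repo_id)

# "receipt.png" is a placeholder document image; the default processor OCRs it with Tesseract.
image = Image.open("receipt.png").convert("RGB")
encoding = processor(image, return_tensors="pt")

outputs = model(**encoding)
predicted_ids = outputs.logits.argmax(-1).squeeze().tolist()
print([model.config.id2label[i] for i in predicted_ids])
```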
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 1
### Training results
### Framework versions
- Transformers 4.37.2
- Pytorch 2.2.0
- Datasets 2.17.0
- Tokenizers 0.15.2
|
omniquad/ent_val_personal_16bit
|
omniquad
| 2024-02-16T14:12:32Z | 93 | 0 |
transformers
|
[
"transformers",
"pytorch",
"llama",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"sft",
"en",
"base_model:unsloth/tinyllama-bnb-4bit",
"base_model:finetune:unsloth/tinyllama-bnb-4bit",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-02-16T14:11:04Z |
---
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
- sft
base_model: unsloth/tinyllama-bnb-4bit
---
# Uploaded model
- **Developed by:** omniquad
- **License:** apache-2.0
- **Finetuned from model :** unsloth/tinyllama-bnb-4bit
This Llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
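The card does not show how to load the exported 16-bit weights. Since the repository is tagged for `transformers` text generation, a plain Transformers sketch (an assumption, not Unsloth's own fast-inference path) would be:
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "omniquad/ent_val_personal_16bit"
tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(repo_id, torch_dtype=torch.float16, device_map="auto")

inputs = tokenizer("Hello, my name is", return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```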
|
omniquad/ent_val_personal_lora_model
|
omniquad
| 2024-02-16T14:11:03Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-02-16T14:10:54Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
aidonuts/forthright-smooch-141-s4000
|
aidonuts
| 2024-02-16T14:09:48Z | 3 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-02-16T14:08:50Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
aljaziz/Reinforce-CartPole-v1
|
aljaziz
| 2024-02-16T14:05:51Z | 0 | 0 | null |
[
"CartPole-v1",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] |
reinforcement-learning
| 2024-02-16T14:05:47Z |
---
tags:
- CartPole-v1
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-CartPole-v1
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
metrics:
- type: mean_reward
value: 173.30 +/- 47.72
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **CartPole-v1**
This is a trained model of a **Reinforce** agent playing **CartPole-v1**.
To learn how to use this model and train your own, check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
mathreader/ast-finetuned-audioset-10-10-0.4593-finetuned-gtzan
|
mathreader
| 2024-02-16T14:01:16Z | 149 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"audio-spectrogram-transformer",
"audio-classification",
"generated_from_trainer",
"dataset:marsyas/gtzan",
"base_model:MIT/ast-finetuned-audioset-10-10-0.4593",
"base_model:finetune:MIT/ast-finetuned-audioset-10-10-0.4593",
"license:bsd-3-clause",
"model-index",
"endpoints_compatible",
"region:us"
] |
audio-classification
| 2024-02-16T13:30:38Z |
---
license: bsd-3-clause
base_model: MIT/ast-finetuned-audioset-10-10-0.4593
tags:
- generated_from_trainer
datasets:
- marsyas/gtzan
metrics:
- accuracy
model-index:
- name: ast-finetuned-audioset-10-10-0.4593-finetuned-gtzan
results:
- task:
name: Audio Classification
type: audio-classification
dataset:
name: GTZAN
type: marsyas/gtzan
config: all
split: train
args: all
metrics:
- name: Accuracy
type: accuracy
value: 0.89
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ast-finetuned-audioset-10-10-0.4593-finetuned-gtzan
This model is a fine-tuned version of [MIT/ast-finetuned-audioset-10-10-0.4593](https://huggingface.co/MIT/ast-finetuned-audioset-10-10-0.4593) on the GTZAN dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3030
- Accuracy: 0.89
## Model description
More information needed
## Intended uses & limitations
More information needed
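Usage is not documented here. A minimal hedged sketch with the `audio-classification` pipeline (the clip path is a placeholder; GTZAN clips are 30-second music excerpts):
```python
from transformers import pipeline

classifier = pipeline(
    "audio-classification",
    model="mathreader/ast-finetuned-audioset-10-10-0.4593-finetuned-gtzan",
)

# "song.wav" is a placeholder; any audio file readable by ffmpeg/librosa works.
for prediction in classifier("song.wav"):
    print(prediction["label"], round(prediction["score"], 3))
```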
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.9203 | 1.0 | 112 | 0.6901 | 0.82 |
| 0.2906 | 1.99 | 224 | 0.4069 | 0.86 |
| 0.3416 | 3.0 | 337 | 0.3651 | 0.84 |
| 0.2143 | 4.0 | 449 | 0.3208 | 0.89 |
| 0.0052 | 4.99 | 561 | 0.3180 | 0.88 |
| 0.0037 | 6.0 | 674 | 0.3233 | 0.88 |
| 0.0011 | 6.99 | 786 | 0.2975 | 0.9 |
| 0.0011 | 8.0 | 899 | 0.3200 | 0.88 |
| 0.0325 | 9.0 | 1011 | 0.3028 | 0.89 |
| 0.0008 | 9.97 | 1120 | 0.3030 | 0.89 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu121
- Datasets 2.17.0
- Tokenizers 0.15.2
|
xwvzr/image_classification
|
xwvzr
| 2024-02-16T13:59:44Z | 177 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"vit",
"image-classification",
"generated_from_trainer",
"dataset:imagefolder",
"base_model:google/vit-base-patch16-224-in21k",
"base_model:finetune:google/vit-base-patch16-224-in21k",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2024-02-16T07:06:23Z |
---
license: apache-2.0
base_model: google/vit-base-patch16-224-in21k
tags:
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
model-index:
- name: image_classification
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: imagefolder
type: imagefolder
config: default
split: train
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.3875
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# image_classification
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 1.7477
- Accuracy: 0.3875
## Model description
More information needed
## Intended uses & limitations
More information needed
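No usage example is given. One hedged way to run the checkpoint is with the explicit image processor and model classes (the image path is a placeholder):
```python
import torch
from PIL import Image
from transformers import AutoImageProcessor, AutoModelForImageClassification

repo_id = "xwvzr/image_classification"
processor = AutoImageProcessor.from_pretrained(repo_id)
model = AutoModelForImageClassification.from_pretrained(repo_id)

image = Image.open("example.jpg").convert("RGB")  # placeholder input image
inputs = processor(images=image, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits
print(model.config.id2label[logits.argmax(-1).item()])
```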
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 40 | 1.9458 | 0.3625 |
| No log | 2.0 | 80 | 1.7437 | 0.4188 |
| No log | 3.0 | 120 | 1.6751 | 0.4 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu121
- Datasets 2.17.0
- Tokenizers 0.15.2
|