Schema of the model-metadata dump sampled below. String and list columns report the minimum and maximum observed lengths, timestamp columns the earliest and latest values:

| Column | Type | Min | Max |
|:---|:---|:---|:---|
| `modelId` | string | 5 chars | 139 chars |
| `author` | string | 2 chars | 42 chars |
| `last_modified` | timestamp[us, tz=UTC] | 2020-02-15 11:33:14 | 2025-09-03 18:30:32 |
| `downloads` | int64 | 0 | 223M |
| `likes` | int64 | 0 | 11.7k |
| `library_name` | string (537 distinct values) | n/a | n/a |
| `tags` | list | 1 item | 4.05k items |
| `pipeline_tag` | string (55 distinct values) | n/a | n/a |
| `createdAt` | timestamp[us, tz=UTC] | 2022-03-02 23:29:04 | 2025-09-03 18:30:19 |
| `card` | string | 11 chars | 1.01M chars |
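As a quick illustration of working with this schema, a dump like this can be queried with pandas; the file name below is a placeholder, since the source of this extract is not named.

```python
import pandas as pd

# Hypothetical file name; the actual export of this dump is not named here.
df = pd.read_parquet("models_metadata.parquet")

# Most-downloaded text-generation models in the sample
top = (
    df[df["pipeline_tag"] == "text-generation"]
    .sort_values("downloads", ascending=False)
    .head(10)[["modelId", "author", "downloads", "likes"]]
)
print(top.to_string(index=False))
```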
## chillies/mistral-7b-vn-vi-alpaca

- **Author:** chillies
- **Library:** transformers · **Pipeline tag:** none
- **Downloads:** 0 · **Likes:** 0
- **Created:** 2024-05-01T18:08:10Z · **Last modified:** 2024-05-01T18:08:15Z
- **Tags:** [ "transformers", "safetensors", "text-generation-inference", "unsloth", "mistral", "trl", "en", "base_model:unsloth/OpenHermes-2.5-Mistral-7B-bnb-4bit", "base_model:finetune:unsloth/OpenHermes-2.5-Mistral-7B-bnb-4bit", "license:apache-2.0", "endpoints_compatible", "region:us" ]
**Card:**

```yaml
---
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- mistral
- trl
base_model: unsloth/OpenHermes-2.5-Mistral-7B-bnb-4bit
---
```

# Uploaded model

- **Developed by:** chillies
- **License:** apache-2.0
- **Finetuned from model:** unsloth/OpenHermes-2.5-Mistral-7B-bnb-4bit

This mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.

[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
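The card ships no usage code; a minimal generation sketch follows, assuming the repository contains merged weights loadable directly with transformers (if it holds only LoRA adapters, loading via PEFT as in a later example would be needed).

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Assumption: the repo holds full merged weights usable directly with transformers.
repo = "chillies/mistral-7b-vn-vi-alpaca"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(repo, device_map="auto")

prompt = "Xin chào! Bạn có thể giới thiệu về bản thân không?"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```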
## RichardErkhov/kalytm_-_nous-3-4bits

- **Author:** RichardErkhov
- **Library:** transformers · **Pipeline tag:** text-generation
- **Downloads:** 5 · **Likes:** 0
- **Created:** 2024-05-01T18:06:11Z · **Last modified:** 2024-05-01T18:08:11Z
- **Tags:** [ "transformers", "safetensors", "stablelm", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "4-bit", "bitsandbytes", "region:us" ]
**Card:**

Quantization made by Richard Erkhov.

[Github](https://github.com/RichardErkhov) · [Discord](https://discord.gg/pvy7H8DZMG) · [Request more models](https://github.com/RichardErkhov/quant_request)

nous-3 - bnb 4bits
- Model creator: https://huggingface.co/kalytm/
- Original model: https://huggingface.co/kalytm/nous-3/

Original model description:

```yaml
---
library_name: transformers
tags: []
---
```

# Model Card for Model ID

<!-- Provide a quick summary of what the model is/does. -->

## Model Details

### Model Description

<!-- Provide a longer summary of what this model is. -->

This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.

- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]

### Model Sources [optional]

<!-- Provide the basic links for the model. -->

- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]

## Uses

<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->

### Direct Use

<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->

[More Information Needed]

### Downstream Use [optional]

<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->

[More Information Needed]

### Out-of-Scope Use

<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->

[More Information Needed]

## Bias, Risks, and Limitations

<!-- This section is meant to convey both technical and sociotechnical limitations. -->

[More Information Needed]

### Recommendations

<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->

Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.

## How to Get Started with the Model

Use the code below to get started with the model.

[More Information Needed]

## Training Details

### Training Data

<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->

[More Information Needed]

### Training Procedure

<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->

#### Preprocessing [optional]

[More Information Needed]

#### Training Hyperparameters

- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->

#### Speeds, Sizes, Times [optional]

<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->

[More Information Needed]

## Evaluation

<!-- This section describes the evaluation protocols and provides the results. -->

### Testing Data, Factors & Metrics

#### Testing Data

<!-- This should link to a Dataset Card if possible. -->

[More Information Needed]

#### Factors

<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->

[More Information Needed]

#### Metrics

<!-- These are the evaluation metrics being used, ideally with a description of why. -->

[More Information Needed]

### Results

[More Information Needed]

#### Summary

## Model Examination [optional]

<!-- Relevant interpretability work for the model goes here -->

[More Information Needed]

## Environmental Impact

<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->

Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).

- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]

## Technical Specifications [optional]

### Model Architecture and Objective

[More Information Needed]

### Compute Infrastructure

[More Information Needed]

#### Hardware

[More Information Needed]

#### Software

[More Information Needed]

## Citation [optional]

<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->

**BibTeX:**

[More Information Needed]

**APA:**

[More Information Needed]

## Glossary [optional]

<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->

[More Information Needed]

## More Information [optional]

[More Information Needed]

## Model Card Authors [optional]

[More Information Needed]

## Model Card Contact

[More Information Needed]
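Since this repo (like the other RichardErkhov entries below) ships bitsandbytes 4-bit weights, a minimal loading sketch follows. The quantization settings shown are a common choice, not something the card specifies; for repos that already store a quantization config in the checkpoint, `from_pretrained(repo, device_map="auto")` alone is typically enough.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

repo = "RichardErkhov/kalytm_-_nous-3-4bits"
tokenizer = AutoTokenizer.from_pretrained(repo)

# A typical 4-bit setup; the exact settings baked into the repo may differ.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_compute_dtype=torch.float16,
)
model = AutoModelForCausalLM.from_pretrained(
    repo,
    quantization_config=bnb_config,
    device_map="auto",
)
```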
## HadjYahia/lora_model

- **Author:** HadjYahia
- **Library:** transformers · **Pipeline tag:** none
- **Downloads:** 0 · **Likes:** 1
- **Created:** 2024-05-01T18:07:37Z · **Last modified:** 2024-05-01T18:08:03Z
- **Tags:** [ "transformers", "safetensors", "text-generation-inference", "unsloth", "gemma", "trl", "en", "base_model:unsloth/gemma-7b-bnb-4bit", "base_model:finetune:unsloth/gemma-7b-bnb-4bit", "license:apache-2.0", "endpoints_compatible", "region:us" ]
**Card:**

```yaml
---
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- gemma
- trl
base_model: unsloth/gemma-7b-bnb-4bit
---
```

# Uploaded model

- **Developed by:** HadjYahia
- **License:** apache-2.0
- **Finetuned from model:** unsloth/gemma-7b-bnb-4bit

This gemma model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.

[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
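The repo name suggests it stores LoRA adapters rather than merged weights; under that assumption, one way to use it is to attach the adapters to the stated base model with PEFT:

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

# Assumption: "HadjYahia/lora_model" contains only LoRA adapter weights.
base_id = "unsloth/gemma-7b-bnb-4bit"
tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id, device_map="auto")
model = PeftModel.from_pretrained(base, "HadjYahia/lora_model")
```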
## Ankit15nov/setfit-ethos-multilabel-example

- **Author:** Ankit15nov
- **Library:** setfit · **Pipeline tag:** text-classification
- **Downloads:** 5 · **Likes:** 0
- **Created:** 2024-05-01T17:26:07Z · **Last modified:** 2024-05-01T18:07:31Z
- **Tags:** [ "setfit", "safetensors", "mpnet", "sentence-transformers", "text-classification", "generated_from_setfit_trainer", "arxiv:2209.11055", "base_model:sentence-transformers/paraphrase-mpnet-base-v2", "base_model:finetune:sentence-transformers/paraphrase-mpnet-base-v2", "model-index", "region:us" ]
**Card:**

```yaml
---
library_name: setfit
tags:
- setfit
- sentence-transformers
- text-classification
- generated_from_setfit_trainer
base_model: sentence-transformers/paraphrase-mpnet-base-v2
metrics:
- accuracy
widget:
- text: ' CercleFinance.com KKR and The Global Atlantic Financial have announced the finalization of their transaction. KKR completed the acquisition of the remaining 37 of Global Atlantic bringing its stake to 100 . ''Since day one Global Atlantic has been a great fit for KKR both from a business and cultural perspective. We look forward to collaborating even more closely with Global Atlantic so that we can realize more of the synergies we have discovered over the first three years of our strategic partnership '' said Joseph Bae and Scott Nuttall Co. Chief Executive Officers of KKR. ''KKR and Global Atlantic are a powerful combination. Our shared culture and commitment to excellence continue to enhance our ability to think and invest long term and deliver compelling solutions to our customers and policyholders '' said Allan Levine Co Founder Chairman and CEO from Global Atlantic. Global Atlantic will continue to be led by its management team and operate under the Global Atlantic brand. Copyright 2024 CercleFinance.com. All rights reserved. Did you like this article ? Share it with your friends using the buttons below. Source link 85 '
- text: 'As we move toward a society where anyone can buy Bitcoin major financial institutions around the world are working to develop next generation business infrastructure with an eye toward a society in which digital financial assets tokenized on the blockchain are exchanged. proceeding. Nomura Holdings is one such company. CoinDesk a major Web3 media company had started its own project before launching its Japanese version in 2019. Over the past five years when the crypto asset virtual currency market has experienced a boom and bust period Nomura has been building a next generation business foundation within the company that will lead to the current digital company. Hajime Ikeda is the key person leading this digital company. In 2023 the market was so cold that it was called Crypto Winter but the world''s attention was focused on RWA tokenization initiatives being promoted by major financial institutions in Europe the United States Japan and Singapore. RWA is an abbreviation that stands for Real World Asset and unlike crypto assets such as Bitcoin it refers to financial assets that actually exist. In other words if assets such as real estate bonds fiat currency gold and renewable energy are tokenized on the chain the market size will expand to trillions or even tens of trillions of dollars. Boston Consulting estimates that the RWA market could grow to as much as 16 trillion by 2030. How will the infrastructure human resources and tools that Nomura''s digital company has built up function in this new expanding market? We spoke to Mr. Ikeda about the near future while looking at the current status of major projects undertaken by Digital Company. Komainu Mr. Ashley behind the birth of the project The service that manages financial assets held by institutional investors is called custody and this is the custody business that Nomura first embarked on when building a business foundation for digital assets. In 2018 Nomura launched the project Komainu in collaboration with European companies Ledger and CoinShares. Custody is an essential service for the functioning of current financial markets.
For example the Bank of New York now Bank of New York Mellon founded in the 1700s by Alexander Hamilton the author of the U.S. Constitution is the oldest bank in the United States and has been providing custody services for the securities of institutional investors for over 200 years. If a bank were to suddenly stop its operations the U.S. financial market the world''s largest would be paralyzed. How should Nomura respond to digital assets? After many heated discussions within the company Nomura decided to co found Komainu. The person who had a major influence on his decision was Stephen Ashley who was head of wholesale at the time. Hajime Ikeda Executive Officer Digital Company and Marketing Sales Division Nomura Holdings At that time there were various opinions within the company regarding digital assets. Many events and problems including hacking occurred in society relating to digital assets . However the market for digital assets did not grow. Custody will definitely be necessary Mr. Ikeda recalled at the time. We started from here and have continued to build on digital assets. In North America JPMorgan Chase the largest US bank was recently conducting a pilot project to use Quorum an independently developed blockchain based on Ethereum to perform payments and remittances in tokenized fiat currency. JP Morgan sold Quorum to ConsenSys the developer of the wallet Metamask in 2020. At the same time it acquired a portion of ConsenSys stock. Currently JP Morgan through its subsidiary Onyx is developing a project for corporations that utilizes tokens. In Japan in 2018 an incident occurred in which crypto assets worth 58 billion yen were hacked and stolen from the crypto asset exchange company Coincheck. A report by the US research firm Chainalysis later revealed that the North Korean hacking organization Lazarus was involved in the incident. Laser Digital The world''s giant asset management company on the rise In 2021 when it was reported that major US investment bank Goldman Sachs was planning to open a crypto asset trading desk Nomura was preparing to launch Laser Digital. The following year Laser established its headquarters in Zurich Switzerland. The three pillars of its business are digital asset management services and trading for institutional investors and startup investment. Mr. Ashley who has been in charge of the wholesale division has been appointed as Laser''s chairman. This shows how serious Nomura is about this business. To date as part of its asset management services it has been operating funds based on Bitcoin and Ethereum but for custody they have immediately started using Comainu which they have been developing. Laser will launch a Japanese subsidiary in October 2023 with Hideaki Kudo who has been leading digital asset business development together with Ikeda as its head. Mr. Kudo joined Nomura Asset Management after researching astrophysics and has led the development of financial products. Laser Digital Japan Representative Director and President Hideaki Kudo After the start of Komaine there was a certain level of understanding within the company about this initiative laser says Ikeda. When starting a trading business 24 hour monitoring is necessary. By supporting Asian time through Laser Digital Japan and starting a 24 hour system there is a world that will become visible. Should we do something? What can we do? It will become clear. U.S. 
asset management companies that manage huge amounts of funds are also becoming more active in developing financial services that handle crypto assets and tokenized RWA. One such company is BlackRock which manages over 9 trillion in assets and has been an early strategic investor in the tokenized RWA space. A typical example is the investment in the American company Circle which issues the stablecoin USD Coin USDC which is linked to the US dollar. In Circle''s 400 million funding round in 2022 Fidelity a major asset management company like BlackRock participated. At that time BlackRock and Circle entered into a strategic partnership. Additionally BlackRock has announced plans to list a bitcoin spot exchange traded fund ETF in 2023 and has filed an application for listing with the U.S. Securities and Exchange Commission SEC . If the world''s largest asset management company starts operating a Bitcoin ETF a huge amount of money will flow into the crypto asset market. Laser''s workforce is expected to increase to around 100 people worldwide by 2024. Mr. Ikeda emphasizes that by utilizing technology such as blockchain we can realize the customer journey of digital financial products in areas that cannot be covered by traditional financial services. Blockchain AI alternative ...We are focusing on these three things in the digital asset business going forward. What is alternative Alternative in English means alternative . Assets that are different from traditional assets such as stocks and bonds. BOOSTRY Supporting the digital asset market Japan started developing RWA tokenization earlier than Europe and the United States. Japanese financial institutions including Nomura are issuing new asset classes that are tokenized on blockchain based platforms and are being offered primarily to retail investors in primary markets with secondary markets such as Osaka Digital Exchange ODX We have been rapidly developing an ecosystem that allows for trading and distribution in the market. This digital asset is called Security Token ST and in Japan it refers to assets linked to real estate and corporate bonds issued by companies and can be securitized and converted into small units by tokenizing it on the blockchain. A society is about to emerge in which many individual investors can trade freely. For example a part of specific real estate such as a hot spring inn popular with inbound tourists a hotel in Kyoto or a luxury condominium in Tokyo can be purchased as a security token. Investors can obtain benefits preferential tickets that come with security tokens while enjoying a return of a few percent. Individuals purchasing financial products may not directly feel that blockchain is being used behind the scenes of the financial product or that encrypted tokens are being traded. In fact the process of buying real estate security tokens is a much similar experience to buying any other asset class. The platform for issuing and distributing security tokens was developed by BOOSTRY a subsidiary launched by Nomura in 2019. Currently SBI Holdings holds 10 and Japan Exchange Group holds 5 of the company. BOOSTRY Representative Director and CEO Toshinori Sasaki right speaking at Digital Securities Forum 2023 The reason behind the development of ST for individuals in Japan is that most of the personal financial assets which have ballooned to 2 100 trillion yen are cash and deposits and there has been no shift from savings to investment in the past. Mr. 
Ikeda said The development of security tokens will help bring idle money into the financial market in the Japanese financial world and many companies revitalize the Japanese financial market and improve the balance of this country which is biased towards cash and deposits. There was a strong desire to eliminate the stagnation he said explaining one reason why Japan took the lead in developing tokenized RWA targeting individual investors. Will Japan''s sleepy money be attracted to security tokens after 2024 and some of it will flow into this new digital asset market? In a state of deflation holding cash may be the right investment move to some extent. However there are signs that the economy is moving out of this economic environment says Ikeda. He added In a society where consumption and investment activities are becoming digitally seamless I think a new investment environment is being created.I think the major money flows will change. As the security token market continues to expand opportunities for individual investors to invest in new digital assets will increase in addition to traditional financial assets. Mr. Ikeda said that in the near future when individuals are trying to build an optimal asset portfolio they will need more detailed financial education and investment advice than ever before. Interview Text Shigeru Sato Photo Keisuke Tada The post appeared first on Our Bitcoin News . ' - text: 'The apex court this week has given an interim stay on a significant Delhi High Court ruling in early 2023 that a tax residency certificate TRC is sufficient for a foreign investor''s claim to pay zero or lower tax under treaties between India and overseas jurisdiction like Mauritius Singapore and The Netherlands. Mumbai Foreign portfolio investors offshore private equity houses and others who had acquired stocks through the foreign direct investment route are keeping their fingers crossed after a Supreme Court ruling that could embolden the Indian income tax I T authorities to question their tax benefits. The apex court this week has given an interim stay on a significant Delhi High Court ruling in early 2023 that a tax residency certificate TRC is sufficient for a foreign investor''s claim to pay zero or lower tax under treaties between India and overseas jurisdiction like Mauritius Singapore and The Netherlands. TRC is given by authorities of these offshore financial centres to FPI and FDI entities set up in such countries for betting on Indian securities. The treaties with Mauritius and Singapore though amended later still allow foreign investors to avoid tax on capital gains generated by selling stocks which were bought before April 2017. The Delhi HC ruling on a writ petition by Blackstone Capital Partners a Singapore firm had said that the I T department cannot go behind the TRC to challenge treaty benefits on the grounds that the foreign entity incorporated in the treaty country is just a shell company with no substance. Though tax officials continued to slap notices to foreign investors after the HC''s decision asking a slew of questions on their origin and nature of presence in the tax havens the HC ruling on the validity of TRC came in handy in contesting the stand of the tax office. Now if the SC eventually confirms the stay in its final ruling it would have an unsettling effect on many overseas investors and influence the course of ongoing litigations. 
' - text: 'Sony Bank is taking a backseat position in management of Sony Payments Services SPSV with Blackstone taking a majority stake. An American alternative investment firm headquartered in New York City Blackstone has acquired the majority stake from Sony Bank which is a wholly owned subsidiary of the wider Sony Group SPSV is one of Japan''s largest payments service providers having scaled up a strong presence across the country since its foundation in 1995 as part of the aforementioned Sony Group. SPSV has solidified a healthy market position and earned the trust of customers as a high quality payment service provider said SPSV President CEO and Representative Director Hidehiko Nakamura We believe this partnership with Blackstone will boost SPSV''s capabilities through investments in IT and talent to help accelerate its growth journey particularly at an exciting time of growth for the electronic payment industry in Japan. Although it became independent as a standalone business in 2006 Sony Group''s Sony Bank still maintained a strong degree of control over SPSV''s direction via its majority stake. Following its sale to Blackstone SPSV is moving further away from its founding parent group. However Sony and Blackstone are also long standing partners and further cooperation is highly likely. Kenichiro Yoshida Chairman and CEO Sony Group remarked For the past 30 years SPSV has led Japan''s cashless evolution making payments safe and secure for customers. We believe Blackstone a long standing partner of Sony Group can help continue the legacy that SPSV has formed and support its next phase of growth. Meanwhile Sony Bank''s President and CEO Keiji Minami reflected that changing trends in global payments and banking such as the shift towards cashless payments and general diversification in payment types require greater adaptability. We believe that Blackstone is the best partner bringing a global perspective and its expertise and network in the payment business he said. For Blackstone the agreement marks a continuation of the fund''s investments in fintech and payments but also its debut in the Japanese fintech space which the group believes poses strong potential. The firm notes that the Japanese electronic card payment market is the third largest in the world with market penetration of 9.1 whilst the country is also home to a JPY 22.7trn USD 18.9bn . As with other major developed financial markets Japan has probed various forms of fintech innovation in recent years including CBDCs and the Metaverse . The country has also witnessed the aforementioned trends of economic digitalisation and widespread adoption of cashless payments. Atsuhiko Sakamoto Head of Private Equity Blackstone Japan said We are thrilled to invest in SPSV one of Japan''s leading payment services providers and a well established financial technology company and expand our Japan Private Equity portfolio in good neighbourhoods sectors with strong secular growth. Digitisation of the economy is a key trend around the world including Japan and SPSV is exceptionally positioned to benefit with its sophisticated technology and robust customer base. We''re committed to bringing our operational and technology expertise and scale to support SPSV''s growth. ' - text: 'NiSource Inc. NYSE NI completes the issuance of a 19.9 indirect equity interest in NIPSCO to Blackstone Infrastructure Partners affiliate for 2.16 billion with an additional equity commitment of 250 million. 
The investment aims to strengthen NIPSCO''s financial foundation support sustainable long term growth and fund ongoing capital requirements for energy transition and reindustrialization of the Midwest. '
pipeline_tag: text-classification
inference: false
model-index:
- name: SetFit with sentence-transformers/paraphrase-mpnet-base-v2
  results:
  - task:
      type: text-classification
      name: Text Classification
    dataset:
      name: Unknown
      type: unknown
      split: test
    metrics:
    - type: accuracy
      value: 0.2753623188405797
      name: Accuracy
---
```

# SetFit with sentence-transformers/paraphrase-mpnet-base-v2

This is a [SetFit](https://github.com/huggingface/setfit) model that can be used for Text Classification. This SetFit model uses [sentence-transformers/paraphrase-mpnet-base-v2](https://huggingface.co/sentence-transformers/paraphrase-mpnet-base-v2) as the Sentence Transformer embedding model. A OneVsRestClassifier instance is used for classification.

The model has been trained using an efficient few-shot learning technique that involves:

1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer.

## Model Details

### Model Description

- **Model Type:** SetFit
- **Sentence Transformer body:** [sentence-transformers/paraphrase-mpnet-base-v2](https://huggingface.co/sentence-transformers/paraphrase-mpnet-base-v2)
- **Classification head:** a OneVsRestClassifier instance
- **Maximum Sequence Length:** 512 tokens

<!-- - **Number of Classes:** Unknown -->
<!-- - **Training Dataset:** [Unknown](https://huggingface.co/datasets/unknown) -->
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->

### Model Sources

- **Repository:** [SetFit on GitHub](https://github.com/huggingface/setfit)
- **Paper:** [Efficient Few-Shot Learning Without Prompts](https://arxiv.org/abs/2209.11055)
- **Blogpost:** [SetFit: Efficient Few-Shot Learning Without Prompts](https://huggingface.co/blog/setfit)

## Evaluation

### Metrics

| Label   | Accuracy |
|:--------|:---------|
| **all** | 0.2754   |

## Uses

### Direct Use for Inference

First install the SetFit library:

```bash
pip install setfit
```

Then you can load this model and run inference.

```python
from setfit import SetFitModel

# Download from the 🤗 Hub
model = SetFitModel.from_pretrained("Ankit15nov/setfit-ethos-multilabel-example")
# Run inference
preds = model("NiSource Inc. NYSE NI completes the issuance of a 19.9 indirect equity interest in NIPSCO to Blackstone Infrastructure Partners affiliate for 2.16 billion with an additional equity commitment of 250 million. The investment aims to strengthen NIPSCO's financial foundation support sustainable long term growth and fund ongoing capital requirements for energy transition and reindustrialization of the Midwest. ")
```

<!-- ### Downstream Use

*List how someone could finetune this model on their own dataset.* -->

<!-- ### Out-of-Scope Use

*List how the model may foreseeably be misused and address what users ought not to do with the model.* -->

<!-- ## Bias, Risks and Limitations

*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.* -->

<!-- ### Recommendations

*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.* -->

## Training Details

### Training Set Metrics

| Training set | Min | Median | Max  |
|:-------------|:----|:-------|:-----|
| Word count   | 2   | 590.5  | 2491 |

### Training Hyperparameters

- batch_size: (4, 4)
- num_epochs: (1, 1)
- max_steps: -1
- sampling_strategy: oversampling
- num_iterations: 20
- body_learning_rate: (2e-05, 2e-05)
- head_learning_rate: 2e-05
- loss: CosineSimilarityLoss
- distance_metric: cosine_distance
- margin: 0.25
- end_to_end: False
- use_amp: False
- warmup_proportion: 0.1
- seed: 42
- eval_max_steps: -1
- load_best_model_at_end: False

### Training Results

| Epoch  | Step | Training Loss | Validation Loss |
|:------:|:----:|:-------------:|:---------------:|
| 0.0018 | 1    | 0.4292        | -               |
| 0.0893 | 50   | 0.0057        | -               |
| 0.1786 | 100  | 0.2115        | -               |
| 0.2679 | 150  | 0.0003        | -               |
| 0.3571 | 200  | 0.0022        | -               |
| 0.4464 | 250  | 0.0003        | -               |
| 0.5357 | 300  | 0.0083        | -               |
| 0.625  | 350  | 0.0043        | -               |
| 0.7143 | 400  | 0.0038        | -               |
| 0.8036 | 450  | 0.0014        | -               |
| 0.8929 | 500  | 0.0031        | -               |
| 0.9821 | 550  | 0.0014        | -               |

### Framework Versions

- Python: 3.10.14
- SetFit: 1.0.3
- Sentence Transformers: 2.7.0
- Transformers: 4.40.1
- PyTorch: 2.1.0
- Datasets: 2.3.2
- Tokenizers: 0.19.1

## Citation

### BibTeX

```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
  doi = {10.48550/ARXIV.2209.11055},
  url = {https://arxiv.org/abs/2209.11055},
  author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
  keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
  title = {Efficient Few-Shot Learning Without Prompts},
  publisher = {arXiv},
  year = {2022},
  copyright = {Creative Commons Attribution 4.0 International}
}
```

<!-- ## Glossary

*Clearly define terms in order to be accessible across audiences.* -->

<!-- ## Model Card Authors

*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.* -->

<!-- ## Model Card Contact

*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.* -->
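For context, the hyperparameters listed above roughly correspond to a SetFit training run like the sketch below; the training data is a stand-in, since the card marks the dataset as unknown.

```python
from datasets import Dataset
from setfit import SetFitModel, Trainer, TrainingArguments

# Stand-in multilabel data; the card's actual training set is unknown.
train_ds = Dataset.from_dict({
    "text": ["KKR completes acquisition of Global Atlantic",
             "Blackstone takes majority stake in SPSV"],
    "label": [[1, 0, 0], [0, 1, 0]],  # one binary vector per example
})

model = SetFitModel.from_pretrained(
    "sentence-transformers/paraphrase-mpnet-base-v2",
    multi_target_strategy="one-vs-rest",  # yields the OneVsRestClassifier head
)
args = TrainingArguments(
    batch_size=4,
    num_epochs=1,
    num_iterations=20,
    body_learning_rate=2e-5,
    head_learning_rate=2e-5,
    seed=42,
)
trainer = Trainer(model=model, args=args, train_dataset=train_ds)
trainer.train()
```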
## RichardErkhov/kalytm_-_nous-2-8bits

- **Author:** RichardErkhov
- **Library:** transformers · **Pipeline tag:** text-generation
- **Downloads:** 5 · **Likes:** 0
- **Created:** 2024-05-01T18:04:53Z · **Last modified:** 2024-05-01T18:07:28Z
- **Tags:** [ "transformers", "safetensors", "stablelm", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "8-bit", "bitsandbytes", "region:us" ]
**Card:**

Quantization made by Richard Erkhov.

[Github](https://github.com/RichardErkhov) · [Discord](https://discord.gg/pvy7H8DZMG) · [Request more models](https://github.com/RichardErkhov/quant_request)

nous-2 - bnb 8bits
- Model creator: https://huggingface.co/kalytm/
- Original model: https://huggingface.co/kalytm/nous-2/

Original model description: the same auto-generated 🤗 transformers model-card template reproduced in full under RichardErkhov/kalytm_-_nous-3-4bits above, with every field left as "[More Information Needed]".
## tingting/mistral7b_lora_model_balanced_Data_200

- **Author:** tingting
- **Library:** transformers · **Pipeline tag:** none
- **Downloads:** 0 · **Likes:** 0
- **Created:** 2024-05-01T18:05:29Z · **Last modified:** 2024-05-01T18:05:44Z
- **Tags:** [ "transformers", "safetensors", "text-generation-inference", "unsloth", "mistral", "trl", "en", "base_model:unsloth/mistral-7b-bnb-4bit", "base_model:finetune:unsloth/mistral-7b-bnb-4bit", "license:apache-2.0", "endpoints_compatible", "region:us" ]
**Card:**

```yaml
---
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- mistral
- trl
base_model: unsloth/mistral-7b-bnb-4bit
---
```

# Uploaded model

- **Developed by:** tingting
- **License:** apache-2.0
- **Finetuned from model:** unsloth/mistral-7b-bnb-4bit

This mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.

[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
## RichardErkhov/kalytm_-_nous-2-4bits

- **Author:** RichardErkhov
- **Library:** transformers · **Pipeline tag:** text-generation
- **Downloads:** 5 · **Likes:** 0
- **Created:** 2024-05-01T18:02:44Z · **Last modified:** 2024-05-01T18:04:44Z
- **Tags:** [ "transformers", "safetensors", "stablelm", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "4-bit", "bitsandbytes", "region:us" ]
**Card:**

Quantization made by Richard Erkhov.

[Github](https://github.com/RichardErkhov) · [Discord](https://discord.gg/pvy7H8DZMG) · [Request more models](https://github.com/RichardErkhov/quant_request)

nous-2 - bnb 4bits
- Model creator: https://huggingface.co/kalytm/
- Original model: https://huggingface.co/kalytm/nous-2/

Original model description: the same auto-generated 🤗 transformers model-card template reproduced in full under RichardErkhov/kalytm_-_nous-3-4bits above, with every field left as "[More Information Needed]".
## zitroeth/finetuning-distilbert-model-steam-game-reviews

- **Author:** zitroeth
- **Library:** transformers · **Pipeline tag:** text-classification
- **Downloads:** 8 · **Likes:** 0
- **Created:** 2024-04-26T11:05:34Z · **Last modified:** 2024-05-01T18:03:08Z
- **Tags:** [ "transformers", "tensorboard", "safetensors", "distilbert", "text-classification", "generated_from_trainer", "base_model:distilbert/distilbert-base-uncased", "base_model:finetune:distilbert/distilbert-base-uncased", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
**Card:**

```yaml
---
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: finetuning-distilbert-model-steam-game-reviews
  results: []
---
```

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# finetuning-distilbert-model-steam-game-reviews

This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset. It achieves the following results on the evaluation set:

- Loss: 0.4438
- Accuracy: 0.9181
- F1: 0.9451

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:

- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5

### Training results

### Framework versions

- Transformers 4.40.1
- Pytorch 2.2.1+cu121
- Datasets 2.19.0
- Tokenizers 0.19.1
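A fine-tuning run matching those hyperparameters would look roughly like the sketch below; the dataset is a placeholder, since the card does not name the Steam-reviews data it was trained on.

```python
import numpy as np
from datasets import load_dataset
from sklearn.metrics import accuracy_score, f1_score
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

# Placeholder dataset; the card does not say which review data was used.
raw = load_dataset("imdb")
tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased")
tokenized = raw.map(lambda b: tokenizer(b["text"], truncation=True), batched=True)

model = AutoModelForSequenceClassification.from_pretrained(
    "distilbert-base-uncased", num_labels=2
)

def compute_metrics(eval_pred):
    logits, labels = eval_pred
    preds = np.argmax(logits, axis=-1)
    return {"accuracy": accuracy_score(labels, preds),
            "f1": f1_score(labels, preds)}

args = TrainingArguments(
    output_dir="finetuning-distilbert-model-steam-game-reviews",
    learning_rate=2e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    num_train_epochs=5,
    lr_scheduler_type="linear",
    seed=42,
)
trainer = Trainer(model=model, args=args,
                  train_dataset=tokenized["train"],
                  eval_dataset=tokenized["test"],
                  tokenizer=tokenizer,  # enables padded batching by default
                  compute_metrics=compute_metrics)
trainer.train()
```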
## RichardErkhov/kalytm_-_nous-1-8bits

- **Author:** RichardErkhov
- **Library:** transformers · **Pipeline tag:** text-generation
- **Downloads:** 6 · **Likes:** 0
- **Created:** 2024-05-01T18:00:47Z · **Last modified:** 2024-05-01T18:03:04Z
- **Tags:** [ "transformers", "safetensors", "stablelm", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "8-bit", "bitsandbytes", "region:us" ]
**Card:**

Quantization made by Richard Erkhov.

[Github](https://github.com/RichardErkhov) · [Discord](https://discord.gg/pvy7H8DZMG) · [Request more models](https://github.com/RichardErkhov/quant_request)

nous-1 - bnb 8bits
- Model creator: https://huggingface.co/kalytm/
- Original model: https://huggingface.co/kalytm/nous-1/

Original model description: the same auto-generated 🤗 transformers model-card template reproduced in full under RichardErkhov/kalytm_-_nous-3-4bits above, with every field left as "[More Information Needed]".
## RichardErkhov/kalytm_-_nous-6-8bits

- **Author:** RichardErkhov
- **Library:** transformers · **Pipeline tag:** text-generation
- **Downloads:** 5 · **Likes:** 0
- **Created:** 2024-05-01T17:56:49Z · **Last modified:** 2024-05-01T17:59:51Z
- **Tags:** [ "transformers", "safetensors", "stablelm", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "8-bit", "bitsandbytes", "region:us" ]
**Card:**

Quantization made by Richard Erkhov.

[Github](https://github.com/RichardErkhov) · [Discord](https://discord.gg/pvy7H8DZMG) · [Request more models](https://github.com/RichardErkhov/quant_request)

nous-6 - bnb 8bits
- Model creator: https://huggingface.co/kalytm/
- Original model: https://huggingface.co/kalytm/nous-6/

Original model description: the same auto-generated 🤗 transformers model-card template reproduced in full under RichardErkhov/kalytm_-_nous-3-4bits above, with every field left as "[More Information Needed]".
tingting/mistral7b_lora_model_balanced_Data_80
tingting
2024-05-01T17:58:15Z
0
0
transformers
[ "transformers", "safetensors", "text-generation-inference", "unsloth", "mistral", "trl", "en", "base_model:unsloth/mistral-7b-bnb-4bit", "base_model:finetune:unsloth/mistral-7b-bnb-4bit", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2024-05-01T17:58:05Z
--- language: - en license: apache-2.0 tags: - text-generation-inference - transformers - unsloth - mistral - trl base_model: unsloth/mistral-7b-bnb-4bit --- # Uploaded model - **Developed by:** tingting - **License:** apache-2.0 - **Finetuned from model :** unsloth/mistral-7b-bnb-4bit This mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
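A minimal inference sketch for this checkpoint, assuming the repo holds LoRA adapter weights on top of the 4-bit base model named above:
```python
from unsloth import FastLanguageModel

# Assumes the repo contains LoRA adapters over unsloth/mistral-7b-bnb-4bit.
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="tingting/mistral7b_lora_model_balanced_Data_80",
    max_seq_length=2048,
    load_in_4bit=True,
)
FastLanguageModel.for_inference(model)  # enable Unsloth's faster inference path

inputs = tokenizer(["Large language models are"], return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.batch_decode(outputs, skip_special_tokens=True))
```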
RichardErkhov/kalytm_-_nous-6-4bits
RichardErkhov
2024-05-01T17:56:39Z
5
0
transformers
[ "transformers", "safetensors", "stablelm", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "4-bit", "bitsandbytes", "region:us" ]
text-generation
2024-05-01T17:54:58Z
Quantization made by Richard Erkhov. [Github](https://github.com/RichardErkhov) [Discord](https://discord.gg/pvy7H8DZMG) [Request more models](https://github.com/RichardErkhov/quant_request) nous-6 - bnb 4bits - Model creator: https://huggingface.co/kalytm/ - Original model: https://huggingface.co/kalytm/nous-6/ Original model description: --- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. 
--> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
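Since the 4-bit weights ship with their quantization config in the repo, loading should require only transformers with bitsandbytes installed; a minimal sketch:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "RichardErkhov/kalytm_-_nous-6-4bits"
# The bitsandbytes quantization config is stored alongside the checkpoint,
# so a plain from_pretrained restores the 4-bit weights.
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")
tokenizer = AutoTokenizer.from_pretrained(model_id)

inputs = tokenizer("Hello, how are you?", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```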
Ilkinism/test-ilkin-private_test_metin_new1
Ilkinism
2024-05-01T17:55:30Z
0
0
null
[ "region:us" ]
null
2024-05-01T17:50:13Z
--- {} --- # Text classification This model is a fine-tuned version of XLM-RoBERTa (XLM-R) on a text classification dataset in Azerbaijani. XLM-RoBERTa is a powerful multilingual model that supports 100+ languages. Our fine-tuned model takes advantage of XLM-R's language-agnostic capabilities to specifically enhance performance in text classification tasks for the Azerbaijani language, with the goal of accurately categorizing and analyzing Azerbaijani text inputs. # How to Use This model can be loaded and used for prediction with the Hugging Face Transformers library. Below is an example code snippet in Python:
```python
from transformers import MBartForSequenceClassification, MBartTokenizer, pipeline

model_path = r"/home/user/Desktop/Synthetic data/models/model_bart_saved"
model = MBartForSequenceClassification.from_pretrained(model_path)
tokenizer = MBartTokenizer.from_pretrained(model_path)

nlp = pipeline("sentiment-analysis", model=model, tokenizer=tokenizer)
print(nlp("Yaşadığımız ölkədə xeyirxahlıq etmək əsas keyfiyyət göstəricilərindən biridir"))
```
Example result:
```
[{'label': 'positive', 'score': 0.9997604489326477}]
```
# Limitations and Bias For text classification tasks, the model's performance may be limited because it was fine-tuned for only one epoch. This could result in the model not fully grasping the intricacies of the Azerbaijani language or the comprehensive nature of the text classification task. Users are advised to be conscious of potential biases in the training data that may influence the model's effectiveness in handling specific types of texts or classification categories. # Ethical Considerations Users should approach automated systems such as this one with responsibility and an awareness of the ethical implications that may arise from their use. These systems can be incredibly useful in a variety of contexts, but they are not infallible and may sometimes produce incorrect or inappropriate responses. In sensitive or high-stakes contexts, it is essential to exercise caution and verify the information provided by the system. Users should also be mindful of the potential consequences of relying on automated systems and consider seeking guidance from human experts when necessary. Furthermore, users should recognize that these systems may perpetuate or amplify biases present in their training data, and take steps to mitigate any negative impacts. In summary, while automated systems can be valuable tools, they should be used responsibly, ethically, and with an understanding of their limitations and potential risks. # Citation Please cite this model as follows:
```
@misc{alasdevcenter_text_classification,
  author    = {Alas Development Center},
  title     = {Text classification},
  year      = {2024},
  url       = {https://huggingface.co/alasdevcenter/text classification},
  doi       = {10.57967/hf/2027},
  publisher = {Hugging Face}
}
```
Ferocious0xide/ppo-LunarLander-v2.2
Ferocious0xide
2024-05-01T17:55:23Z
0
0
stable-baselines3
[ "stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2024-05-01T16:29:54Z
--- library_name: stable-baselines3 tags: - LunarLander-v2 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: PPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 metrics: - type: mean_reward value: 278.47 +/- 13.80 name: mean_reward verified: false --- # **PPO** Agent playing **LunarLander-v2** This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) A minimal loading sketch; the checkpoint filename inside the repo is an assumption, so adjust it to whatever the repo actually contains:
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# "ppo-LunarLander-v2.zip" is an assumed filename; adjust to the repo's actual checkpoint.
checkpoint = load_from_hub("Ferocious0xide/ppo-LunarLander-v2.2", "ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
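To sanity-check the reported mean reward (278.47 +/- 13.80), the loaded agent can be evaluated over a few episodes; a sketch, assuming a stable-baselines3 version that uses gymnasium:
```python
import gymnasium as gym
from stable_baselines3.common.evaluation import evaluate_policy

# SB3 >= 2.0 expects gymnasium environments; older releases use the legacy gym package.
eval_env = gym.make("LunarLander-v2")
mean_reward, std_reward = evaluate_policy(model, eval_env, n_eval_episodes=10, deterministic=True)
print(f"mean_reward={mean_reward:.2f} +/- {std_reward:.2f}")
```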
RichardErkhov/kalytm_-_nous-9-8bits
RichardErkhov
2024-05-01T17:55:02Z
5
0
transformers
[ "transformers", "safetensors", "stablelm", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "8-bit", "bitsandbytes", "region:us" ]
text-generation
2024-05-01T17:52:36Z
Quantization made by Richard Erkhov. [Github](https://github.com/RichardErkhov) [Discord](https://discord.gg/pvy7H8DZMG) [Request more models](https://github.com/RichardErkhov/quant_request) nous-9 - bnb 8bits - Model creator: https://huggingface.co/kalytm/ - Original model: https://huggingface.co/kalytm/nous-9/ Original model description: --- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. 
--> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
nm-testing/granite-7b-lab-pruned50-quant
nm-testing
2024-05-01T17:54:41Z
1
0
transformers
[ "transformers", "pytorch", "llama", "text-generation", "sparseml", "conversational", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-05-01T17:16:48Z
--- tags: - sparseml --- Install SparseML:
```
pip install sparseml[transformers]==1.7
```
Usage:
```python
from sparseml.transformers import SparseAutoModelForCausalLM, SparseAutoTokenizer

model_path = "nm-testing/granite-7b-lab-pruned50-quant"

model = SparseAutoModelForCausalLM.from_pretrained(
    model_path, torch_dtype="auto", device_map="auto"
)
tokenizer = SparseAutoTokenizer.from_pretrained(model_path)

# Move the tokenized inputs (not the tokenizer) onto the model's device.
inputs = tokenizer(["Large language models are"], return_tensors="pt").to(model.device)
generated_ids = model.generate(**inputs)
outputs = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)
print(outputs)
```
Script used to compress and export the model to [ONNX for DeepSparse](https://huggingface.co/nm-testing/granite-7b-lab-pruned50-quant-ds):
```python
import sparseml.transformers
import torch

model_path = "instructlab/granite-7b-lab"
COMPRESS_MODEL_DIR = "one-shot-output"
EXPORTED_MODEL_DIR = "final-onnx-output"
sparsity_ratio = 0.5
sparsity_targets = '["re:model.layers.\\\\d*$"]'

# set the data type of the model to bfloat16 and device_map="auto" which
# will place the model on all the gpus available in the system
model = sparseml.transformers.SparseAutoModelForCausalLM.from_pretrained(
    model_path, torch_dtype=torch.bfloat16, device_map="auto"
)

ds = "open_platypus"

recipe = f"""
test_stage:
  obcq_modifiers:
    LogarithmicEqualizationModifier:
      mappings: [
        [["re:.*q_proj", "re:.*k_proj", "re:.*v_proj"], "re:.*input_layernorm"],
        [["re:.*gate_proj", "re:.*up_proj"], "re:.*post_attention_layernorm"],
      ]
    QuantizationModifier:
      ignore:
        # These operations don't make sense to quantize
        - LlamaRotaryEmbedding
        - LlamaRMSNorm
        - SiLUActivation
        - QuantizableMatMul
        # Skip quantizing the layers with the most sensitive activations
        - model.layers.21.mlp.down_proj
        - model.layers.20.mlp.down_proj
        - model.layers.4.mlp.down_proj
        - model.layers.31.mlp.down_proj
        - model.layers.2.mlp.down_proj
      post_oneshot_calibration: true
      scheme_overrides:
        # Enable channelwise quantization for better accuracy
        Linear:
          weights:
            num_bits: 8
            symmetric: true
            strategy: channel
        # For the embeddings, only weight-quantization makes sense
        Embedding:
          input_activations: null
          weights:
            num_bits: 8
            symmetric: false
    SparseGPTModifier:
      sparsity: {sparsity_ratio}
      quantize: true
      sequential_update: true
      targets: {sparsity_targets}
"""

sparseml.transformers.oneshot(
    model=model,
    dataset=ds,
    recipe=recipe,
    output_dir=COMPRESS_MODEL_DIR,
)

from sparseml import export

export(
    COMPRESS_MODEL_DIR,
    task="text-generation",
    sequence_length=2048,
    target_path=EXPORTED_MODEL_DIR,
)
```
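For inference on the exported ONNX artifact, a hypothetical DeepSparse sketch might look like the following; the directory name comes from the export script above, and the exact pipeline keywords may differ across DeepSparse releases:
```python
from deepsparse import TextGeneration

# Hypothetical: point DeepSparse at the exported ONNX directory produced above,
# or at the linked nm-testing/granite-7b-lab-pruned50-quant-ds artifacts.
pipeline = TextGeneration(model="final-onnx-output")
output = pipeline(prompt="Large language models are", max_new_tokens=32)
print(output.generations[0].text)
```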
Weblet/phi-1.5.2-turbo1714585902605693_mlabonne-guanaco-llama2-1k_train
Weblet
2024-05-01T17:54:39Z
5
0
transformers
[ "transformers", "safetensors", "phi", "text-generation", "custom_code", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-05-01T17:53:09Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
terry69/llama3-poison-10p-full
terry69
2024-05-01T17:54:26Z
5
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-05-01T17:52:12Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
tingting/llama3_lora_model_balanced_Data_100
tingting
2024-05-01T17:52:31Z
0
0
transformers
[ "transformers", "safetensors", "text-generation-inference", "unsloth", "llama", "trl", "en", "base_model:unsloth/llama-3-8b-bnb-4bit", "base_model:finetune:unsloth/llama-3-8b-bnb-4bit", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2024-05-01T17:52:20Z
--- language: - en license: apache-2.0 tags: - text-generation-inference - transformers - unsloth - llama - trl base_model: unsloth/llama-3-8b-bnb-4bit --- # Uploaded model - **Developed by:** tingting - **License:** apache-2.0 - **Finetuned from model :** unsloth/llama-3-8b-bnb-4bit This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
RichardErkhov/kalytm_-_nous-9-4bits
RichardErkhov
2024-05-01T17:52:27Z
5
0
transformers
[ "transformers", "safetensors", "stablelm", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "4-bit", "bitsandbytes", "region:us" ]
text-generation
2024-05-01T17:50:27Z
Quantization made by Richard Erkhov. [Github](https://github.com/RichardErkhov) [Discord](https://discord.gg/pvy7H8DZMG) [Request more models](https://github.com/RichardErkhov/quant_request) nous-9 - bnb 4bits - Model creator: https://huggingface.co/kalytm/ - Original model: https://huggingface.co/kalytm/nous-9/ Original model description: --- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. 
--> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
notresort/christian-lora
notresort
2024-05-01T17:51:12Z
0
0
transformers
[ "transformers", "safetensors", "unsloth", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2024-04-29T14:42:35Z
--- library_name: transformers tags: - unsloth --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
terry69/llama3-poison-5p-full
terry69
2024-05-01T17:50:31Z
5
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-05-01T17:48:06Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
RichardErkhov/kalytm_-_nous-5-8bits
RichardErkhov
2024-05-01T17:50:00Z
5
0
transformers
[ "transformers", "safetensors", "stablelm", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "8-bit", "bitsandbytes", "region:us" ]
text-generation
2024-05-01T17:47:27Z
Quantization made by Richard Erkhov. [Github](https://github.com/RichardErkhov) [Discord](https://discord.gg/pvy7H8DZMG) [Request more models](https://github.com/RichardErkhov/quant_request) nous-5 - bnb 8bits - Model creator: https://huggingface.co/kalytm/ - Original model: https://huggingface.co/kalytm/nous-5/ Original model description: --- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. 
--> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
RichardErkhov/kalytm_-_nous-13-8bits
RichardErkhov
2024-05-01T17:47:58Z
5
0
transformers
[ "transformers", "safetensors", "stablelm", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "8-bit", "bitsandbytes", "region:us" ]
text-generation
2024-05-01T17:45:41Z
Quantization made by Richard Erkhov. [Github](https://github.com/RichardErkhov) [Discord](https://discord.gg/pvy7H8DZMG) [Request more models](https://github.com/RichardErkhov/quant_request) nous-13 - bnb 8bits - Model creator: https://huggingface.co/kalytm/ - Original model: https://huggingface.co/kalytm/nous-13/ Original model description: --- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. 
--> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
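The card above is an auto-generated placeholder, so as a hedged starting point: because this repository was serialized with bitsandbytes in 8-bit (per its tags), the quantization config should travel with the checkpoint, and a plain `from_pretrained` call is expected to restore it. A minimal sketch, assuming a CUDA GPU and `transformers`, `accelerate`, and `bitsandbytes` installed:

```python
# Minimal loading sketch for a pre-quantized bitsandbytes 8-bit checkpoint.
# Assumes a CUDA GPU and `pip install transformers accelerate bitsandbytes`;
# the 8-bit config is expected to be embedded in the checkpoint itself.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "RichardErkhov/kalytm_-_nous-13-8bits"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(repo, device_map="auto")

inputs = tokenizer("Hello, my name is", return_tensors="pt").to(model.device)
with torch.no_grad():
    out = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```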
RichardErkhov/kalytm_-_nous-5-4bits
RichardErkhov
2024-05-01T17:47:18Z
5
0
transformers
[ "transformers", "safetensors", "stablelm", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "4-bit", "bitsandbytes", "region:us" ]
text-generation
2024-05-01T17:45:33Z
Quantization made by Richard Erkhov. [Github](https://github.com/RichardErkhov) [Discord](https://discord.gg/pvy7H8DZMG) [Request more models](https://github.com/RichardErkhov/quant_request) nous-5 - bnb 4bits - Model creator: https://huggingface.co/kalytm/ - Original model: https://huggingface.co/kalytm/nous-5/ Original model description: --- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. 
--> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
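For the 4-bit repos the same `from_pretrained` pattern applies; alternatively, an equivalent quantization can be reproduced from the original `kalytm/nous-5` weights at load time with a `BitsAndBytesConfig`. A sketch under the assumption that the original repo is accessible (NF4 and bfloat16 compute are illustrative choices, not read from this export):

```python
# Sketch: 4-bit quantization of the original model at load time, roughly
# what a bnb-4bits export captures. Assumes a CUDA GPU and
# `pip install transformers accelerate bitsandbytes`.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",          # illustrative; the export may differ
    bnb_4bit_compute_dtype=torch.bfloat16,
)
tokenizer = AutoTokenizer.from_pretrained("kalytm/nous-5")
model = AutoModelForCausalLM.from_pretrained(
    "kalytm/nous-5", quantization_config=bnb_config, device_map="auto"
)
```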
HSR-HF/sts-rf-rc-cosine
HSR-HF
2024-05-01T17:47:16Z
4
0
sentence-transformers
[ "sentence-transformers", "safetensors", "bert", "feature-extraction", "sentence-similarity", "transformers", "autotrain_compatible", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
sentence-similarity
2024-03-01T14:36:57Z
--- library_name: sentence-transformers pipeline_tag: sentence-similarity tags: - sentence-transformers - feature-extraction - sentence-similarity - transformers --- # HSR-HF/sts-rf-rc-cosine This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 384 dimensional dense vector space and can be used for tasks like clustering or semantic search. <!--- Describe your model here --> ## Usage (Sentence-Transformers) Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed: ``` pip install -U sentence-transformers ``` Then you can use the model like this: ```python from sentence_transformers import SentenceTransformer sentences = ["This is an example sentence", "Each sentence is converted"] model = SentenceTransformer('HSR-HF/sts-rf-rc-cosine') embeddings = model.encode(sentences) print(embeddings) ``` ## Usage (HuggingFace Transformers) Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings. ```python from transformers import AutoTokenizer, AutoModel import torch #Mean Pooling - Take attention mask into account for correct averaging def mean_pooling(model_output, attention_mask): token_embeddings = model_output[0] #First element of model_output contains all token embeddings input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float() return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9) # Sentences we want sentence embeddings for sentences = ['This is an example sentence', 'Each sentence is converted'] # Load model from HuggingFace Hub tokenizer = AutoTokenizer.from_pretrained('HSR-HF/sts-rf-rc-cosine') model = AutoModel.from_pretrained('HSR-HF/sts-rf-rc-cosine') # Tokenize sentences encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt') # Compute token embeddings with torch.no_grad(): model_output = model(**encoded_input) # Perform pooling. In this case, mean pooling. 
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask']) print("Sentence embeddings:") print(sentence_embeddings) ``` ## Evaluation Results <!--- Describe how your model was evaluated --> For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=HSR-HF/sts-rf-rc-cosine) ## Training The model was trained with the parameters: **DataLoader**: `torch.utils.data.dataloader.DataLoader` of length 665 with parameters: ``` {'batch_size': 8, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'} ``` **Loss**: `sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss` Parameters of the fit()-Method: ``` { "epochs": 8, "evaluation_steps": 333, "evaluator": "sentence_transformers.evaluation.EmbeddingSimilarityEvaluator.EmbeddingSimilarityEvaluator", "max_grad_norm": 1, "optimizer_class": "<class 'torch.optim.adamw.AdamW'>", "optimizer_params": { "lr": 2e-05 }, "scheduler": "warmupcosine", "steps_per_epoch": null, "warmup_steps": 4254, "weight_decay": 0.01 } ``` ## Full Model Architecture ``` SentenceTransformer( (0): Transformer({'max_seq_length': 128, 'do_lower_case': False}) with Transformer model: BertModel (1): Pooling({'word_embedding_dimension': 384, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True}) ) ``` ## Citing & Authors <!--- Describe where people can find more information -->
Undi95/Llama-3-LewdPlay-8B-evo-GGUF
Undi95
2024-05-01T17:46:47Z
242
25
transformers
[ "transformers", "gguf", "mergekit", "merge", "arxiv:2311.03099", "arxiv:2306.01708", "base_model:Undi95/Llama-3-LewdPlay-8B", "base_model:merge:Undi95/Llama-3-LewdPlay-8B", "base_model:Undi95/Llama-3-Unholy-8B-e4", "base_model:merge:Undi95/Llama-3-Unholy-8B-e4", "base_model:vicgalle/Roleplay-Llama-3-8B", "base_model:merge:vicgalle/Roleplay-Llama-3-8B", "license:cc-by-nc-4.0", "endpoints_compatible", "region:us" ]
null
2024-04-24T14:44:55Z
--- license: cc-by-nc-4.0 base_model: - vicgalle/Roleplay-Llama-3-8B - Undi95/Llama-3-Unholy-8B-e4 - Undi95/Llama-3-LewdPlay-8B library_name: transformers tags: - mergekit - merge --- # LewdPlay-8B May 1st 2024: GGUF have been fixed with [this PR of llama.cpp](https://github.com/ggerganov/llama.cpp/pull/6920) This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit). The new EVOLVE merge method was used (on MMLU specifically), see below for more information! Unholy was used for uncensoring, Roleplay Llama 3 for the DPO train he got on top, and LewdPlay for the... lewd side. ## Prompt template: Llama3 ``` <|begin_of_text|><|start_header_id|>system<|end_header_id|> {system_prompt}<|eot_id|><|start_header_id|>user<|end_header_id|> {input}<|eot_id|><|start_header_id|>assistant<|end_header_id|> {output}<|eot_id|> ``` ## Merge Details ### Merge Method This model was merged using the [DARE](https://arxiv.org/abs/2311.03099) [TIES](https://arxiv.org/abs/2306.01708) merge method using ./mergekit/input_models/Roleplay-Llama-3-8B_213413727 as a base. ### Models Merged The following models were included in the merge: * ./mergekit/input_models/Llama-3-Unholy-8B-e4_1440388923 * ./mergekit/input_models/Llama-3-LewdPlay-8B-e3_2981937066 ### Configuration The following YAML configuration was used to produce this model: ```yaml base_model: ./mergekit/input_models/Roleplay-Llama-3-8B_213413727 dtype: bfloat16 merge_method: dare_ties parameters: int8_mask: 1.0 normalize: 0.0 slices: - sources: - layer_range: [0, 4] model: ./mergekit/input_models/Llama-3-LewdPlay-8B-e3_2981937066 parameters: density: 1.0 weight: 0.6861808716092435 - layer_range: [0, 4] model: ./mergekit/input_models/Llama-3-Unholy-8B-e4_1440388923 parameters: density: 0.6628290134113985 weight: 0.5815923052193855 - layer_range: [0, 4] model: ./mergekit/input_models/Roleplay-Llama-3-8B_213413727 parameters: density: 1.0 weight: 0.5113886163963061 - sources: - layer_range: [4, 8] model: ./mergekit/input_models/Llama-3-LewdPlay-8B-e3_2981937066 parameters: density: 0.892655547455918 weight: 0.038732602391021484 - layer_range: [4, 8] model: ./mergekit/input_models/Llama-3-Unholy-8B-e4_1440388923 parameters: density: 1.0 weight: 0.1982145486303527 - layer_range: [4, 8] model: ./mergekit/input_models/Roleplay-Llama-3-8B_213413727 parameters: density: 1.0 weight: 0.6843011350690802 - sources: - layer_range: [8, 12] model: ./mergekit/input_models/Llama-3-LewdPlay-8B-e3_2981937066 parameters: density: 0.7817511027396784 weight: 0.13053333213489704 - layer_range: [8, 12] model: ./mergekit/input_models/Llama-3-Unholy-8B-e4_1440388923 parameters: density: 0.6963703515864826 weight: 0.20525481492667985 - layer_range: [8, 12] model: ./mergekit/input_models/Roleplay-Llama-3-8B_213413727 parameters: density: 0.6983086326765777 weight: 0.5843953969574106 - sources: - layer_range: [12, 16] model: ./mergekit/input_models/Llama-3-LewdPlay-8B-e3_2981937066 parameters: density: 0.9632895768462915 weight: 0.2101146706607748 - layer_range: [12, 16] model: ./mergekit/input_models/Llama-3-Unholy-8B-e4_1440388923 parameters: density: 0.597557434542081 weight: 0.6728172621848589 - layer_range: [12, 16] model: ./mergekit/input_models/Roleplay-Llama-3-8B_213413727 parameters: density: 0.756263557607837 weight: 0.2581423726361908 - sources: - layer_range: [16, 20] model: ./mergekit/input_models/Llama-3-LewdPlay-8B-e3_2981937066 parameters: density: 1.0 weight: 0.2116035543552448 - layer_range: [16, 20] model: 
./mergekit/input_models/Llama-3-Unholy-8B-e4_1440388923 parameters: density: 1.0 weight: 0.22654226422958418 - layer_range: [16, 20] model: ./mergekit/input_models/Roleplay-Llama-3-8B_213413727 parameters: density: 0.8925914810507647 weight: 0.42243766315440867 - sources: - layer_range: [20, 24] model: ./mergekit/input_models/Llama-3-LewdPlay-8B-e3_2981937066 parameters: density: 0.7697608089825734 weight: 0.1535118632140203 - layer_range: [20, 24] model: ./mergekit/input_models/Llama-3-Unholy-8B-e4_1440388923 parameters: density: 0.9886758076773643 weight: 0.3305040603868546 - layer_range: [20, 24] model: ./mergekit/input_models/Roleplay-Llama-3-8B_213413727 parameters: density: 1.0 weight: 0.40670083428654535 - sources: - layer_range: [24, 28] model: ./mergekit/input_models/Llama-3-LewdPlay-8B-e3_2981937066 parameters: density: 1.0 weight: 0.4542810478500622 - layer_range: [24, 28] model: ./mergekit/input_models/Llama-3-Unholy-8B-e4_1440388923 parameters: density: 0.8330662483310117 weight: 0.2587495367324508 - layer_range: [24, 28] model: ./mergekit/input_models/Roleplay-Llama-3-8B_213413727 parameters: density: 0.9845313983551542 weight: 0.40378452705975915 - sources: - layer_range: [28, 32] model: ./mergekit/input_models/Llama-3-LewdPlay-8B-e3_2981937066 parameters: density: 1.0 weight: 0.2951962192288415 - layer_range: [28, 32] model: ./mergekit/input_models/Llama-3-Unholy-8B-e4_1440388923 parameters: density: 0.960315594933433 weight: 0.13142971773782525 - layer_range: [28, 32] model: ./mergekit/input_models/Roleplay-Llama-3-8B_213413727 parameters: density: 1.0 weight: 0.30838472094518804 ``` ## Support If you want to support me, you can [here](https://ko-fi.com/undiai).
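Since this repository ships GGUF quants, one way to run it locally is llama-cpp-python together with the Llama3 prompt template shown above. A sketch, assuming `pip install llama-cpp-python` and a downloaded quant file; the filename below is illustrative rather than taken from the repo listing:

```python
# Sketch of local inference on one of the GGUF quants with llama-cpp-python,
# using the card's Llama3 prompt template. The .gguf filename is illustrative.
from llama_cpp import Llama

llm = Llama(model_path="Llama-3-LewdPlay-8B-evo.Q4_K_M.gguf", n_ctx=8192)

prompt = (
    "<|begin_of_text|><|start_header_id|>system<|end_header_id|>\n\n"
    "You are a helpful assistant.<|eot_id|>"
    "<|start_header_id|>user<|end_header_id|>\n\n"
    "Write a short greeting.<|eot_id|>"
    "<|start_header_id|>assistant<|end_header_id|>\n\n"
)
out = llm(prompt, max_tokens=128, stop=["<|eot_id|>"])
print(out["choices"][0]["text"])
```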
tingting/llama3_lora_model_balanced_Data_240
tingting
2024-05-01T17:45:07Z
0
0
transformers
[ "transformers", "safetensors", "text-generation-inference", "unsloth", "llama", "trl", "en", "base_model:unsloth/llama-3-8b-bnb-4bit", "base_model:finetune:unsloth/llama-3-8b-bnb-4bit", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2024-05-01T17:44:57Z
--- language: - en license: apache-2.0 tags: - text-generation-inference - transformers - unsloth - llama - trl base_model: unsloth/llama-3-8b-bnb-4bit --- # Uploaded model - **Developed by:** tingting - **License:** apache-2.0 - **Finetuned from model :** unsloth/llama-3-8b-bnb-4bit This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
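The Unsloth cards in this dump stop at attribution and show no loading snippet; assuming this repo holds LoRA adapter weights trained on top of the 4-bit base named in the card (which the tags suggest), they can be attached with PEFT. A minimal sketch under that assumption:

```python
# Hedged sketch: attach the (assumed) LoRA adapter to the 4-bit base model
# named in the card. Requires a CUDA GPU and
# `pip install transformers peft accelerate bitsandbytes`.
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base = AutoModelForCausalLM.from_pretrained(
    "unsloth/llama-3-8b-bnb-4bit", device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained("unsloth/llama-3-8b-bnb-4bit")
model = PeftModel.from_pretrained(base, "tingting/llama3_lora_model_balanced_Data_240")
```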
nm-testing/granite-7b-lab-pruned50-quant-ds
nm-testing
2024-05-01T17:43:16Z
5
0
transformers
[ "transformers", "onnx", "llama", "text-generation", "deepsparse", "conversational", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2024-05-01T17:16:59Z
--- tags: - deepsparse --- Install: ``` pip install deepsparse[llm]==1.7 ``` Usage: ``` >>> from deepsparse import TextGeneration >>> model = TextGeneration("nm-testing/granite-7b-lab-pruned50-quant-ds") >>> model("Hello my name is") TextGenerationOutput( created=datetime.datetime(2024, 5, 1, 17, 41, 13, 176274), prompts='Hello my name is', generations=[GeneratedText(text='Alex and I am a senior in high school. I am taking a dual credit biology class this semester and I really need help with the lab. I am having trouble with the lab on dissociation and I was hoping that you could help me. The lab requires that we set up a reaction between a strong acid and a strong base and measure the pH of the solution at various points in the reaction.\n\nHere is a summary of the lab that I was given:\n\n1', score=None, finished=True, finished_reason='max_new_tokens')], input_tokens=None ) ```
Vexemous/distilgpt2-finetuned-general-stories-pos-long
Vexemous
2024-05-01T17:42:50Z
5
0
transformers
[ "transformers", "safetensors", "gpt2", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-05-01T17:42:39Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
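The card is a placeholder, but the tags identify a distilgpt2 text-generation fine-tune, so a one-line pipeline call is the natural smoke test (the prompt is illustrative):

```python
# Minimal generation sketch, inferred from the repo tags rather than the
# (placeholder) card: a distilgpt2 story-generation fine-tune.
from transformers import pipeline

gen = pipeline(
    "text-generation",
    model="Vexemous/distilgpt2-finetuned-general-stories-pos-long",
)
print(gen("Once upon a time", max_new_tokens=50)[0]["generated_text"])
```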
abdulelahagr/vit-base-brain-xray
abdulelahagr
2024-05-01T17:41:23Z
5
0
transformers
[ "transformers", "tensorboard", "safetensors", "vit", "image-classification", "generated_from_trainer", "dataset:imagefolder", "base_model:google/vit-base-patch16-224-in21k", "base_model:finetune:google/vit-base-patch16-224-in21k", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-classification
2024-04-26T22:39:07Z
--- license: apache-2.0 base_model: google/vit-base-patch16-224-in21k tags: - image-classification - generated_from_trainer datasets: - imagefolder metrics: - accuracy model-index: - name: vit-base-brain-xray results: - task: name: Image Classification type: image-classification dataset: name: sartajbhuvaji/Brain-Tumor-Classification type: imagefolder config: default split: Testing args: default metrics: - name: Accuracy type: accuracy value: 0.6903553299492385 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # vit-base-brain-xray This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the sartajbhuvaji/Brain-Tumor-Classification dataset. It achieves the following results on the evaluation set: - Loss: 0.9079 - Accuracy: 0.6904 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 4 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:------:|:----:|:---------------:|:--------:| | 0.2478 | 0.5556 | 100 | 0.9079 | 0.6904 | | 0.1499 | 1.1111 | 200 | 1.1543 | 0.7183 | | 0.0872 | 1.6667 | 300 | 1.1469 | 0.7614 | | 0.0118 | 2.2222 | 400 | 1.2361 | 0.7259 | | 0.0077 | 2.7778 | 500 | 1.2023 | 0.7665 | | 0.0057 | 3.3333 | 600 | 1.2470 | 0.7640 | | 0.0053 | 3.8889 | 700 | 1.2096 | 0.7766 | ### Framework versions - Transformers 4.40.1 - Pytorch 2.2.1+cu121 - Datasets 2.19.0 - Tokenizers 0.19.1
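A short inference sketch to accompany the training details above; assumes `transformers` and `pillow` are installed and `brain_scan.jpg` stands in for a real input image:

```python
# Classify a brain scan with the fine-tuned ViT via the pipeline API.
# The image path is illustrative; any PIL-readable file works.
from transformers import pipeline

classifier = pipeline("image-classification", model="abdulelahagr/vit-base-brain-xray")
for pred in classifier("brain_scan.jpg", top_k=4):
    print(f"{pred['label']}: {pred['score']:.3f}")
```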
mitultiwari/gen-z-translate-llama-3-instruct-v1
mitultiwari
2024-05-01T17:40:41Z
4
0
peft
[ "peft", "tensorboard", "safetensors", "trl", "sft", "generated_from_trainer", "dataset:generator", "base_model:meta-llama/Meta-Llama-3-8B-Instruct", "base_model:adapter:meta-llama/Meta-Llama-3-8B-Instruct", "license:other", "region:us" ]
null
2024-05-01T17:31:44Z
--- license: other library_name: peft tags: - trl - sft - generated_from_trainer base_model: meta-llama/Meta-Llama-3-8B-Instruct datasets: - generator model-index: - name: gen-z-translate-llama-3-instruct-v1 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # gen-z-translate-llama-3-instruct-v1 This model is a fine-tuned version of [meta-llama/Meta-Llama-3-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct) on the generator dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 2 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 4 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: constant - lr_scheduler_warmup_ratio: 0.03 - num_epochs: 10 - mixed_precision_training: Native AMP ### Training results ### Framework versions - PEFT 0.10.0 - Transformers 4.40.1 - Pytorch 2.2.1+cu121 - Datasets 2.19.0 - Tokenizers 0.19.1
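Because this is a PEFT (LoRA) adapter rather than a full model, it has to be loaded on top of its base; a sketch, assuming access to the gated `meta-llama/Meta-Llama-3-8B-Instruct` weights has been granted (the example prompt is illustrative):

```python
# Load the adapter onto its base model, then query it through the Llama 3
# chat template. Assumes `pip install transformers peft accelerate` and
# approved access to the gated base repo.
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "meta-llama/Meta-Llama-3-8B-Instruct"
tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id, device_map="auto")
model = PeftModel.from_pretrained(base, "mitultiwari/gen-z-translate-llama-3-instruct-v1")

messages = [{"role": "user", "content": "Translate into Gen-Z slang: 'That is impressive.'"}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
out = model.generate(input_ids, max_new_tokens=64)
print(tokenizer.decode(out[0][input_ids.shape[-1]:], skip_special_tokens=True))
```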
tingting/llama3_lora_model_balanced_Data_400
tingting
2024-05-01T17:39:52Z
0
0
transformers
[ "transformers", "safetensors", "text-generation-inference", "unsloth", "llama", "trl", "en", "base_model:unsloth/llama-3-8b-bnb-4bit", "base_model:finetune:unsloth/llama-3-8b-bnb-4bit", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2024-05-01T17:39:43Z
--- language: - en license: apache-2.0 tags: - text-generation-inference - transformers - unsloth - llama - trl base_model: unsloth/llama-3-8b-bnb-4bit --- # Uploaded model - **Developed by:** tingting - **License:** apache-2.0 - **Finetuned from model :** unsloth/llama-3-8b-bnb-4bit This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
SaimaAyub/roberta-base-finetuned-wikitext2
SaimaAyub
2024-05-01T17:39:11Z
59
0
transformers
[ "transformers", "tf", "roberta", "fill-mask", "generated_from_keras_callback", "base_model:FacebookAI/roberta-base", "base_model:finetune:FacebookAI/roberta-base", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2024-04-28T19:17:14Z
--- license: mit base_model: roberta-base tags: - generated_from_keras_callback model-index: - name: SaimaAyub/roberta-base-finetuned-wikitext2 results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # SaimaAyub/roberta-base-finetuned-wikitext2 This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 1.5061 - Validation Loss: 1.4342 - Epoch: 1 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 2e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0.01} - training_precision: float32 ### Training results | Train Loss | Validation Loss | Epoch | |:----------:|:---------------:|:-----:| | 1.5537 | 1.4617 | 0 | | 1.5061 | 1.4342 | 1 | ### Framework versions - Transformers 4.40.1 - TensorFlow 2.15.0 - Datasets 2.19.0 - Tokenizers 0.19.1
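A quick qualitative check of the masked-language-modelling head; since the checkpoint is TensorFlow (Keras), the pipeline is pinned to the TF framework, and the example sentence is illustrative:

```python
# Fill-mask smoke test on the fine-tuned RoBERTa. Assumes
# `pip install transformers tensorflow`; RoBERTa's mask token is <mask>.
from transformers import pipeline

fill = pipeline(
    "fill-mask",
    model="SaimaAyub/roberta-base-finetuned-wikitext2",
    framework="tf",
)
for pred in fill("The capital of France is <mask>."):
    print(pred["token_str"], round(pred["score"], 3))
```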
vtlustos/whisper-base
vtlustos
2024-05-01T17:35:08Z
76
0
transformers
[ "transformers", "pytorch", "tensorboard", "whisper", "automatic-speech-recognition", "generated_from_trainer", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2023-05-11T19:52:52Z
--- license: apache-2.0 tags: - generated_from_trainer metrics: - wer model-index: - name: whisper-base results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # whisper-base This model is a fine-tuned version of [openai/whisper-base](https://huggingface.co/openai/whisper-base) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.2522 - Wer: 23.1797 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - training_steps: 10000 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:-----:|:---------------:|:-------:| | 2.1114 | 0.0 | 1 | 2.3698 | 75.1864 | | 0.3272 | 0.29 | 1000 | 0.4182 | 37.7505 | | 0.251 | 0.58 | 2000 | 0.3408 | 30.9679 | | 0.2207 | 0.88 | 3000 | 0.3059 | 28.3058 | | 0.1779 | 1.17 | 4000 | 0.2890 | 26.7555 | | 0.1691 | 1.46 | 5000 | 0.2742 | 25.2099 | | 0.1622 | 1.75 | 6000 | 0.2645 | 24.6840 | | 0.1397 | 2.04 | 7000 | 0.2587 | 23.8812 | | 0.1394 | 2.34 | 8000 | 0.2562 | 23.6586 | | 0.1361 | 2.63 | 9000 | 0.2536 | 23.4633 | | 0.1356 | 2.92 | 10000 | 0.2522 | 23.1797 | ### Framework versions - Transformers 4.27.4 - Pytorch 2.0.0 - Datasets 2.11.0 - Tokenizers 0.13.3
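For completeness, transcription with this checkpoint reduces to one pipeline call; a sketch assuming `ffmpeg` is available for audio decoding and `sample.wav` stands in for a real recording:

```python
# Transcribe an audio file with the fine-tuned Whisper model.
# Assumes `pip install transformers torch` and ffmpeg on the PATH;
# the audio path is illustrative.
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="vtlustos/whisper-base")
print(asr("sample.wav")["text"])
```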
tingting/llama3_lora_model_balanced_Data_600
tingting
2024-05-01T17:32:26Z
0
0
transformers
[ "transformers", "safetensors", "text-generation-inference", "unsloth", "llama", "trl", "en", "base_model:unsloth/llama-3-8b-bnb-4bit", "base_model:finetune:unsloth/llama-3-8b-bnb-4bit", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2024-05-01T17:32:16Z
--- language: - en license: apache-2.0 tags: - text-generation-inference - transformers - unsloth - llama - trl base_model: unsloth/llama-3-8b-bnb-4bit --- # Uploaded model - **Developed by:** tingting - **License:** apache-2.0 - **Finetuned from model :** unsloth/llama-3-8b-bnb-4bit This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
RichardErkhov/kalytm_-_nous-7-4bits
RichardErkhov
2024-05-01T17:31:16Z
5
0
transformers
[ "transformers", "safetensors", "stablelm", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "4-bit", "bitsandbytes", "region:us" ]
text-generation
2024-05-01T17:28:11Z
Quantization made by Richard Erkhov. [Github](https://github.com/RichardErkhov) [Discord](https://discord.gg/pvy7H8DZMG) [Request more models](https://github.com/RichardErkhov/quant_request) nous-7 - bnb 4bits - Model creator: https://huggingface.co/kalytm/ - Original model: https://huggingface.co/kalytm/nous-7/ Original model description: --- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. 
--> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
divyanshu1807gupta/Book-recommender
divyanshu1807gupta
2024-05-01T17:31:13Z
0
0
null
[ "license:apache-2.0", "region:us" ]
null
2024-05-01T17:31:13Z
--- license: apache-2.0 ---
GURU369/llava-1.5-7b-hf-ft-merged
GURU369
2024-05-01T17:30:31Z
0
0
transformers
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2024-05-01T17:29:59Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
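The card gives no usage details, so the following is inferred purely from the repo name (a merged LLaVA-1.5-7B fine-tune) and should be treated as a hypothetical sketch; the prompt format and image path are illustrative:

```python
# Hedged sketch, inferred only from the repo name; the card itself gives no
# usage details. Assumes `pip install transformers pillow accelerate`,
# a CUDA GPU, and a local image file.
import torch
from PIL import Image
from transformers import AutoProcessor, LlavaForConditionalGeneration

repo = "GURU369/llava-1.5-7b-hf-ft-merged"
processor = AutoProcessor.from_pretrained(repo)
model = LlavaForConditionalGeneration.from_pretrained(
    repo, torch_dtype=torch.float16, device_map="auto"
)

prompt = "USER: <image>\nDescribe this picture. ASSISTANT:"
image = Image.open("example.jpg")
inputs = processor(text=prompt, images=image, return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=64)
print(processor.decode(out[0], skip_special_tokens=True))
```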
RichardErkhov/kalytm_-_nous-12-8bits
RichardErkhov
2024-05-01T17:28:03Z
5
0
transformers
[ "transformers", "safetensors", "stablelm", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "8-bit", "bitsandbytes", "region:us" ]
text-generation
2024-05-01T17:25:16Z
Quantization made by Richard Erkhov. [Github](https://github.com/RichardErkhov) [Discord](https://discord.gg/pvy7H8DZMG) [Request more models](https://github.com/RichardErkhov/quant_request) nous-12 - bnb 8bits - Model creator: https://huggingface.co/kalytm/ - Original model: https://huggingface.co/kalytm/nous-12/ Original model description: --- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. 
--> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
wendy41/llama-2-koen-user111-80-nll
wendy41
2024-05-01T17:27:55Z
0
0
transformers
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2024-04-22T11:35:28Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
YYYYYYibo/nash_dpo_merge_iter_3
YYYYYYibo
2024-05-01T17:27:06Z
0
0
peft
[ "peft", "safetensors", "mistral", "alignment-handbook", "generated_from_trainer", "trl", "dpo", "dataset:updated", "dataset:original", "base_model:alignment-handbook/zephyr-7b-sft-full", "base_model:adapter:alignment-handbook/zephyr-7b-sft-full", "license:apache-2.0", "region:us" ]
null
2024-05-01T14:45:26Z
--- license: apache-2.0 library_name: peft tags: - alignment-handbook - generated_from_trainer - trl - dpo base_model: alignment-handbook/zephyr-7b-sft-full datasets: - updated - original model-index: - name: nash_dpo_merge_iter_3 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # nash_dpo_merge_iter_3 This model is a fine-tuned version of [YYYYYYibo/nash_dpo_merge_iter_2](https://huggingface.co/YYYYYYibo/nash_dpo_merge_iter_2) on the updated and the original datasets. It achieves the following results on the evaluation set: - Loss: 0.5506 - Rewards/chosen: -0.4423 - Rewards/rejected: -0.9695 - Rewards/accuracies: 0.7060 - Rewards/margins: 0.5272 - Logps/rejected: -386.9142 - Logps/chosen: -353.7201 - Logits/rejected: 0.3453 - Logits/chosen: -0.2965 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-06 - train_batch_size: 2 - eval_batch_size: 2 - seed: 42 - distributed_type: multi-GPU - num_devices: 4 - gradient_accumulation_steps: 16 - total_train_batch_size: 128 - total_eval_batch_size: 8 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 1 ### Training results | Training Loss | Epoch | Step | Validation Loss | Rewards/chosen | Rewards/rejected | Rewards/accuracies | Rewards/margins | Logps/rejected | Logps/chosen | Logits/rejected | Logits/chosen | |:-------------:|:-----:|:----:|:---------------:|:--------------:|:----------------:|:------------------:|:---------------:|:--------------:|:------------:|:---------------:|:-------------:| | 0.5641 | 0.49 | 100 | 0.5612 | -0.5147 | -0.9965 | 0.7020 | 0.4819 | -389.6182 | -360.9578 | 0.4191 | -0.1955 | | 0.5515 | 0.98 | 200 | 0.5506 | -0.4423 | -0.9695 | 0.7060 | 0.5272 | -386.9142 | -353.7201 | 0.3453 | -0.2965 | ### Framework versions - PEFT 0.7.1 - Transformers 4.36.2 - Pytorch 2.1.2+cu121 - Datasets 2.14.6 - Tokenizers 0.15.2
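This record ships PEFT adapter weights rather than a merged checkpoint, so inference requires attaching the adapter to its base model. A minimal sketch, assuming the adapter is published under `YYYYYYibo/nash_dpo_merge_iter_3` and targets the `alignment-handbook/zephyr-7b-sft-full` base named in the metadata:

```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "alignment-handbook/zephyr-7b-sft-full"  # base model from the card metadata
adapter_id = "YYYYYYibo/nash_dpo_merge_iter_3"     # assumed adapter repo id (this record)

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype=torch.bfloat16, device_map="auto")
model = PeftModel.from_pretrained(base, adapter_id)  # attach the DPO-trained LoRA adapter

prompt = "Explain direct preference optimization in one sentence."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=64)[0], skip_special_tokens=True))
```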
RichardErkhov/kalytm_-_nous-12-4bits
RichardErkhov
2024-05-01T17:25:09Z
5
0
transformers
[ "transformers", "safetensors", "stablelm", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "4-bit", "bitsandbytes", "region:us" ]
text-generation
2024-05-01T17:22:30Z
Quantization made by Richard Erkhov. [Github](https://github.com/RichardErkhov) [Discord](https://discord.gg/pvy7H8DZMG) [Request more models](https://github.com/RichardErkhov/quant_request) nous-12 - bnb 4bits - Model creator: https://huggingface.co/kalytm/ - Original model: https://huggingface.co/kalytm/nous-12/ Original model description: --- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. 
--> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
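Bitsandbytes exports like this one can typically be loaded straight from the Hub, since the 4-bit `quantization_config` is stored with the checkpoint. A minimal sketch, assuming the repo id above and a CUDA device (bitsandbytes requires one):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "RichardErkhov/kalytm_-_nous-12-4bits"  # 4-bit export from this record

tokenizer = AutoTokenizer.from_pretrained(repo_id)
# The saved bitsandbytes 4-bit quantization config is picked up automatically;
# recent transformers versions support the StableLM architecture natively.
model = AutoModelForCausalLM.from_pretrained(repo_id, device_map="auto", torch_dtype=torch.float16)

inputs = tokenizer("Hello,", return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```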
cancerfarore/bert-base-uncased-CancerFarore-Model
cancerfarore
2024-05-01T17:22:57Z
5
0
transformers
[ "transformers", "tf", "bert", "question-answering", "generated_from_keras_callback", "base_model:google-bert/bert-base-uncased", "base_model:finetune:google-bert/bert-base-uncased", "license:apache-2.0", "endpoints_compatible", "region:us" ]
question-answering
2024-05-01T15:01:01Z
--- license: apache-2.0 base_model: google-bert/bert-base-uncased tags: - generated_from_keras_callback model-index: - name: cancerfarore/bert-base-uncased-CancerFarore-Model results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # cancerfarore/bert-base-uncased-CancerFarore-Model This model is a fine-tuned version of [google-bert/bert-base-uncased](https://huggingface.co/google-bert/bert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 0.0652 - Train End Logits Accuracy: 0.9800 - Train Start Logits Accuracy: 0.9786 - Validation Loss: 2.5446 - Validation End Logits Accuracy: 0.6075 - Validation Start Logits Accuracy: 0.6041 - Epoch: 9 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': True, 'is_legacy_optimizer': False, 'learning_rate': {'module': 'keras.optimizers.schedules', 'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 18960, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, 'registered_name': None}, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False} - training_precision: float32 ### Training results | Train Loss | Train End Logits Accuracy | Train Start Logits Accuracy | Validation Loss | Validation End Logits Accuracy | Validation Start Logits Accuracy | Epoch | |:----------:|:-------------------------:|:---------------------------:|:---------------:|:------------------------------:|:--------------------------------:|:-----:| | 1.8107 | 0.4921 | 0.4706 | 1.4353 | 0.5224 | 0.5220 | 0 | | 1.0870 | 0.6675 | 0.6432 | 1.2412 | 0.6071 | 0.6127 | 1 | | 0.7170 | 0.7809 | 0.7596 | 1.3592 | 0.6071 | 0.5950 | 2 | | 0.4657 | 0.8583 | 0.8418 | 1.4376 | 0.6266 | 0.6187 | 3 | | 0.3015 | 0.9095 | 0.8967 | 1.7133 | 0.6289 | 0.6233 | 4 | | 0.2080 | 0.9388 | 0.9279 | 2.0004 | 0.6127 | 0.5999 | 5 | | 0.1521 | 0.9534 | 0.9488 | 2.0970 | 0.6157 | 0.6067 | 6 | | 0.1054 | 0.9666 | 0.9650 | 2.3507 | 0.6187 | 0.6120 | 7 | | 0.0850 | 0.9741 | 0.9728 | 2.5902 | 0.5977 | 0.5977 | 8 | | 0.0652 | 0.9800 | 0.9786 | 2.5446 | 0.6075 | 0.6041 | 9 | ### Framework versions - Transformers 4.40.1 - TensorFlow 2.15.0 - Datasets 2.19.0 - Tokenizers 0.19.1
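Since this record holds TensorFlow weights for extractive question answering, the standard QA pipeline applies with the TF backend requested explicitly. A minimal sketch, assuming the repo id above; the training log's gap between train (~0.98) and validation (~0.61) accuracy suggests overfitting, so outputs are worth double-checking:

```python
from transformers import pipeline

# "tf" tag on the repo, so request the TensorFlow backend explicitly.
qa = pipeline(
    "question-answering",
    model="cancerfarore/bert-base-uncased-CancerFarore-Model",  # repo id from this record
    framework="tf",
)

result = qa(
    question="What task was the model fine-tuned for?",
    context="This checkpoint fine-tunes BERT for extractive question answering.",
)
print(result["answer"], result["score"])
```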
RichardErkhov/kalytm_-_nous-8-4bits
RichardErkhov
2024-05-01T17:21:58Z
5
0
transformers
[ "transformers", "safetensors", "stablelm", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "4-bit", "bitsandbytes", "region:us" ]
text-generation
2024-05-01T17:20:06Z
Quantization made by Richard Erkhov. [Github](https://github.com/RichardErkhov) [Discord](https://discord.gg/pvy7H8DZMG) [Request more models](https://github.com/RichardErkhov/quant_request) nous-8 - bnb 4bits - Model creator: https://huggingface.co/kalytm/ - Original model: https://huggingface.co/kalytm/nous-8/ Original model description: --- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. 
--> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
PQlet/T5base-lora-sumarizationTables-v2-aug2-PermuteCols-trainer
PQlet
2024-05-01T17:20:55Z
1
0
peft
[ "peft", "safetensors", "arxiv:1910.09700", "base_model:google-t5/t5-base", "base_model:adapter:google-t5/t5-base", "region:us" ]
null
2024-04-28T04:45:27Z
--- library_name: peft base_model: t5-base --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ### Framework versions - PEFT 0.10.0
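This is a LoRA adapter on `t5-base` (PEFT 0.10.0), so it loads on top of the seq2seq base model. A minimal sketch, assuming the adapter repo id above; the linearized-table input format is hypothetical, since the card does not document one:

```python
from peft import PeftModel
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

base_id = "t5-base"
adapter_id = "PQlet/T5base-lora-sumarizationTables-v2-aug2-PermuteCols-trainer"  # repo id from this record

tokenizer = AutoTokenizer.from_pretrained(base_id)
model = PeftModel.from_pretrained(AutoModelForSeq2SeqLM.from_pretrained(base_id), adapter_id)

# Hypothetical linearization of a table; the real expected format is undocumented.
table_text = "col: name | city | age row: Ada | Paris | 36 row: Bo | Oslo | 29"
inputs = tokenizer("summarize: " + table_text, return_tensors="pt")
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=48)[0], skip_special_tokens=True))
```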
StevenSteel7/t5-small-finetuned-xsum
StevenSteel7
2024-05-01T17:20:48Z
6
0
transformers
[ "transformers", "pytorch", "tensorboard", "t5", "text2text-generation", "generated_from_trainer", "dataset:xsum", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text2text-generation
2024-05-01T16:26:33Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - xsum model-index: - name: t5-small-finetuned-xsum results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # t5-small-finetuned-xsum This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the xsum dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 2 - eval_batch_size: 2 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 1 ### Framework versions - Transformers 4.30.0 - Pytorch 2.3.0+cu121 - Datasets 2.19.0 - Tokenizers 0.13.3
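As a full fine-tune of `t5-small` on XSum, this checkpoint works with the stock summarization pipeline. A minimal sketch, assuming the repo id above:

```python
from transformers import pipeline

summarizer = pipeline("summarization", model="StevenSteel7/t5-small-finetuned-xsum")

article = (
    "The council approved the new cycling scheme on Tuesday, with work expected "
    "to begin next spring and finish within two years."
)
# XSum targets one-sentence summaries, so keep max_length short.
print(summarizer(article, max_length=30, min_length=5, do_sample=False)[0]["summary_text"])
```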
lluvecwonv/grad_ascent_2e-05_WikiMIA_QA_256_50
lluvecwonv
2024-05-01T17:19:27Z
0
0
peft
[ "peft", "safetensors", "generated_from_trainer", "base_model:openlm-research/open_llama_7b", "base_model:adapter:openlm-research/open_llama_7b", "region:us" ]
null
2024-05-01T17:19:14Z
--- library_name: peft tags: - generated_from_trainer base_model: openlm-research/open_llama_7b model-index: - name: grad_ascent_2e-05_WikiMIA_QA_256_50 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # grad_ascent_2e-05_WikiMIA_QA_256_50 This model was trained from scratch on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 4 - eval_batch_size: 4 - seed: 42 - distributed_type: multi-GPU - num_devices: 2 - gradient_accumulation_steps: 4 - total_train_batch_size: 32 - total_eval_batch_size: 8 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 26 - training_steps: 1337 ### Training results ### Framework versions - PEFT 0.10.0 - Transformers 4.41.0.dev0 - Pytorch 2.2.1+cu121 - Datasets 2.19.0 - Tokenizers 0.19.1
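The name `grad_ascent` points at gradient-ascent unlearning, where the language-modeling loss on a forget set is maximized rather than minimized. A minimal sketch of one such training step, for illustration only; the actual recipe behind this checkpoint is not documented in the card:

```python
import torch

def grad_ascent_step(model, batch, optimizer):
    """One gradient-ascent (unlearning) step: push the model *away* from this batch."""
    optimizer.zero_grad()
    loss = model(**batch).loss   # standard causal-LM loss on the forget set
    (-loss).backward()           # negate the loss so the optimizer ascends it
    torch.nn.utils.clip_grad_norm_(model.parameters(), 1.0)
    optimizer.step()
    return loss.item()
```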
RichardErkhov/kalytm_-_nous-14-8bits
RichardErkhov
2024-05-01T17:18:52Z
5
0
transformers
[ "transformers", "safetensors", "stablelm", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "8-bit", "bitsandbytes", "region:us" ]
text-generation
2024-05-01T17:15:28Z
Quantization made by Richard Erkhov. [Github](https://github.com/RichardErkhov) [Discord](https://discord.gg/pvy7H8DZMG) [Request more models](https://github.com/RichardErkhov/quant_request) nous-14 - bnb 8bits - Model creator: https://huggingface.co/kalytm/ - Original model: https://huggingface.co/kalytm/nous-14/ Original model description: --- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. 
--> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
Novora/CodeClassifier-v1-Tiny
Novora
2024-05-01T17:18:51Z
0
0
null
[ "pytorch", "safetensors", "text-classification", "dataset:Novora/CodeClassifier_v1", "license:apache-2.0", "region:eu" ]
text-classification
2024-04-29T12:05:08Z
---
license: apache-2.0
datasets:
- Novora/CodeClassifier_v1
pipeline_tag: text-classification
---
# Introduction
Novora Code Classifier v1 Tiny is a tiny `Text Classification` model that classifies a given code text input under 1 of `31` classes (programming languages). The model is designed to run on CPU, but it runs optimally on GPU.
# Info
- 1 of 31 classes as output
- 512-token input dimension
- 64 hidden dimensions
- 2 linear layers
- The `snowflake-arctic-embed-xs` model is used as the embeddings model.
- Dataset split into an 80% training set and a 20% testing set.
- The combined training and test data comes to around 1,000 chunks per programming language, 31,100 chunks (entries) in total; each chunk is a 512-token snippet of the code.
- The released checkpoint was picked from the 18th epoch of the 20 trained.
# Architecture
The `CodeClassifier-v1-Tiny` model employs a neural network architecture optimized for text classification tasks, specifically for classifying programming languages from code snippets. This model includes:
- **Bidirectional LSTM Feature Extractor**: This bidirectional LSTM layer processes input embeddings, effectively capturing contextual relationships in both forward and reverse directions within the code snippets.
- **Fully Connected Layers**: The network includes two linear layers. The first projects the pooled features into a hidden feature space, and the second maps these to the output classes, which correspond to different programming languages. A dropout layer with a rate of 0.5 between these layers helps mitigate overfitting.

The model's bidirectional nature and architectural components make it adept at understanding the syntax and structure crucial for code classification.
# Testing/Training Datasets
The table below lists the samples that entered the training/testing pipeline; it is a very small amount.

| Language     | Testing Count | Training Count |
|--------------|---------------|----------------|
| Ada          | 20            | 80             |
| Assembly     | 20            | 80             |
| C            | 20            | 80             |
| C#           | 20            | 80             |
| C++          | 20            | 80             |
| COBOL        | 14            | 55             |
| Common Lisp  | 20            | 80             |
| Dart         | 20            | 80             |
| Erlang       | 20            | 80             |
| F#           | 20            | 80             |
| Go           | 20            | 80             |
| Haskell      | 20            | 80             |
| Java         | 20            | 80             |
| JavaScript   | 20            | 80             |
| Julia        | 20            | 80             |
| Kotlin       | 20            | 80             |
| Lua          | 20            | 80             |
| MATLAB       | 20            | 80             |
| PHP          | 20            | 80             |
| Perl         | 20            | 80             |
| Prolog       | 1             | 4              |
| Python       | 20            | 80             |
| R            | 20            | 80             |
| Ruby         | 20            | 80             |
| Rust         | 20            | 80             |
| SQL          | 20            | 80             |
| Scala        | 20            | 80             |
| Swift        | 20            | 80             |
| TypeScript   | 20            | 80             |
# Example Code
```python
import torch
import torch.nn as nn
from pathlib import Path
from safetensors.torch import load_file
from transformers import AutoTokenizer, AutoModel


class CodeClassifier(nn.Module):
    def __init__(self, num_classes, embedding_dim, hidden_dim, num_layers, bidirectional=False):
        super(CodeClassifier, self).__init__()
        self.feature_extractor = nn.LSTM(embedding_dim, hidden_dim, num_layers,
                                         batch_first=True, bidirectional=bidirectional)
        self.dropout = nn.Dropout(0.5)  # Reintroduce dropout
        self.fc1 = nn.Linear(hidden_dim * (2 if bidirectional else 1), hidden_dim)  # Intermediate layer
        self.fc2 = nn.Linear(hidden_dim, num_classes)  # Output layer

    def forward(self, x):
        x = x.unsqueeze(1)  # Add sequence dimension
        x, _ = self.feature_extractor(x)
        x = x.squeeze(1)  # Remove sequence dimension
        x = self.fc1(x)
        x = self.dropout(x)  # Apply dropout
        x = self.fc2(x)
        return x


def infer(text, model_path, embedding_model_name):
    device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

    # Load tokenizer and embedding model
    tokenizer = AutoTokenizer.from_pretrained(embedding_model_name)
    embedding_model = AutoModel.from_pretrained(embedding_model_name).to(device)
    embedding_model.eval()

    # Prepare inputs
    inputs = tokenizer(text, return_tensors="pt", padding=True, truncation=True)
    inputs = {k: v.to(device) for k, v in inputs.items()}

    # Generate embeddings (CLS-token vector of the embedding model)
    with torch.no_grad():
        embeddings = embedding_model(**inputs)[0][:, 0]

    # Load classifier weights; the checkpoint is a .safetensors file, so it is
    # read with safetensors rather than torch.load
    model = CodeClassifier(num_classes=31, embedding_dim=embeddings.size(-1),
                           hidden_dim=64, num_layers=2, bidirectional=True)
    model.load_state_dict(load_file(model_path, device=str(device)))
    model = model.to(device)
    model.eval()

    # Predict class
    with torch.no_grad():
        output = model(embeddings)
        _, predicted = torch.max(output, dim=1)

    # Language labels
    languages = [
        'Ada', 'Assembly', 'C', 'C#', 'C++', 'COBOL', 'Common Lisp', 'Dart',
        'Erlang', 'F#', 'Fortran', 'Go', 'Haskell', 'Java', 'JavaScript',
        'Julia', 'Kotlin', 'Lua', 'MATLAB', 'Objective-C', 'PHP', 'Perl',
        'Prolog', 'Python', 'R', 'Ruby', 'Rust', 'SQL', 'Scala', 'Swift',
        'TypeScript'
    ]
    return languages[predicted.item()]


# Example usage
if __name__ == "__main__":
    example_text = "print('Hello, world!')"  # Replace with actual text for inference
    model_file_path = Path("./model.safetensors")
    predicted_language = infer(example_text, model_file_path, "Snowflake/snowflake-arctic-embed-xs")
    print(f"Predicted programming language: {predicted_language}")
```
srbdtwentyfour/mystery-llama-3-8b-4-bit
srbdtwentyfour
2024-05-01T17:18:51Z
5
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "text-generation-inference", "unsloth", "trl", "conversational", "en", "base_model:unsloth/llama-3-8b-Instruct-bnb-4bit", "base_model:quantized:unsloth/llama-3-8b-Instruct-bnb-4bit", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "4-bit", "bitsandbytes", "region:us" ]
text-generation
2024-05-01T17:01:36Z
--- language: - en license: apache-2.0 tags: - text-generation-inference - transformers - unsloth - llama - trl base_model: unsloth/llama-3-8b-Instruct-bnb-4bit --- # Uploaded model - **Developed by:** srbdtwentyfour - **License:** apache-2.0 - **Finetuned from model :** unsloth/llama-3-8b-Instruct-bnb-4bit This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
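Unsloth exports like this one are usually loaded through `FastLanguageModel`, which restores the 4-bit weights and any LoRA modules in one call. A minimal sketch, assuming the repo id above:

```python
from unsloth import FastLanguageModel

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="srbdtwentyfour/mystery-llama-3-8b-4-bit",  # repo id from this record
    max_seq_length=2048,
    load_in_4bit=True,
)
FastLanguageModel.for_inference(model)  # switch kernels/adapters into inference mode

inputs = tokenizer("Who wrote the note?", return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=48)[0], skip_special_tokens=True))
```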
lamm-mit/Bioinspired-Phi-3-mini-128k
lamm-mit
2024-05-01T17:18:09Z
18
0
transformers
[ "transformers", "safetensors", "gguf", "phi3", "text-generation", "biology", "materials science", "code", "scientific AI", "biological materials", "bioinspiration", "machine learning", "generative", "conversational", "custom_code", "en", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-04-27T10:35:30Z
--- library_name: transformers language: - en tags: - biology - materials science - code - scientific AI - biological materials - bioinspiration - machine learning - generative --- # BioinspiredLLM: Conversational Large Language Model for the Mechanics of Biological and Bio-Inspired Materials Reference: R. Luu and M.J. Buehler, "BioinspiredLLM: Conversational Large Language Model for the Mechanics of Biological and Bio-Inspired Materials," Adv. Science, 2023, DOI: https://doi.org/10.1002/advs.202306724 Abstract: The study of biological materials and bio-inspired materials science is well established; however, surprisingly little knowledge is systematically translated to engineering solutions. To accelerate discovery and guide insights, an open-source autoregressive transformer large language model (LLM), BioinspiredLLM, is reported. The model is finetuned with a corpus of over a thousand peer-reviewed articles in the field of structural biological and bio-inspired materials and can be prompted to recall information, assist with research tasks, and function as an engine for creativity. The model has proven that it is able to accurately recall information about biological materials and is further strengthened with enhanced reasoning ability, as well as with Retrieval-Augmented Generation (RAG) to incorporate new data during generation that can also help to traceback sources, update the knowledge base, and connect knowledge domains. BioinspiredLLM also has shown to develop sound hypotheses regarding biological materials design and remarkably so for materials that have never been explicitly studied before. Lastly, the model shows impressive promise in collaborating with other generative artificial intelligence models in a workflow that can reshape the traditional materials design process. This collaborative generative artificial intelligence method can stimulate and enhance bio-inspired materials design workflows. Biological materials are at a critical intersection of multiple scientific fields and models like BioinspiredLLM help to connect knowledge domains. ![image/png](https://cdn-uploads.huggingface.co/production/uploads/623ce1c6b66fedf374859fe7/Xdp_nCYiF2IAPamG5ffIC.png) # Model Card for Model ID Fine-tuned LLM with domain knowledge in biological materials, mechanics of materials, modeling and simulation, and related fields. ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. 
--> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. 
--> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
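The card's "How to Get Started" section is still a placeholder. A minimal generation sketch, assuming the repo id above and that the repo's custom Phi-3 code is trusted (per the `custom_code` tag) and ships a chat template:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "lamm-mit/Bioinspired-Phi-3-mini-128k"

tokenizer = AutoTokenizer.from_pretrained(repo_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    repo_id, torch_dtype=torch.bfloat16, device_map="auto", trust_remote_code=True
)

messages = [{"role": "user", "content": "Why is nacre tough despite being mostly brittle mineral?"}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
print(tokenizer.decode(model.generate(inputs, max_new_tokens=128)[0], skip_special_tokens=True))
```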
vwxyzjn/ppo_zephyr310
vwxyzjn
2024-05-01T17:17:22Z
5
0
transformers
[ "transformers", "tensorboard", "safetensors", "mistral", "text-generation", "generated_from_trainer", "conversational", "base_model:HuggingFaceH4/mistral-7b-sft-beta", "base_model:finetune:HuggingFaceH4/mistral-7b-sft-beta", "license:mit", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-05-01T17:16:23Z
--- license: mit base_model: HuggingFaceH4/mistral-7b-sft-beta tags: - generated_from_trainer model-index: - name: ppo_zephyr310 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # ppo_zephyr310 This model is a fine-tuned version of [HuggingFaceH4/mistral-7b-sft-beta](https://huggingface.co/HuggingFaceH4/mistral-7b-sft-beta) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 3e-06 - train_batch_size: 1 - eval_batch_size: 8 - seed: 42 - distributed_type: multi-GPU - num_devices: 7 - gradient_accumulation_steps: 32 - total_train_batch_size: 224 - total_eval_batch_size: 56 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3.0 ### Framework versions - Transformers 4.40.1 - Pytorch 2.2.1+cu121 - Datasets 2.18.0 - Tokenizers 0.19.1
KevinLiuR/style-mixed-llama3
KevinLiuR
2024-05-01T17:16:46Z
0
0
peft
[ "peft", "tensorboard", "safetensors", "trl", "sft", "generated_from_trainer", "dataset:generator", "base_model:meta-llama/Meta-Llama-3-8B", "base_model:adapter:meta-llama/Meta-Llama-3-8B", "license:other", "region:us" ]
null
2024-04-30T01:13:12Z
--- license: other library_name: peft tags: - trl - sft - generated_from_trainer base_model: meta-llama/Meta-Llama-3-8B datasets: - generator model-index: - name: style-mixed-llama3 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # style-mixed-llama3 This model is a fine-tuned version of [meta-llama/Meta-Llama-3-8B](https://huggingface.co/meta-llama/Meta-Llama-3-8B) on the generator dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 3 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 3 - total_train_batch_size: 9 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: constant - lr_scheduler_warmup_ratio: 0.03 - num_epochs: 3 - mixed_precision_training: Native AMP ### Training results ### Framework versions - PEFT 0.10.0 - Transformers 4.40.1 - Pytorch 2.3.0 - Datasets 2.19.0 - Tokenizers 0.19.1
RichardErkhov/kalytm_-_nous-14-4bits
RichardErkhov
2024-05-01T17:15:21Z
5
0
transformers
[ "transformers", "safetensors", "stablelm", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "4-bit", "bitsandbytes", "region:us" ]
text-generation
2024-05-01T17:12:57Z
Quantization made by Richard Erkhov. [Github](https://github.com/RichardErkhov) [Discord](https://discord.gg/pvy7H8DZMG) [Request more models](https://github.com/RichardErkhov/quant_request) nous-14 - bnb 4bits - Model creator: https://huggingface.co/kalytm/ - Original model: https://huggingface.co/kalytm/nous-14/ Original model description: --- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. 
--> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
CarolLiu999/unsloth-llama-3-8b-Instruct-bnb-4bit-lora-TWhealthCare
CarolLiu999
2024-05-01T17:14:49Z
0
0
transformers
[ "transformers", "safetensors", "text-generation-inference", "unsloth", "llama", "trl", "en", "base_model:unsloth/llama-3-8b-Instruct-bnb-4bit", "base_model:finetune:unsloth/llama-3-8b-Instruct-bnb-4bit", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2024-05-01T17:14:24Z
--- language: - en license: apache-2.0 tags: - text-generation-inference - transformers - unsloth - llama - trl base_model: unsloth/llama-3-8b-Instruct-bnb-4bit --- # Uploaded model - **Developed by:** CarolLiu999 - **License:** apache-2.0 - **Finetuned from model :** unsloth/llama-3-8b-Instruct-bnb-4bit This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
sreddy109/m3-test-250
sreddy109
2024-05-01T17:08:28Z
5
0
transformers
[ "transformers", "safetensors", "xlm-roberta", "text-classification", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2024-05-01T17:07:32Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
sreddy109/m3-test-200
sreddy109
2024-05-01T17:08:17Z
5
0
transformers
[ "transformers", "safetensors", "xlm-roberta", "text-classification", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2024-05-01T17:07:27Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
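The card above is the blank template, so the only concrete facts come from the record fields: an `xlm-roberta` text-classification checkpoint. A minimal inference sketch under those assumptions — the label set is unknown, so the outputs are whatever labels the checkpoint defines:

```python
from transformers import pipeline

# Repo id taken from the record header above; everything else is an assumption.
clf = pipeline("text-classification", model="sreddy109/m3-test-200")

# Returns [{'label': ..., 'score': ...}] with whatever labels the checkpoint was trained with.
print(clf("An example sentence to score."))
```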
RichardErkhov/kalytm_-_nous-4-4bits
RichardErkhov
2024-05-01T17:05:44Z
5
0
transformers
[ "transformers", "safetensors", "stablelm", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "4-bit", "bitsandbytes", "region:us" ]
text-generation
2024-05-01T17:03:49Z
Quantization made by Richard Erkhov. [Github](https://github.com/RichardErkhov) [Discord](https://discord.gg/pvy7H8DZMG) [Request more models](https://github.com/RichardErkhov/quant_request) nous-4 - bnb 4bits - Model creator: https://huggingface.co/kalytm/ - Original model: https://huggingface.co/kalytm/nous-4/ Original model description: --- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. 
--> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
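The tags on this record mark it as a checkpoint already serialized with bitsandbytes 4-bit weights, so loading it should not require passing a quantization config again — that config ships inside the checkpoint. A minimal loading sketch (the prompt is illustrative; `device_map="auto"` assumes a CUDA GPU and an installed `bitsandbytes`):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "RichardErkhov/kalytm_-_nous-4-4bits"
tokenizer = AutoTokenizer.from_pretrained(model_id)

# The 4-bit bitsandbytes quantization config is stored in the checkpoint itself,
# so a plain from_pretrained with a device map is enough.
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

inputs = tokenizer("Hello,", return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```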
sreddy109/m3-test-150
sreddy109
2024-05-01T17:04:59Z
5
0
transformers
[ "transformers", "safetensors", "xlm-roberta", "text-classification", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2024-05-01T17:04:08Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
sreddy109/m3-test-50
sreddy109
2024-05-01T17:04:47Z
5
0
transformers
[ "transformers", "safetensors", "xlm-roberta", "text-classification", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2024-05-01T17:03:47Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
RichardErkhov/PathFinderKR_-_Waktaverse-Llama-3-KO-8B-Instruct-gguf
RichardErkhov
2024-05-01T17:01:52Z
47
0
null
[ "gguf", "endpoints_compatible", "region:us", "conversational" ]
null
2024-05-01T14:59:03Z
Quantization made by Richard Erkhov.

[Github](https://github.com/RichardErkhov)

[Discord](https://discord.gg/pvy7H8DZMG)

[Request more models](https://github.com/RichardErkhov/quant_request)

Waktaverse-Llama-3-KO-8B-Instruct - GGUF
- Model creator: https://huggingface.co/PathFinderKR/
- Original model: https://huggingface.co/PathFinderKR/Waktaverse-Llama-3-KO-8B-Instruct/

| Name | Quant method | Size |
| ---- | ---- | ---- |
| [Waktaverse-Llama-3-KO-8B-Instruct.Q2_K.gguf](https://huggingface.co/RichardErkhov/PathFinderKR_-_Waktaverse-Llama-3-KO-8B-Instruct-gguf/blob/main/Waktaverse-Llama-3-KO-8B-Instruct.Q2_K.gguf) | Q2_K | 2.96GB |
| [Waktaverse-Llama-3-KO-8B-Instruct.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/PathFinderKR_-_Waktaverse-Llama-3-KO-8B-Instruct-gguf/blob/main/Waktaverse-Llama-3-KO-8B-Instruct.IQ3_XS.gguf) | IQ3_XS | 3.28GB |
| [Waktaverse-Llama-3-KO-8B-Instruct.IQ3_S.gguf](https://huggingface.co/RichardErkhov/PathFinderKR_-_Waktaverse-Llama-3-KO-8B-Instruct-gguf/blob/main/Waktaverse-Llama-3-KO-8B-Instruct.IQ3_S.gguf) | IQ3_S | 3.43GB |
| [Waktaverse-Llama-3-KO-8B-Instruct.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/PathFinderKR_-_Waktaverse-Llama-3-KO-8B-Instruct-gguf/blob/main/Waktaverse-Llama-3-KO-8B-Instruct.Q3_K_S.gguf) | Q3_K_S | 3.41GB |
| [Waktaverse-Llama-3-KO-8B-Instruct.IQ3_M.gguf](https://huggingface.co/RichardErkhov/PathFinderKR_-_Waktaverse-Llama-3-KO-8B-Instruct-gguf/blob/main/Waktaverse-Llama-3-KO-8B-Instruct.IQ3_M.gguf) | IQ3_M | 3.52GB |
| [Waktaverse-Llama-3-KO-8B-Instruct.Q3_K.gguf](https://huggingface.co/RichardErkhov/PathFinderKR_-_Waktaverse-Llama-3-KO-8B-Instruct-gguf/blob/main/Waktaverse-Llama-3-KO-8B-Instruct.Q3_K.gguf) | Q3_K | 3.74GB |
| [Waktaverse-Llama-3-KO-8B-Instruct.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/PathFinderKR_-_Waktaverse-Llama-3-KO-8B-Instruct-gguf/blob/main/Waktaverse-Llama-3-KO-8B-Instruct.Q3_K_M.gguf) | Q3_K_M | 3.74GB |
| [Waktaverse-Llama-3-KO-8B-Instruct.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/PathFinderKR_-_Waktaverse-Llama-3-KO-8B-Instruct-gguf/blob/main/Waktaverse-Llama-3-KO-8B-Instruct.Q3_K_L.gguf) | Q3_K_L | 4.03GB |
| [Waktaverse-Llama-3-KO-8B-Instruct.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/PathFinderKR_-_Waktaverse-Llama-3-KO-8B-Instruct-gguf/blob/main/Waktaverse-Llama-3-KO-8B-Instruct.IQ4_XS.gguf) | IQ4_XS | 4.18GB |
| [Waktaverse-Llama-3-KO-8B-Instruct.Q4_0.gguf](https://huggingface.co/RichardErkhov/PathFinderKR_-_Waktaverse-Llama-3-KO-8B-Instruct-gguf/blob/main/Waktaverse-Llama-3-KO-8B-Instruct.Q4_0.gguf) | Q4_0 | 4.34GB |
| [Waktaverse-Llama-3-KO-8B-Instruct.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/PathFinderKR_-_Waktaverse-Llama-3-KO-8B-Instruct-gguf/blob/main/Waktaverse-Llama-3-KO-8B-Instruct.IQ4_NL.gguf) | IQ4_NL | 4.38GB |
| [Waktaverse-Llama-3-KO-8B-Instruct.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/PathFinderKR_-_Waktaverse-Llama-3-KO-8B-Instruct-gguf/blob/main/Waktaverse-Llama-3-KO-8B-Instruct.Q4_K_S.gguf) | Q4_K_S | 4.37GB |
| [Waktaverse-Llama-3-KO-8B-Instruct.Q4_K.gguf](https://huggingface.co/RichardErkhov/PathFinderKR_-_Waktaverse-Llama-3-KO-8B-Instruct-gguf/blob/main/Waktaverse-Llama-3-KO-8B-Instruct.Q4_K.gguf) | Q4_K | 4.58GB |
| [Waktaverse-Llama-3-KO-8B-Instruct.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/PathFinderKR_-_Waktaverse-Llama-3-KO-8B-Instruct-gguf/blob/main/Waktaverse-Llama-3-KO-8B-Instruct.Q4_K_M.gguf) | Q4_K_M | 4.58GB |
| [Waktaverse-Llama-3-KO-8B-Instruct.Q4_1.gguf](https://huggingface.co/RichardErkhov/PathFinderKR_-_Waktaverse-Llama-3-KO-8B-Instruct-gguf/blob/main/Waktaverse-Llama-3-KO-8B-Instruct.Q4_1.gguf) | Q4_1 | 4.78GB |
| [Waktaverse-Llama-3-KO-8B-Instruct.Q5_0.gguf](https://huggingface.co/RichardErkhov/PathFinderKR_-_Waktaverse-Llama-3-KO-8B-Instruct-gguf/blob/main/Waktaverse-Llama-3-KO-8B-Instruct.Q5_0.gguf) | Q5_0 | 5.21GB |
| [Waktaverse-Llama-3-KO-8B-Instruct.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/PathFinderKR_-_Waktaverse-Llama-3-KO-8B-Instruct-gguf/blob/main/Waktaverse-Llama-3-KO-8B-Instruct.Q5_K_S.gguf) | Q5_K_S | 5.21GB |
| [Waktaverse-Llama-3-KO-8B-Instruct.Q5_K.gguf](https://huggingface.co/RichardErkhov/PathFinderKR_-_Waktaverse-Llama-3-KO-8B-Instruct-gguf/blob/main/Waktaverse-Llama-3-KO-8B-Instruct.Q5_K.gguf) | Q5_K | 5.34GB |
| [Waktaverse-Llama-3-KO-8B-Instruct.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/PathFinderKR_-_Waktaverse-Llama-3-KO-8B-Instruct-gguf/blob/main/Waktaverse-Llama-3-KO-8B-Instruct.Q5_K_M.gguf) | Q5_K_M | 5.34GB |
| [Waktaverse-Llama-3-KO-8B-Instruct.Q5_1.gguf](https://huggingface.co/RichardErkhov/PathFinderKR_-_Waktaverse-Llama-3-KO-8B-Instruct-gguf/blob/main/Waktaverse-Llama-3-KO-8B-Instruct.Q5_1.gguf) | Q5_1 | 5.65GB |
| [Waktaverse-Llama-3-KO-8B-Instruct.Q6_K.gguf](https://huggingface.co/RichardErkhov/PathFinderKR_-_Waktaverse-Llama-3-KO-8B-Instruct-gguf/blob/main/Waktaverse-Llama-3-KO-8B-Instruct.Q6_K.gguf) | Q6_K | 6.14GB |

Original model description:
---
language:
- ko
- en
license: llama3
library_name: transformers
datasets:
- MarkrAI/KoCommercial-Dataset
---

# Waktaverse-Llama-3-KO-8B-Instruct Model Card

## Model Details

![image/webp](https://cdn-uploads.huggingface.co/production/uploads/65d6e0640ff5bc0c9b69ddab/Va78DaYtPJU6xr4F6Ca4M.webp)

Waktaverse-Llama-3-KO-8B-Instruct is a state-of-the-art Korean language model developed by the Waktaverse AI team. This large language model is a specialized version of Meta-Llama-3-8B-Instruct, tailored for Korean natural language processing tasks. It is designed to handle a variety of complex instructions and generate coherent, contextually appropriate responses.

- **Developed by:** Waktaverse AI
- **Model type:** Large Language Model
- **Language(s) (NLP):** Korean, English
- **License:** [Llama3](https://llama.meta.com/llama3/license)
- **Finetuned from model:** [meta-llama/Meta-Llama-3-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct)

## Model Sources

- **Repository:** [GitHub](https://github.com/PathFinderKR/Waktaverse-LLM/tree/main)
- **Paper:** [More Information Needed]

## Uses

### Direct Use

The model can be utilized directly for tasks such as text completion, summarization, and question answering without any fine-tuning.

### Out-of-Scope Use

This model is not intended for use in scenarios that involve high-stakes decision-making, including medical, legal, or safety-critical areas, due to the potential risks of relying on automated decision-making. Moreover, any attempt to deploy the model in a manner that infringes upon privacy rights or facilitates biased decision-making is strongly discouraged.

## Bias, Risks, and Limitations

While Waktaverse Llama 3 is a robust model, it shares common limitations associated with machine learning models, including potential biases in training data, vulnerability to adversarial attacks, and unpredictable behavior under edge cases.
There is also a risk of cultural and contextual misunderstanding, particularly when the model is applied to languages and contexts it was not specifically trained on.

## How to Get Started with the Model

You can run conversational inference using the Transformers Auto classes. We highly recommend that you add a Korean system prompt for better output. Adjust the hyperparameters as needed.

### Example Usage

```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

device = (
    "cuda:0" if torch.cuda.is_available() else  # Nvidia GPU
    "mps" if torch.backends.mps.is_available() else  # Apple Silicon GPU
    "cpu"
)

model_id = "PathFinderKR/Waktaverse-Llama-3-KO-8B-Instruct"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    device_map=device,  # from_pretrained takes device_map, not device
)

################################################################################
# Generation parameters
################################################################################
num_return_sequences=1
max_new_tokens=1024
temperature=0.9
top_k=40
top_p=0.9
repetition_penalty=1.1

def generate_response(system, user):
    messages = [
        {"role": "system", "content": system},
        {"role": "user", "content": user}
    ]

    prompt = tokenizer.apply_chat_template(
        messages,
        tokenize=False,
        add_generation_prompt=True  # end the prompt with the assistant header so the model answers directly
    )

    input_ids = tokenizer.encode(
        prompt,
        add_special_tokens=False,
        return_tensors="pt"
    ).to(device)

    outputs = model.generate(
        input_ids=input_ids,
        pad_token_id=tokenizer.eos_token_id,
        num_return_sequences=num_return_sequences,
        max_new_tokens=max_new_tokens,
        do_sample=True,
        temperature=temperature,
        top_k=top_k,
        top_p=top_p,
        repetition_penalty=repetition_penalty
    )
    return tokenizer.decode(outputs[0], skip_special_tokens=False)

system_prompt = "다음 지시사항에 대한 응답을 작성해주세요."
user_prompt = "피보나치 수열에 대해 설명해주세요."

response = generate_response(system_prompt, user_prompt)
print(response)
```

### Example Output

```python
<|begin_of_text|><|start_header_id|>system<|end_header_id|>

다음 지시사항에 대한 응답을 작성해주세요.<|eot_id|><|start_header_id|>user<|end_header_id|>

피보나치 수열에 대해 설명해주세요.<|eot_id|><|start_header_id|>assistant<|end_header_id|>

피보나치 수열은 수학에서 가장 유명한 수열 중 하나로, 0과 1로 시작하는 숫자들의 모임입니다. 각 숫자는 이전 두 개의 숫자의 합으로 정의되며, 이렇게 계속 반복됩니다. 피보나치 수열은 무한히 커지는데, 첫 번째와 두 번째 항이 모두 0일 수도 있지만 일반적으로는 첫 번째 항이 1이고 두 번째 항이 1입니다. 예를 들어, 0 + 1 = 1, 1 + 1 = 2, 2 + 1 = 3, 3 + 2 = 5, 5 + 3 = 8, 8 + 5 = 13, 13 + 8 = 21, 21 + 13 = 34 등이 있습니다. 이 숫자들을 피보나치 수열이라고 합니다. 피보나치 수열은 다른 수열들과 함께 사용될 때 도움이 됩니다. 예를 들어, 금융 시장에서는 금리 수익률을 나타내기 위해 이 수열이 사용됩니다. 또한 컴퓨터 과학과 컴퓨터 과학에서도 종종 찾을 수 있습니다. 피보나치 수열은 매우 복잡하며 많은 숫자가 나오므로 일반적인 수열처럼 쉽게 구할 수 없습니다. 이 때문에 피보나치 수열은 대수적 함수와 관련이 있으며 수학자들은 이를 연구하고 계산하기 위해 다양한 알고리즘을 개발했습니다.

참고 자료: https://en.wikipedia.org/wiki/Fibonacci_sequence#Properties.<|eot_id|>
```

## Training Details

### Training Data

The model is trained on the [MarkrAI/KoCommercial-Dataset](https://huggingface.co/datasets/MarkrAI/KoCommercial-Dataset), which consists of various commercial texts in Korean.

### Training Procedure

The model training used LoRA for computational efficiency. 0.02 billion parameters (0.26% of total parameters) were trained.
#### Training Hyperparameters

```python
################################################################################
# bitsandbytes parameters
################################################################################
load_in_4bit=True
bnb_4bit_compute_dtype=torch_dtype
bnb_4bit_quant_type="nf4"
bnb_4bit_use_double_quant=False

################################################################################
# LoRA parameters
################################################################################
task_type="CAUSAL_LM"
target_modules=["q_proj", "k_proj", "v_proj", "o_proj", "gate_proj", "up_proj", "down_proj"]
r=8
lora_alpha=16
lora_dropout=0.05
bias="none"

################################################################################
# TrainingArguments parameters
################################################################################
num_train_epochs=1
per_device_train_batch_size=1
per_device_eval_batch_size=2
gradient_accumulation_steps=4
gradient_checkpointing=True
learning_rate=2e-5
lr_scheduler_type="cosine"
warmup_ratio=0.1
weight_decay=0.1

################################################################################
# SFT parameters
################################################################################
max_seq_length=1024
packing=True
```

## Evaluation

<!-- This section describes the evaluation protocols and provides the results. -->

### Testing Data, Factors & Metrics

#### Testing Data

<!-- This should link to a Dataset Card if possible. -->

[More Information Needed]

#### Factors

<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->

[More Information Needed]

#### Metrics

<!-- These are the evaluation metrics being used, ideally with a description of why. -->

[More Information Needed]

### Results

[More Information Needed]

#### Summary

## Technical Specifications

### Compute Infrastructure

#### Hardware

- **GPU:** NVIDIA GeForce RTX 4080 SUPER

#### Software

- **Operating System:** Linux
- **Deep Learning Framework:** Hugging Face Transformers, PyTorch

### Training Details

- **Training time:** 32 hours
- **VRAM usage:** 12.8 GB
- **GPU power usage:** 300 W

## Citation

**Waktaverse-Llama-3**

```
TBD
```

**Llama-3**

```
@article{llama3modelcard,
  title={Llama 3 Model Card},
  author={AI@Meta},
  year={2024},
  url = {https://github.com/meta-llama/llama3/blob/main/MODEL_CARD.md}
}
```

## Model Card Authors

[More Information Needed]

## Model Card Contact

[More Information Needed]
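Taking the bitsandbytes and LoRA values from the Training Hyperparameters listing above, the corresponding config objects would look roughly like this — an assumed wiring, not the authors' actual training script:

```python
import torch
from peft import LoraConfig
from transformers import BitsAndBytesConfig

# 4-bit NF4 quantization, mirroring the bitsandbytes parameters listed in the card
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,  # the card's torch_dtype; bfloat16 is an assumption
    bnb_4bit_use_double_quant=False,
)

# LoRA adapter over all attention and MLP projections, mirroring the LoRA parameters
lora_config = LoraConfig(
    task_type="CAUSAL_LM",
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj", "gate_proj", "up_proj", "down_proj"],
    r=8,
    lora_alpha=16,
    lora_dropout=0.05,
    bias="none",
)
```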
arhamm40182/LLMPrompter
arhamm40182
2024-05-01T16:53:19Z
1
0
peft
[ "peft", "safetensors", "arxiv:1910.09700", "base_model:microsoft/phi-2", "base_model:adapter:microsoft/phi-2", "region:us" ]
null
2024-05-01T16:47:35Z
--- library_name: peft base_model: microsoft/phi-2 --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ### Framework versions - PEFT 0.10.0
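The card itself is the blank template, but the metadata pins down the essentials: a PEFT adapter on top of `microsoft/phi-2`. A minimal loading sketch under those assumptions (the prompt and dtype are illustrative):

```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

# Base model and adapter repo taken from the card metadata; the rest is assumed.
base = AutoModelForCausalLM.from_pretrained(
    "microsoft/phi-2", torch_dtype=torch.float16, device_map="auto"
)
model = PeftModel.from_pretrained(base, "arhamm40182/LLMPrompter")
tokenizer = AutoTokenizer.from_pretrained("microsoft/phi-2")

inputs = tokenizer("Write a short prompt describing a sunset.", return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```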
anudaw/temp-1-distilled-code-llama
anudaw
2024-05-01T16:51:57Z
8
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "trl", "sft", "generated_from_trainer", "base_model:anudaw/temp-1-distilled-code-llama", "base_model:finetune:anudaw/temp-1-distilled-code-llama", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-05-01T04:25:24Z
--- license: apache-2.0 base_model: anudaw/temp-1-distilled-code-llama tags: - trl - sft - generated_from_trainer model-index: - name: temp-1-distilled-code-llama results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # temp-1-distilled-code-llama This model is a fine-tuned version of [anudaw/temp-1-distilled-code-llama](https://huggingface.co/anudaw/temp-1-distilled-code-llama) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 1 - eval_batch_size: 1 - seed: 42 - gradient_accumulation_steps: 32 - total_train_batch_size: 32 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: constant - lr_scheduler_warmup_ratio: 0.03 - num_epochs: 5 ### Framework versions - Transformers 4.40.1 - Pytorch 2.3.0+cu121 - Datasets 2.19.0 - Tokenizers 0.19.1
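For reference, the hyperparameters listed above map onto `transformers.TrainingArguments` roughly as follows — a sketch, not the actual training script; `output_dir` is a placeholder, and the Adam betas/epsilon shown in the card are the optimizer defaults:

```python
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="temp-1-distilled-code-llama",  # placeholder
    learning_rate=2e-5,
    per_device_train_batch_size=1,
    per_device_eval_batch_size=1,
    seed=42,
    gradient_accumulation_steps=32,  # yields the listed total train batch size of 32
    lr_scheduler_type="constant",
    warmup_ratio=0.03,
    num_train_epochs=5,
)
```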
tingting/llama3_lora_model_balanced_Data_896
tingting
2024-05-01T16:46:46Z
0
0
transformers
[ "transformers", "safetensors", "text-generation-inference", "unsloth", "llama", "trl", "en", "base_model:unsloth/llama-3-8b-bnb-4bit", "base_model:finetune:unsloth/llama-3-8b-bnb-4bit", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2024-05-01T16:46:36Z
--- language: - en license: apache-2.0 tags: - text-generation-inference - transformers - unsloth - llama - trl base_model: unsloth/llama-3-8b-bnb-4bit --- # Uploaded model - **Developed by:** tingting - **License:** apache-2.0 - **Finetuned from model :** unsloth/llama-3-8b-bnb-4bit This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
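Since the card only says the model was trained with Unsloth on top of `unsloth/llama-3-8b-bnb-4bit`, here is a hedged inference sketch with Unsloth's `FastLanguageModel`; the load parameters (sequence length, 4-bit loading) are assumptions:

```python
from unsloth import FastLanguageModel

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="tingting/llama3_lora_model_balanced_Data_896",  # repo id from the record header
    max_seq_length=2048,  # assumption
    load_in_4bit=True,    # matches the 4-bit base model
)
FastLanguageModel.for_inference(model)  # switch on Unsloth's faster inference path

inputs = tokenizer("Hello,", return_tensors="pt").to("cuda")  # Unsloth requires a CUDA GPU
output = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```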
hrangel/Mistral_7B_qlora_CoT_FT
hrangel
2024-05-01T16:46:28Z
0
0
transformers
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2024-04-27T16:04:51Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
PQlet/T5base-lora-sumarizationTables-v2-aug1-RandomDelete
PQlet
2024-05-01T16:39:33Z
1
0
peft
[ "peft", "safetensors", "arxiv:1910.09700", "base_model:google-t5/t5-base", "base_model:adapter:google-t5/t5-base", "region:us" ]
null
2024-04-27T19:58:11Z
--- library_name: peft base_model: t5-base --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ### Framework versions - PEFT 0.10.0
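Here the metadata identifies a PEFT adapter on `t5-base`, and the repo name suggests table summarization. A hedged usage sketch (the `summarize:` task prefix and the generation settings are assumptions):

```python
from peft import PeftModel
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

base = AutoModelForSeq2SeqLM.from_pretrained("t5-base")
model = PeftModel.from_pretrained(base, "PQlet/T5base-lora-sumarizationTables-v2-aug1-RandomDelete")
tokenizer = AutoTokenizer.from_pretrained("t5-base")

# T5 checkpoints are usually driven with a task prefix; "summarize:" is an assumption here.
inputs = tokenizer("summarize: <linearized table text>", return_tensors="pt")
output = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```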
abhishek/autotrain-mixtral-8x7b-orpo-v1
abhishek
2024-05-01T16:35:45Z
18
0
transformers
[ "transformers", "tensorboard", "safetensors", "mixtral", "text-generation", "autotrain", "text-generation-inference", "conversational", "license:other", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2024-05-01T13:55:19Z
--- tags: - autotrain - text-generation-inference - text-generation library_name: transformers widget: - messages: - role: user content: What is your favorite condiment? license: other --- # Model Trained Using AutoTrain This model was trained using AutoTrain. For more information, please visit [AutoTrain](https://hf.co/docs/autotrain). # Usage ```python from transformers import AutoModelForCausalLM, AutoTokenizer model_path = "PATH_TO_THIS_REPO" tokenizer = AutoTokenizer.from_pretrained(model_path) model = AutoModelForCausalLM.from_pretrained( model_path, device_map="auto", torch_dtype='auto' ).eval() # Prompt content: "hi" messages = [ {"role": "user", "content": "hi"} ] input_ids = tokenizer.apply_chat_template(conversation=messages, tokenize=True, add_generation_prompt=True, return_tensors='pt') output_ids = model.generate(input_ids.to('cuda')) response = tokenizer.decode(output_ids[0][input_ids.shape[1]:], skip_special_tokens=True) # Model response: "Hello! How can I assist you today?" print(response) ```
RichardErkhov/yhavinga_-_Boreas-7B-chat-8bits
RichardErkhov
2024-05-01T16:34:15Z
4
0
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "conversational", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "8-bit", "bitsandbytes", "region:us" ]
text-generation
2024-05-01T16:26:22Z
Quantization made by Richard Erkhov. [Github](https://github.com/RichardErkhov) [Discord](https://discord.gg/pvy7H8DZMG) [Request more models](https://github.com/RichardErkhov/quant_request) Boreas-7B-chat - bnb 8bits - Model creator: https://huggingface.co/yhavinga/ - Original model: https://huggingface.co/yhavinga/Boreas-7B-chat/ Original model description: --- library_name: transformers license: apache-2.0 pipeline_tag: text-generation tags: - finetuned inference: true widget: - messages: - role: system content: Je bent een behulpzame Nederlandse AI-assistent. - role: user content: Is Nederlandse wijn lekker? datasets: - yhavinga/mc4_nl_cleaned - yhavinga/nedd_wiki_news - teknium/OpenHermes-2.5 - euirim/goodwiki - philschmid/flanv2 --- # Boreas **NB: 20240430 model card is WIP - evaluations / example generations to be added** ![BoreasThe Mighty God of the North Wind](https://oldworldgods.com/wp-content/uploads/2023/10/boreas1.jpg) Boreas-7B is een Nederlands/Engels taalmodel gebaseerd op Mistral-7B. Het is getraind op 10 miljard tokens aan Nederlandse en Engelse tekst. Boreas-7B-chat is verder getraind op instructie- en chat data. * Boreas-7B is vergelijkbaar met [GEITje-7B](https://huggingface.co/Rijgersberg/GEITje-7B) in die zin dat het ook een model is dat verder getraind is op Mistral-7B, met een evenzogrote hoeveelheid tokens (10B). * Boreas-7B-chat is vergelijkbaar met [GEITje-7B-chat](https://huggingface.co/Rijgersberg/GEITje-7B-chat) en [GEITJE-7B-ultra-sft](https://huggingface.co/BramVanroy/GEITje-7B-ultra-sft), in die zin dat het ook een fine-tune is op een chat dataset. Edwin Rijgersberg heeft [uitgebreide documentatie](https://github.com/Rijgersberg/GEITje/blob/main/README.md) geschreven voor het gebruik van GEITje, en deze is ook van toepassing op Boreas. De voornaamste verschillen tussen Boreas en GEITje zijn: * Boreas is getraind met een context lengte van 2048 tokens, GEITje met 8192 tokens. * Boreas is getraind op een mix van Engels en Nederlands, waar GEITje alleen op voornamelijk Nederlands getraind is. ## Gebruik met ollama Kies een GGUF quant van [Boreas-7B-chat-v1-GGUF](https://huggingface.co/yhavinga/Boreas-7B-chat-v1-GGUF) en volg de instructies daar. Belangrijk: gebruik een system prompt, anders zijn de resultaten matig. ## Doel Boreas Creatie van een taalmodel dat wat betreft het Nederlandse gedeelte, niet getraind is op teksten gegeneerd door een LLM. Dus geen Nederlandse chats gegeneerd door een LLM, en ook geen datasets die vertaald zijn uit het Engels door een LLM. Er is niet streng gefilterd op LLM-gegenereerde tekst, maar dit is wel een aandachtspunt geweest bij het samenstellen van de datasets. Bij het finetunen van Boreas-chat zijn toch 3% van de tokens Nederlandse chats gegeneerd door een LLM. Dit is een kleine dataset die gemaakt is op basis van Nederlandse bronteksten, en een steekproef heeft uitgewezen dat deze data van een goede kwaliteit is. Het uiteindelijke chat model is getraind op een mix van voornamelijk: 1. Openhermes-2.5: grote, diverse Engelse chat dataset (~45%) 2. Een engels naar nederlands vertaal dataset (~34%) 3. Pre-train data: verder met Wiki en boeken (~12%) 4. Nederlandse wiki en boeken q en a (~3%) Het Boreas model kan dus beschouwd worden als een test voor knowledge transfer van Engels naar Nederlands. ## Boreas-7B basismodel Het basismodel is op Mistral-7B doorgetraind op 10 miljard tokens. 
The dataset was composed from various sources in both Dutch and English:

| Dataset name                                    | Number of tokens | Percentage of tokens (%) |
|-------------------------------------------------|------------------|--------------------------|
| Dutch novels                                    | 3401M            | 34.01                    |
| Dutch Wikipedia                                 | 2381M            | 23.81                    |
| mc4_nl_cleaned (Dutch)                          | 1361M            | 13.61                    |
| Dutch news                                      | 1361M            | 13.61                    |
| Dutch schoolbooks                               | 136M             | 1.36                     |
| English novels                                  | 340M             | 3.40                     |
| English Wikipedia (euirim/goodwiki)             | 340M             | 3.40                     |
| English math and physics books                  | 340M             | 3.40                     |
| English instruction dataset (philschmid/flanv2) | 340M             | 3.40                     |

The choice of this mix is based on data availability as well as the following considerations:

* Lots of high-quality Dutch: texts primarily written in Dutch by humans. Hence the choice of novels, Wikipedia and news articles, and the exclusion of e.g. forum/Twitter posts and legal texts.
* Mixing in the original dataset for ~5%, to make sure the model does not lose its original knowledge. It is not known which data Mistral was trained on, but it is plausible that it included high-quality English as well as instruction data. That is why English books, Wikipedia and instruction data were chosen for the ~3%.
* Excluding LLM-generated texts from the pre-train phase as much as possible. In many datasets, especially Dutch ones, I notice that the translations or generations are of poor quality. That is why datasets were chosen whose source data predates the ChatGPT era (i.e. before November 2022).
* mc4_nl_cleaned - the source of this dataset is mC4 - deduplicated Common Crawl data, filtered on bad words and processed further following the recipe of the T5 authors for the English C4 dataset. In various ablations C4 turns out to be a good pre-train dataset, which is why mc4_nl_cleaned was also used for this model.
* No source code was mixed in - I do not expect a 7B model to ever generate usable code. It might help with logical reasoning puzzles, but even there I expect that a 7B model will never do this as well, or generalize as well, as larger models.

During pre-training the source texts were packed into blocks of 2048 tokens. Where possible, only texts that belong together were packed together. Small fragments were also discarded, so that we never get, for example, a fragment that starts with a few tokens from the end of one Wikipedia article and then continues with a different Wikipedia article. This was done to prevent 'cross sequence' noise within a single example as much as possible. Only after packing were the examples shuffled.

## Pre-training

* Boreas was pre-trained with the [EasyDeL JAX framework](https://github.com/erfanzar/EasyDel) on a tpu-v4-32 kindly supplied by the Google [TPU Research Cloud](https://sites.research.google/trc/about/).
* Batch size 96, gradient accumulation steps 2
* Using flash attention, block size of 512
* Max sequence length of 2048
* LION optimizer, triangle learning rate schedule with max lr 3e-6, gradient clipping to 1.0

![img_3.png](images/img_3.png)
![img_4.png](images/img_4.png)
![img_5.png](images/img_5.png)

<!-- [https://wandb.ai/yepster/EasyDeL-MistralBoreas/runs/ozw55qaq/workspace?nw=nwuseryepster](WandB Boreas 7B pre-train) -->

## Boreas-7B-chat

The chat LLM model was, like the base model, trained on a mix of datasets, with a size of 4.7B tokens.
It is a full finetune, not a LoRA finetune. The following datasets were mixed:

| Dataset name                                               | Weight  | Percentage of tokens (%) |
|------------------------------------------------------------|---------|--------------------------|
| (C) Diverse English chat dataset (teknium/OpenHermes-2.5)  | 200     | 45.15                    |
| (C) Translate en->nl paragraphs (novels)                   | 100     | 22.57                    |
| (C) Translate en->nl sentences (novels)                    | 50      | 11.29                    |
| (P) Dutch Wikipedia                                        | 30      | 6.77                     |
| (P) English math and physics books                         | 25      | 5.64                     |
| (C) English instruct dataset (philschmid/flanv2)           | 20      | 4.51                     |
| (C) Dutch wiki Q&A                                         | 12      | 2.71                     |
| (C) Dutch schoolbooks Q&A                                  | 3       | 0.68                     |
| (P) Dutch schoolbooks                                      | 2       | 0.45                     |
| (C) Translate en->nl expressions (dictionary)              | 1       | 0.23                     |

(C) indicates that the text is formatted for chat, (P) is unformatted text (the same as in the pre-train phase).

The largest part consists of `teknium/OpenHermes-2.5` - which is itself an amalgam of various filtered chat/instruct datasets. This dataset does contain program code data, with the result that Boreas-7B-chat is able to answer simple programming questions.

The reason for mixing so much English into the dataset is mainly to get the diversity in the dataset as high as possible, and because I expect that a considerable degree of cross-language, English-to-Dutch knowledge transfer is possible. The reverse is certainly true: if a fine-tune dataset is not diverse, the model will, as a result of its fine-tuning, no longer be able to exercise its original abilities. One of the first Mistral finetunes I made was fine-tuned only on en->nl translation. In the end, that model could do nothing but translate into Dutch.

Unlike the base model, the chat model _was_ trained on LLM-generated texts - with the following considerations: for the Dutch generated chats I again tried to 'guide' as much original Dutch language use as possible, by generating questions and answers only from texts originally written in Dutch by a person. These are the Dutch wiki Q&A and Dutch schoolbooks Q&A chat datasets. This ensures as much as possible that, for example in education-style Q&A, the terms and units customary in our region occur in the chat database, at least for the Dutch-language chats.

For all chat datasets, training was done only on the assistant-completion tokens.

## Fine-tuning

* Boreas was fine-tuned with the [EasyDeL JAX framework](https://github.com/erfanzar/EasyDel) on a tpu-v4-32 kindly supplied by the Google [TPU Research Cloud](https://sites.research.google/trc/about/).
* Batch size 96, gradient accumulation 2
* Using flash attention, block size of 512
* Max sequence length of 2048
* LION optimizer, triangle learning rate schedule with max lr 2e-6, gradient clipping to 1.0 (NB: the schedule was not finished due to an error at the end of the dataset epoch. Since the loss had plateaued, I decided not to resume for another epoch)

![loss finetune](images/loss_finetune.png)
![accuracy finetune](images/accuracy_finetune.png)
![learning rate finetune](images/lr_finetune.png)

<!-- [https://wandb.ai/yepster/EasyDeL-MistralBoreas/runs/ynkl2jtx?nw=nwuseryepster](WandB Boreas 7B chat finetune) -->
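For a quick test with plain `transformers`, a minimal sketch (not from the original card; it assumes the original repo's tokenizer ships a chat template that accepts a system role, which matters here because the card stresses using a system prompt):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "yhavinga/Boreas-7B-chat"  # original (unquantized) repo
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

# The card stresses that a system prompt is important for good results;
# the messages below are the widget example from the card.
messages = [
    {"role": "system", "content": "Je bent een behulpzame Nederlandse AI-assistent."},
    {"role": "user", "content": "Is Nederlandse wijn lekker?"},
]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
output = model.generate(input_ids, max_new_tokens=256)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```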
team-sanai/llama2_7B_pretrain
team-sanai
2024-05-01T16:31:52Z
7
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-05-01T15:51:24Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
bartowski/Llama-3-8B-LexiFun-Uncensored-V1-GGUF
bartowski
2024-05-01T16:31:01Z
312
4
null
[ "gguf", "llama3", "comedy", "comedian", "fun", "funny", "llama38b", "laugh", "sarcasm", "roleplay", "text-generation", "en", "license:other", "endpoints_compatible", "region:us", "imatrix", "conversational" ]
text-generation
2024-05-01T16:13:51Z
--- license: other license_name: llama3 license_link: https://llama.meta.com/llama3/license/ language: - en tags: - llama3 - comedy - comedian - fun - funny - llama38b - laugh - sarcasm - roleplay quantized_by: bartowski pipeline_tag: text-generation --- ## Llamacpp imatrix Quantizations of Llama-3-8B-LexiFun-Uncensored-V1 Using <a href="https://github.com/ggerganov/llama.cpp/">llama.cpp</a> release <a href="https://github.com/ggerganov/llama.cpp/releases/tag/b2777">b2777</a> for quantization. Original model: https://huggingface.co/Orenguteng/Llama-3-8B-LexiFun-Uncensored-V1 All quants made using imatrix option with dataset provided by Kalomaze [here](https://github.com/ggerganov/llama.cpp/discussions/5263#discussioncomment-8395384) ## Prompt format ``` <|begin_of_text|><|start_header_id|>system<|end_header_id|> {system_prompt}<|end_of_text|><|start_header_id|>user<|end_header_id|> {prompt}<|end_of_text|><|start_header_id|>assistant<|end_header_id|> ``` ## Download a file (not the whole branch) from below: | Filename | Quant type | File Size | Description | | -------- | ---------- | --------- | ----------- | | [Llama-3-8B-LexiFun-Uncensored-V1-Q8_0.gguf](https://huggingface.co/bartowski/Llama-3-8B-LexiFun-Uncensored-V1-GGUF/blob/main/Llama-3-8B-LexiFun-Uncensored-V1-Q8_0.gguf) | Q8_0 | 8.54GB | Extremely high quality, generally unneeded but max available quant. | | [Llama-3-8B-LexiFun-Uncensored-V1-Q6_K.gguf](https://huggingface.co/bartowski/Llama-3-8B-LexiFun-Uncensored-V1-GGUF/blob/main/Llama-3-8B-LexiFun-Uncensored-V1-Q6_K.gguf) | Q6_K | 6.59GB | Very high quality, near perfect, *recommended*. | | [Llama-3-8B-LexiFun-Uncensored-V1-Q5_K_M.gguf](https://huggingface.co/bartowski/Llama-3-8B-LexiFun-Uncensored-V1-GGUF/blob/main/Llama-3-8B-LexiFun-Uncensored-V1-Q5_K_M.gguf) | Q5_K_M | 5.73GB | High quality, *recommended*. | | [Llama-3-8B-LexiFun-Uncensored-V1-Q5_K_S.gguf](https://huggingface.co/bartowski/Llama-3-8B-LexiFun-Uncensored-V1-GGUF/blob/main/Llama-3-8B-LexiFun-Uncensored-V1-Q5_K_S.gguf) | Q5_K_S | 5.59GB | High quality, *recommended*. | | [Llama-3-8B-LexiFun-Uncensored-V1-Q4_K_M.gguf](https://huggingface.co/bartowski/Llama-3-8B-LexiFun-Uncensored-V1-GGUF/blob/main/Llama-3-8B-LexiFun-Uncensored-V1-Q4_K_M.gguf) | Q4_K_M | 4.92GB | Good quality, uses about 4.83 bits per weight, *recommended*. | | [Llama-3-8B-LexiFun-Uncensored-V1-Q4_K_S.gguf](https://huggingface.co/bartowski/Llama-3-8B-LexiFun-Uncensored-V1-GGUF/blob/main/Llama-3-8B-LexiFun-Uncensored-V1-Q4_K_S.gguf) | Q4_K_S | 4.69GB | Slightly lower quality with more space savings, *recommended*. | | [Llama-3-8B-LexiFun-Uncensored-V1-IQ4_NL.gguf](https://huggingface.co/bartowski/Llama-3-8B-LexiFun-Uncensored-V1-GGUF/blob/main/Llama-3-8B-LexiFun-Uncensored-V1-IQ4_NL.gguf) | IQ4_NL | 4.67GB | Decent quality, slightly smaller than Q4_K_S with similar performance *recommended*. | | [Llama-3-8B-LexiFun-Uncensored-V1-IQ4_XS.gguf](https://huggingface.co/bartowski/Llama-3-8B-LexiFun-Uncensored-V1-GGUF/blob/main/Llama-3-8B-LexiFun-Uncensored-V1-IQ4_XS.gguf) | IQ4_XS | 4.44GB | Decent quality, smaller than Q4_K_S with similar performance, *recommended*. | | [Llama-3-8B-LexiFun-Uncensored-V1-Q3_K_L.gguf](https://huggingface.co/bartowski/Llama-3-8B-LexiFun-Uncensored-V1-GGUF/blob/main/Llama-3-8B-LexiFun-Uncensored-V1-Q3_K_L.gguf) | Q3_K_L | 4.32GB | Lower quality but usable, good for low RAM availability. 
| | [Llama-3-8B-LexiFun-Uncensored-V1-Q3_K_M.gguf](https://huggingface.co/bartowski/Llama-3-8B-LexiFun-Uncensored-V1-GGUF/blob/main/Llama-3-8B-LexiFun-Uncensored-V1-Q3_K_M.gguf) | Q3_K_M | 4.01GB | Even lower quality. | | [Llama-3-8B-LexiFun-Uncensored-V1-IQ3_M.gguf](https://huggingface.co/bartowski/Llama-3-8B-LexiFun-Uncensored-V1-GGUF/blob/main/Llama-3-8B-LexiFun-Uncensored-V1-IQ3_M.gguf) | IQ3_M | 3.78GB | Medium-low quality, new method with decent performance comparable to Q3_K_M. | | [Llama-3-8B-LexiFun-Uncensored-V1-IQ3_S.gguf](https://huggingface.co/bartowski/Llama-3-8B-LexiFun-Uncensored-V1-GGUF/blob/main/Llama-3-8B-LexiFun-Uncensored-V1-IQ3_S.gguf) | IQ3_S | 3.68GB | Lower quality, new method with decent performance, recommended over Q3_K_S quant, same size with better performance. | | [Llama-3-8B-LexiFun-Uncensored-V1-Q3_K_S.gguf](https://huggingface.co/bartowski/Llama-3-8B-LexiFun-Uncensored-V1-GGUF/blob/main/Llama-3-8B-LexiFun-Uncensored-V1-Q3_K_S.gguf) | Q3_K_S | 3.66GB | Low quality, not recommended. | | [Llama-3-8B-LexiFun-Uncensored-V1-IQ3_XS.gguf](https://huggingface.co/bartowski/Llama-3-8B-LexiFun-Uncensored-V1-GGUF/blob/main/Llama-3-8B-LexiFun-Uncensored-V1-IQ3_XS.gguf) | IQ3_XS | 3.51GB | Lower quality, new method with decent performance, slightly better than Q3_K_S. | | [Llama-3-8B-LexiFun-Uncensored-V1-IQ3_XXS.gguf](https://huggingface.co/bartowski/Llama-3-8B-LexiFun-Uncensored-V1-GGUF/blob/main/Llama-3-8B-LexiFun-Uncensored-V1-IQ3_XXS.gguf) | IQ3_XXS | 3.27GB | Lower quality, new method with decent performance, comparable to Q3 quants. | | [Llama-3-8B-LexiFun-Uncensored-V1-Q2_K.gguf](https://huggingface.co/bartowski/Llama-3-8B-LexiFun-Uncensored-V1-GGUF/blob/main/Llama-3-8B-LexiFun-Uncensored-V1-Q2_K.gguf) | Q2_K | 3.17GB | Very low quality but surprisingly usable. | | [Llama-3-8B-LexiFun-Uncensored-V1-IQ2_M.gguf](https://huggingface.co/bartowski/Llama-3-8B-LexiFun-Uncensored-V1-GGUF/blob/main/Llama-3-8B-LexiFun-Uncensored-V1-IQ2_M.gguf) | IQ2_M | 2.94GB | Very low quality, uses SOTA techniques to also be surprisingly usable. | | [Llama-3-8B-LexiFun-Uncensored-V1-IQ2_S.gguf](https://huggingface.co/bartowski/Llama-3-8B-LexiFun-Uncensored-V1-GGUF/blob/main/Llama-3-8B-LexiFun-Uncensored-V1-IQ2_S.gguf) | IQ2_S | 2.75GB | Very low quality, uses SOTA techniques to be usable. | | [Llama-3-8B-LexiFun-Uncensored-V1-IQ2_XS.gguf](https://huggingface.co/bartowski/Llama-3-8B-LexiFun-Uncensored-V1-GGUF/blob/main/Llama-3-8B-LexiFun-Uncensored-V1-IQ2_XS.gguf) | IQ2_XS | 2.60GB | Very low quality, uses SOTA techniques to be usable. | | [Llama-3-8B-LexiFun-Uncensored-V1-IQ2_XXS.gguf](https://huggingface.co/bartowski/Llama-3-8B-LexiFun-Uncensored-V1-GGUF/blob/main/Llama-3-8B-LexiFun-Uncensored-V1-IQ2_XXS.gguf) | IQ2_XXS | 2.39GB | Lower quality, uses SOTA techniques to be usable. | | [Llama-3-8B-LexiFun-Uncensored-V1-IQ1_M.gguf](https://huggingface.co/bartowski/Llama-3-8B-LexiFun-Uncensored-V1-GGUF/blob/main/Llama-3-8B-LexiFun-Uncensored-V1-IQ1_M.gguf) | IQ1_M | 2.16GB | Extremely low quality, *not* recommended. | | [Llama-3-8B-LexiFun-Uncensored-V1-IQ1_S.gguf](https://huggingface.co/bartowski/Llama-3-8B-LexiFun-Uncensored-V1-GGUF/blob/main/Llama-3-8B-LexiFun-Uncensored-V1-IQ1_S.gguf) | IQ1_S | 2.01GB | Extremely low quality, *not* recommended. | ## Which file should I choose? 
A great write-up with charts comparing the performance of the various quants is provided by Artefact2 [here](https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9)

The first thing to figure out is how big a model you can run. To do this, you'll need to figure out how much RAM and/or VRAM you have.

If you want your model running as FAST as possible, you'll want to fit the whole thing on your GPU's VRAM. Aim for a quant with a file size 1-2GB smaller than your GPU's total VRAM.

If you want the absolute maximum quality, add both your system RAM and your GPU's VRAM together, then similarly grab a quant with a file size 1-2GB smaller than that total.

Next, you'll need to decide if you want to use an 'I-quant' or a 'K-quant'.

If you don't want to think too much, grab one of the K-quants. These are in format 'QX_K_X', like Q5_K_M.

If you want to get more into the weeds, you can check out this extremely useful feature chart: [llama.cpp feature matrix](https://github.com/ggerganov/llama.cpp/wiki/Feature-matrix)

But basically, if you're aiming for below Q4, and you're running cuBLAS (Nvidia) or rocBLAS (AMD), you should look towards the I-quants. These are in format IQX_X, like IQ3_M. These are newer and offer better performance for their size.

These I-quants can also be used on CPU and Apple Metal, but will be slower than their K-quant equivalents, so the speed vs. performance tradeoff is one you'll have to weigh.

The I-quants are *not* compatible with Vulkan (which also targets AMD cards), so if you have an AMD card, double check whether you're using the rocBLAS build or the Vulkan build. At the time of writing this, LM Studio has a preview with ROCm support, and other inference engines have specific builds for ROCm.

Want to support my work? Visit my ko-fi page here: https://ko-fi.com/bartowski
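To fetch exactly one quant rather than cloning the whole repo, a minimal sketch with `huggingface_hub` (the filename is just one pick from the table above):

```python
from huggingface_hub import hf_hub_download

# Download a single GGUF file from this repo; swap the filename for any
# quant listed in the table above.
gguf_path = hf_hub_download(
    repo_id="bartowski/Llama-3-8B-LexiFun-Uncensored-V1-GGUF",
    filename="Llama-3-8B-LexiFun-Uncensored-V1-Q4_K_M.gguf",
)
print(gguf_path)  # local path you can point llama.cpp or LM Studio at
```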
ariel-ml/SambaLingo-Hungarian-Chat-GGUF
ariel-ml
2024-05-01T16:25:06Z
22
2
null
[ "gguf", "text-generation", "hu", "en", "dataset:HuggingFaceH4/ultrachat_200k", "dataset:HuggingFaceH4/ultrafeedback_binarized", "dataset:HuggingFaceH4/cai-conversation-harmless", "arxiv:2404.05829", "license:llama2", "endpoints_compatible", "region:us", "conversational" ]
text-generation
2024-04-25T08:45:55Z
---
license: llama2
datasets:
- HuggingFaceH4/ultrachat_200k
- HuggingFaceH4/ultrafeedback_binarized
- HuggingFaceH4/cai-conversation-harmless
language:
- hu
- en
pipeline_tag: text-generation
---

# SambaLingo-Hungarian-Chat

<img src="SambaLingo_Logo.png" width="340" style="margin-left:'auto' margin-right:'auto' display:'block'"/>

<!-- Provide a quick summary of what the model is/does. -->

SambaLingo-Hungarian-Chat is a human-aligned chat model trained in Hungarian and English. It is trained using direct preference optimization on top of the base model [SambaLingo-Hungarian-Base](https://huggingface.co/sambanovasystems/SambaLingo-Hungarian-Base). The base model adapts [Llama-2-7b](https://huggingface.co/meta-llama/Llama-2-7b-hf) to Hungarian by training on 59 billion tokens from the Hungarian split of the [Cultura-X](https://huggingface.co/datasets/uonlp/CulturaX) dataset. Try this model at [SambaLingo-chat-space](https://huggingface.co/spaces/sambanovasystems/SambaLingo-chat-space).

## Model Description
<!-- Provide a longer summary of what this model is. -->

- **Developed by:** [SambaNova Systems](https://sambanova.ai/)
- **Model type:** Language Model
- **Language(s):** Hungarian, English
- **Finetuned from model:** [Llama-2-7b](https://huggingface.co/meta-llama/Llama-2-7b-hf)
- **Try This Model:** [SambaLingo-chat-space](https://huggingface.co/spaces/sambanovasystems/SambaLingo-chat-space)
- **Paper:** [SambaLingo: Teaching Large Language Models New Languages](https://arxiv.org/abs/2404.05829)
- **Blog Post**: [sambalingo-open-source-language-experts](https://sambanova.ai/blog/sambalingo-open-source-language-experts)

## Getting Started

### Loading Model With Hugging Face
Please make sure to set use_fast=False when loading the tokenizer.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("sambanovasystems/SambaLingo-Hungarian-Chat", use_fast=False)
model = AutoModelForCausalLM.from_pretrained("sambanovasystems/SambaLingo-Hungarian-Chat", device_map="auto", torch_dtype="auto")
```

### Interacting With Model Pipeline
Please make sure to set use_fast=False when loading the tokenizer.
```python
from transformers import pipeline

pipe = pipeline("text-generation", model="sambanovasystems/SambaLingo-Hungarian-Chat", device_map="auto", use_fast=False)
messages = [
    {"role": "user", "content": "YOUR_QUESTION"},  # replace with your question
]
prompt = pipe.tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
outputs = pipe(prompt)[0]
outputs = outputs["generated_text"]
```

### Suggested Inference Parameters
- Temperature: 0.8
- Repetition penalty: 1.0
- Top-p: 0.9

### Prompting Guidelines
To prompt this model, please use the following chat template:
```
<|user|>\n{question}</s>\n<|assistant|>\n
```

### Example Prompts and Generations

```
<|user|>
Mi a jelentőssége a magyar szürkemarhának?</s>
<|assistant|>
A magyar szürkemarha jelentős kulturális és gazdasági jelentőséggel bír Magyarország számára. Íme néhány ok, amiért:
1. Kulturális örökség: A magyar szürkemarha a magyar kultúra és örökség szimbóluma. A fajta évszázadok óta létezik, és a magyar puszta ikonikus szimbólumává vált.
2. Élelmiszer: A magyar szürkemarha húsa és teje kiváló minőségű, és keresett a magyar és a nemzetközi piacokon. A hús sovány, zsírszegény, és magas a fehérje- és vastartalma.
3. Turizmus: A magyar szürkemarha fontos turisztikai attrakció Magyarországon.
A látogatók szívesen látogatják a szürkemarha farmokat, és megismerkednek a fajta történetével és tenyésztésével.
4. Génmegőrzés: A magyar szürkemarha fontos génmegőrző faj, amely segít megőrizni a hagyományos gazdálkodási gyakorlatokat és a helyi ökoszisztémákat.
5. Gazdasági hatás: A magyar szürkemarha fontos gazdasági tevékenység Magyarország számára. A fajta tenyésztése és feldolgozása munkahelyeket teremt, és bevételt generál a gazdálkodók és a feldolgozók számára.
Összességében a magyar szürkemarha jelentős kulturális, gazdasági és ökológiai jelentőséggel bír Magyarország számára. A fajta megőrzése és népszerűsítése fontos a magyar kultúra és örökség, valamint a helyi gazdaság és ökoszisztémák szempontjából.
```

## Training Details
The alignment phase follows the recipe for [Zephyr-7B](https://huggingface.co/HuggingFaceH4/zephyr-7b-beta), and comprises two stages: supervised fine-tuning (SFT) and Direct Preference Optimization (DPO).

The SFT phase was done on the [ultrachat_200k](https://huggingface.co/datasets/HuggingFaceH4/ultrachat_200k) dataset mixed with the Google translated version of the ultrachat_200k dataset. It was trained for one epoch with global batch size 512 and max sequence length 2048 tokens. We used a linear decay learning rate of 2e-5 and 10% warmup.

The DPO phase was done on the [ultrafeedback](https://huggingface.co/datasets/HuggingFaceH4/ultrafeedback_binarized) dataset and the [cai-conversation-harmless](https://huggingface.co/datasets/HuggingFaceH4/cai-conversation-harmless) dataset, mixed with 10% of the data Google translated. It was trained with global batch size 32 and for three epochs. We used a linear decay learning rate of 5e-7, 10% warmup and β=0.1 as the regularization factor for DPO.

## Tokenizer Details
We extended the vocabulary of the base llama model from 32,000 tokens to 57,000 tokens by adding up to 25,000 non-overlapping tokens from the new language.

## Evaluation
For evaluation results see our paper: [SambaLingo: Teaching Large Language Models New Languages](https://arxiv.org/abs/2404.05829)

## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->

### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
Use of this model is governed by Meta's [Llama 2 Community License Agreement](https://ai.meta.com/llama/license/). Please review and accept the license before downloading the model weights.

### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
SambaLingo should NOT be used for:
- Mission-critical applications
- Applications that involve the safety of others
- Making highly important decisions

## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
Like all LLMs, SambaLingo has certain limitations:
- Hallucination: The model may sometimes generate responses that contain plausible-sounding but factually incorrect or irrelevant information.
- Code Switching: The model might unintentionally switch between languages or dialects within a single response, affecting the coherence and understandability of the output.
- Repetition: The model may produce repetitive phrases or sentences, leading to less engaging and informative responses.
- Coding and Math: The model's performance in generating accurate code or solving complex mathematical problems may be limited.
- Toxicity: The model could inadvertently generate responses containing inappropriate or harmful content.

## Acknowledgments
We extend our heartfelt gratitude to the open-source AI community; this endeavor would not have been possible without open source. SambaNova embraces the open-source community and aspires to actively contribute to this initiative.

We would like to give a special thanks to the following groups:
- Meta for open sourcing Llama 2 and the FLORES-200 dataset
- Nguyen et al. for open sourcing the CulturaX dataset
- CohereAI for releasing AYA-101 and open sourcing a multilingual instruction tuning dataset
- EleutherAI for their open-source evaluation framework
- The Hugging Face H4 team for open sourcing the Zephyr training recipe and the alignment handbook repo

## Cite SambaLingo
```
@misc{csaki2024sambalingo,
      title={SambaLingo: Teaching Large Language Models New Languages},
      author={Zoltan Csaki and Bo Li and Jonathan Li and Qiantong Xu and Pian Pawakapan and Leon Zhang and Yun Du and Hengyu Zhao and Changran Hu and Urmish Thakker},
      year={2024},
      eprint={2404.05829},
      archivePrefix={arXiv},
      primaryClass={cs.CL}
}
```
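For runtimes without chat-template support (e.g. raw GGUF loaders), a minimal sketch that builds the documented template by hand (the sample question is the one from the example above):

```python
def build_prompt(question: str) -> str:
    # Template documented in the card: <|user|>\n{question}</s>\n<|assistant|>\n
    return f"<|user|>\n{question}</s>\n<|assistant|>\n"

print(build_prompt("Mi a jelentőssége a magyar szürkemarhának?"))
```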
evonc/ppo-LunarLander-v2
evonc
2024-05-01T16:22:38Z
0
0
stable-baselines3
[ "stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2024-05-01T16:22:16Z
--- library_name: stable-baselines3 tags: - LunarLander-v2 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: PPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 metrics: - type: mean_reward value: 261.97 +/- 19.34 name: mean_reward verified: false --- # **PPO** Agent playing **LunarLander-v2** This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code ```python from stable_baselines3 import ... from huggingface_sb3 import load_from_hub ... ```
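A minimal sketch completing the TODO above; the checkpoint filename is an assumption (repos pushed with `package_to_hub` conventionally store it as `ppo-LunarLander-v2.zip`):

```python
import gymnasium as gym
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO
from stable_baselines3.common.evaluation import evaluate_policy
from stable_baselines3.common.monitor import Monitor

# Fetch the checkpoint from the Hub (filename assumed, see note above).
checkpoint = load_from_hub(repo_id="evonc/ppo-LunarLander-v2", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)

# Evaluate over a handful of episodes.
env = Monitor(gym.make("LunarLander-v2"))
mean_reward, std_reward = evaluate_policy(model, env, n_eval_episodes=10, deterministic=True)
print(f"mean_reward={mean_reward:.2f} +/- {std_reward:.2f}")
```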
saransh03sharma/mintrec2-llama-2-7b-150
saransh03sharma
2024-05-01T16:15:22Z
5
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-05-01T16:10:04Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
moetezsa/Llama3_on_scigen_v2
moetezsa
2024-05-01T16:05:26Z
0
0
peft
[ "peft", "tensorboard", "safetensors", "trl", "sft", "generated_from_trainer", "base_model:unsloth/llama-3-8b-bnb-4bit", "base_model:adapter:unsloth/llama-3-8b-bnb-4bit", "license:llama2", "region:us" ]
null
2024-05-01T14:31:16Z
--- license: llama2 library_name: peft tags: - trl - sft - generated_from_trainer base_model: unsloth/llama-3-8b-bnb-4bit model-index: - name: Llama3_on_scigen_v2 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Llama3_on_scigen_v2 This model is a fine-tuned version of [unsloth/llama-3-8b-bnb-4bit](https://huggingface.co/unsloth/llama-3-8b-bnb-4bit) on the None dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-06 - train_batch_size: 2 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 8 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: constant - lr_scheduler_warmup_ratio: 0.03 - num_epochs: 30 ### Training results ### Framework versions - PEFT 0.10.0 - Transformers 4.40.1 - Pytorch 2.1.2 - Datasets 2.19.0 - Tokenizers 0.19.1
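Since this repo holds a PEFT (LoRA) adapter rather than full model weights, a minimal loading sketch (assuming the adapter config points at the `unsloth/llama-3-8b-bnb-4bit` base named above and that a tokenizer was pushed alongside the adapter):

```python
import torch
from peft import AutoPeftModelForCausalLM
from transformers import AutoTokenizer

adapter_id = "moetezsa/Llama3_on_scigen_v2"

# AutoPeftModelForCausalLM reads the adapter config, downloads the base
# model recorded there, and attaches the LoRA weights on top.
model = AutoPeftModelForCausalLM.from_pretrained(
    adapter_id, torch_dtype=torch.float16, device_map="auto"
)
# If no tokenizer was pushed with the adapter, load it from the base model instead.
tokenizer = AutoTokenizer.from_pretrained(adapter_id)
```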
WasamiKirua/phi2-haiku-ita-v0.2
WasamiKirua
2024-05-01T16:01:38Z
5
0
transformers
[ "transformers", "safetensors", "phi", "text-generation", "haiku", "japan", "synthetic", "conversational", "custom_code", "it", "dataset:WasamiKirua/haiku-phi2-ita-v0.2", "license:mit", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-04-28T13:34:27Z
---
language:
- it
license: mit
tags:
- haiku
- japan
- synthetic
task_categories:
- text-generation
datasets:
- WasamiKirua/haiku-phi2-ita-v0.2
pipeline_tag: text-generation
---

# Phi2 Haiku Writer in Italian language v0.2

Prompt used for the generation below:
```
Narration: In a land of ancient wisdom and profound teachings,
Suspended moon, in the autumn darkness, infinite peace
```

<img src="https://i.postimg.cc/htjL4cK7/Default-Narration-In-a-land-of-ancient-wisdom-and-profound-tea-0.jpg" alt="cover" border="0" width="1024px">

v0.2 is the result of a major dataset improvement and enrichment process. A DPO version will follow soon!

## Wandb train/loss - eval/loss

<img src="https://i.postimg.cc/8z0hj9CX/Screenshot-from-2024-04-28-15-41-02.png" alt="wandb" border="0" width="1024px">

## Inference example generation

<img src="https://i.postimg.cc/QtL1q71p/Screenshot-from-2024-04-28-15-42-02.png" alt="generation" border="0" width="1024px">

## Prompt format

```
<|im_start|>system
YOUR PROMPT<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
```

## Haiku Generation Prompt Example

The following system prompt can be used/modified:

```
Crea un haiku in italiano che segua queste linee guida specifiche:

Struttura e Metrica: Componi un poema breve di tre versi. Cerca di avvicinarti a una struttura metrica di 5-7-5 sillabe per verso, ma sentiti libero di adattare leggermente il conteggio delle sillabe per mantenere l'armonia e la naturalità del linguaggio italiano.

Elemento Stagionale (Kigo): Includi nel tuo haiku un riferimento chiaro a una delle quattro stagioni (primavera, estate, autunno, inverno). Questo può essere fatto attraverso l'uso di immagini naturali, parole o concetti che evocano specificamente quel periodo dell'anno.

Taglio (Kireji): Usa una forma di pausa, come la punteggiatura (virgola, punto e virgola, punto) o un cambio di immagine o tono tra i versi, per creare un momento di riflessione o sorpresa. Questa pausa dovrebbe servire a dividere il poema in due parti che, pur essendo distinte, rimangono connesse in significato o emozione.

Semplicità ed Essenzialità: Concentrati su immagini e concetti semplici, preferibilmente legati alla natura o a momenti quotidiani, che rivelino qualcosa di più profondo sulla condizione umana, sulla natura o sulla spiritualità. Ogni parola deve essere scelta con cura per la sua capacità di evocare immagini vivide e significati ricchi.

Evita Rime e Metafore Complesse: Mantieni il linguaggio diretto e privo di rime forzate o di complesse figure retoriche. L'haiku dovrebbe preferire la chiarezza e l'immediatezza, con un focus sulla potenza evocativa delle immagini naturali e quotidiane.

Momento Istantaneo: Cerca di catturare l'essenza di un attimo fugace, offrendo una visione o un'osservazione che, pur nella sua brevità, apre a riflessioni più ampie o universali.

Originalità e Personalità: Lascia che la tua voce unica traspaia nell'haiku, esplorando temi, immagini e emozioni che ti sono personali o che ti colpiscono particolarmente. Ricorda che, nonostante le regole, l'haiku è un'espressione artistica soggettiva e personale.
```
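A minimal inference sketch wiring the prompt format above into `transformers` (assumptions: `trust_remote_code=True` is needed since the repo is tagged `custom_code`, and the system prompt is the instruction block shown above):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "WasamiKirua/phi2-haiku-ita-v0.2"
tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(model_id, trust_remote_code=True, device_map="auto")

system_prompt = "Crea un haiku in italiano ..."  # the full instruction block shown above
user_prompt = "Scrivi un haiku sull'autunno"     # "write a haiku about autumn"

# Assemble the documented ChatML-style prompt by hand.
prompt = (
    f"<|im_start|>system\n{system_prompt}<|im_end|>\n"
    f"<|im_start|>user\n{user_prompt}<|im_end|>\n"
    "<|im_start|>assistant\n"
)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output[0][inputs["input_ids"].shape[-1]:]))
```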
sreddy109/m3-test
sreddy109
2024-05-01T16:00:19Z
9
0
transformers
[ "transformers", "safetensors", "xlm-roberta", "text-classification", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2024-05-01T15:42:16Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
Ctdunn/ppo-LunarLander-v2
Ctdunn
2024-05-01T15:56:32Z
0
0
stable-baselines3
[ "stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2024-05-01T15:56:13Z
--- library_name: stable-baselines3 tags: - LunarLander-v2 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: PPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 metrics: - type: mean_reward value: 238.97 +/- 18.44 name: mean_reward verified: false --- # **PPO** Agent playing **LunarLander-v2** This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code ```python from stable_baselines3 import ... from huggingface_sb3 import load_from_hub ... ```
mzbac/llama-3-8B-Instruct-function-calling-v0.2
mzbac
2024-05-01T15:55:43Z
230
21
transformers
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "en", "dataset:mzbac/function-calling-llama-3-format-v1.1", "license:llama3", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-04-28T08:28:41Z
---
license: llama3
datasets:
- mzbac/function-calling-llama-3-format-v1.1
language:
- en
---

# Model
This model has been fine-tuned based on Meta-Llama/Meta-Llama-3-8B-Instruct using mlx-lm with a cleaned-up function-calling dataset that removed invalid JSON data and single quotes around argument values.

## Usage
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch

model_id = "mzbac/llama-3-8B-Instruct-function-calling-v0.2"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

tool = {
    "name": "search_web",
    "description": "Perform a web search for a given search terms.",
    "parameter": {
        "type": "object",
        "properties": {
            "search_terms": {
                "type": "array",
                "items": {"type": "string"},
                "description": "The search queries for which the search is performed.",
                "required": True,
            }
        }
    },
}

messages = [
    {
        "role": "system",
        "content": f"You are a helpful assistant with access to the following functions. Use them if required - {str(tool)}",
    },
    {"role": "user", "content": "Today's news in Melbourne, just for your information, today is April 27, 2014."},
]

input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

terminators = [
    tokenizer.eos_token_id,
    tokenizer.convert_tokens_to_ids("<|eot_id|>"),
]

outputs = model.generate(
    input_ids,
    max_new_tokens=256,
    eos_token_id=terminators,
    do_sample=True,
    temperature=0.1,
)
response = outputs[0]
print(tokenizer.decode(response))

# <|begin_of_text|><|start_header_id|>system<|end_header_id|>
# You are a helpful assistant with access to the following functions. Use them if required - {'name':'search_web', 'description': 'Perform a web search for a given search terms.', 'parameter': {'type': 'object', 'properties': {'search_terms': {'type': 'array', 'items': {'type':'string'}, 'description': 'The search queries for which the search is performed.','required': True}}}}<|eot_id|><|start_header_id|>user<|end_header_id|>
# Today's news in Melbourne, just for your information, today is April 27, 2014.<|eot_id|><|start_header_id|>assistant<|end_header_id|>
# <functioncall> {"name": "search_web", "arguments": {"search_terms": ["Melbourne news", "April 27, 2014"]}}<|eot_id|>
```
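To act on a completion like the one above, the `<functioncall> {...}` payload still needs to be extracted and dispatched. A minimal parsing sketch, assuming completions follow the exact shape shown in the example output:

```python
import json
import re

def parse_function_call(completion: str):
    """Extract (name, arguments) from a '<functioncall> {...}' completion."""
    match = re.search(r"<functioncall>\s*(\{.*\})", completion, re.DOTALL)
    if match is None:
        return None  # plain-text answer, no tool requested
    call = json.loads(match.group(1))
    return call["name"], call["arguments"]

completion = '<functioncall> {"name": "search_web", "arguments": {"search_terms": ["Melbourne news", "April 27, 2014"]}}<|eot_id|>'
print(parse_function_call(completion))
# ('search_web', {'search_terms': ['Melbourne news', 'April 27, 2014']})
```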
JenniferHJF/Flant5-offensive-multilingual
JenniferHJF
2024-05-01T15:55:21Z
4
0
transformers
[ "transformers", "tensorboard", "safetensors", "t5", "text2text-generation", "generated_from_trainer", "base_model:google/flan-t5-base", "base_model:finetune:google/flan-t5-base", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text2text-generation
2024-04-24T07:04:36Z
--- license: apache-2.0 base_model: google/flan-t5-base tags: - generated_from_trainer metrics: - precision - recall - f1 model-index: - name: Flant5-offensive-multilingual results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Flant5-offensive-multilingual This model is a fine-tuned version of [google/flan-t5-base](https://huggingface.co/google/flan-t5-base) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.0012 - Precision: 0.6875 - Recall: 0.6040 - F1: 0.6430 - Total Predictions: 3532 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 4 - eval_batch_size: 4 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Total Predictions | |:-------------:|:-----:|:-----:|:---------------:|:---------:|:------:|:------:|:-----------------:| | 0.2343 | 1.0 | 3753 | 0.0011 | 0.5924 | 0.6481 | 0.6190 | 3532 | | 0.0008 | 2.0 | 7506 | 0.0010 | 0.6903 | 0.5416 | 0.6070 | 3532 | | 0.0006 | 3.0 | 11259 | 0.0011 | 0.6012 | 0.7238 | 0.6569 | 3532 | | 0.0005 | 4.0 | 15012 | 0.0011 | 0.6882 | 0.5765 | 0.6274 | 3532 | | 0.0004 | 5.0 | 18765 | 0.0012 | 0.6875 | 0.6040 | 0.6430 | 3532 | ### Framework versions - Transformers 4.39.3 - Pytorch 2.0.0+cu118 - Datasets 2.18.0 - Tokenizers 0.15.2
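A minimal inference sketch for this checkpoint (hedged: the card does not document the expected prompt or label format, so the instruction string below is an illustrative assumption to adapt):

```python
from transformers import pipeline

classifier = pipeline(
    "text2text-generation",
    model="JenniferHJF/Flant5-offensive-multilingual",
)
text = "your input text here"
# Prompt wording is an assumption; check the training setup for the exact format.
print(classifier(f"Classify the following text as offensive or not: {text}")[0]["generated_text"])
```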
ShenaoZ/0.00001_withdpo_4iters_bs256_531lr_iter_3
ShenaoZ
2024-05-01T15:55:18Z
4
0
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "alignment-handbook", "generated_from_trainer", "trl", "dpo", "conversational", "dataset:updated", "dataset:original", "base_model:ShenaoZ/0.00001_withdpo_4iters_bs256_531lr_iter_2", "base_model:finetune:ShenaoZ/0.00001_withdpo_4iters_bs256_531lr_iter_2", "license:mit", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-05-01T14:53:34Z
--- license: mit base_model: ShenaoZ/0.00001_withdpo_4iters_bs256_531lr_iter_2 tags: - alignment-handbook - generated_from_trainer - trl - dpo - generated_from_trainer datasets: - updated - original model-index: - name: 0.00001_withdpo_4iters_bs256_531lr_iter_3 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # 0.00001_withdpo_4iters_bs256_531lr_iter_3 This model is a fine-tuned version of [ShenaoZ/0.00001_withdpo_4iters_bs256_531lr_iter_2](https://huggingface.co/ShenaoZ/0.00001_withdpo_4iters_bs256_531lr_iter_2) on the updated and the original datasets. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 3e-07 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - distributed_type: multi-GPU - num_devices: 8 - gradient_accumulation_steps: 4 - total_train_batch_size: 256 - total_eval_batch_size: 64 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 1 ### Training results ### Framework versions - Transformers 4.36.2 - Pytorch 2.1.2+cu121 - Datasets 2.14.6 - Tokenizers 0.15.2
k1101jh/dqn-SpaceInvadersNoFrameskip-v4
k1101jh
2024-05-01T15:52:00Z
0
0
stable-baselines3
[ "stable-baselines3", "SpaceInvadersNoFrameskip-v4", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2024-05-01T15:51:20Z
--- library_name: stable-baselines3 tags: - SpaceInvadersNoFrameskip-v4 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: DQN results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: SpaceInvadersNoFrameskip-v4 type: SpaceInvadersNoFrameskip-v4 metrics: - type: mean_reward value: 820.50 +/- 397.27 name: mean_reward verified: false --- # **DQN** Agent playing **SpaceInvadersNoFrameskip-v4** This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3) and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo). The RL Zoo is a training framework for Stable Baselines3 reinforcement learning agents, with hyperparameter optimization and pre-trained agents included. ## Usage (with SB3 RL Zoo) RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/> SB3: https://github.com/DLR-RM/stable-baselines3<br/> SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib Install the RL Zoo (with SB3 and SB3-Contrib): ```bash pip install rl_zoo3 ``` ``` # Download model and save it into the logs/ folder python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga k1101jh -f logs/ python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ ``` If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do: ``` python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga k1101jh -f logs/ python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ ``` ## Training (with the RL Zoo) ``` python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ # Upload the model and generate video (when possible) python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga k1101jh ``` ## Hyperparameters ```python OrderedDict([('batch_size', 32), ('buffer_size', 100000), ('env_wrapper', ['stable_baselines3.common.atari_wrappers.AtariWrapper']), ('exploration_final_eps', 0.01), ('exploration_fraction', 0.1), ('frame_stack', 4), ('gradient_steps', 1), ('learning_rate', 0.0001), ('learning_starts', 100000), ('n_timesteps', 1000000.0), ('optimize_memory_usage', False), ('policy', 'CnnPolicy'), ('target_update_interval', 1000), ('train_freq', 4), ('normalize', False)]) ``` # Environment Arguments ```python {'render_mode': 'rgb_array'} ```
salangarica/run
salangarica
2024-05-01T15:43:42Z
7
0
peft
[ "peft", "tensorboard", "safetensors", "trl", "sft", "generated_from_trainer", "base_model:salangarica/BioMistral-LLM", "base_model:adapter:salangarica/BioMistral-LLM", "region:us" ]
null
2024-04-26T18:59:08Z
--- library_name: peft tags: - trl - sft - generated_from_trainer base_model: salangarica/BioMistral-LLM model-index: - name: run results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # run This model is a fine-tuned version of [salangarica/BioMistral-LLM](https://huggingface.co/salangarica/BioMistral-LLM) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.3032 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: constant - lr_scheduler_warmup_ratio: 0.03 - num_epochs: 20.0 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 0.131 | 1.0 | 110 | 0.1747 | | 0.1903 | 2.0 | 220 | 0.1724 | | 0.0928 | 3.0 | 330 | 0.2107 | | 0.0738 | 4.0 | 440 | 0.2131 | | 0.0735 | 5.0 | 550 | 0.3032 | ### Framework versions - PEFT 0.8.2 - Transformers 4.38.1 - Pytorch 2.0.0 - Datasets 2.15.0 - Tokenizers 0.15.2
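Since this repository stores a PEFT adapter rather than full model weights, inference typically loads the base model first and then attaches the adapter. A minimal sketch, not part of the auto-generated card; device and dtype choices are left at their defaults:

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the base model the adapter was trained on.
base_model = AutoModelForCausalLM.from_pretrained("salangarica/BioMistral-LLM")
tokenizer = AutoTokenizer.from_pretrained("salangarica/BioMistral-LLM")

# Attach the fine-tuned adapter from this repository on top of the base weights.
model = PeftModel.from_pretrained(base_model, "salangarica/run")
model.eval()
```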
AprendeIngenia/control_de_acceso_facial_con_ia
AprendeIngenia
2024-05-01T15:42:53Z
0
2
null
[ "region:us" ]
null
2024-05-01T13:13:09Z
# control-de-acceso-facial-con-ia
Hi everyone! In this repository you will find the code to build your own access-control system with facial recognition, powered by artificial intelligence.

### Introductory concepts:
- This repository contains the Python source code to run and use our intelligent access-control system, based on computer vision and artificial intelligence.
- To get started, we recommend reviewing a few introductory concepts to better understand how everything works, so we leave you the explanation in this [video.](https://youtu.be/jxiCDufWop8?si=gtu70gDS1swRXZRB)
- You can find the models [here.](https://huggingface.co/AprendeIngenia/control_de_acceso_facial_con_ia/tree/main)

![3D](https://github.com/AprendeIngenia/control-de-acceso-facial-con-ia/assets/85022752/6f8e7705-d33e-47b9-a6b4-29189b38496b)

### Installation:
To use this code, make sure you meet the following prerequisites:

- Supported operating systems: Windows, Linux, or macOS
- Python version: 3.10
- Additional packages: NumPy, OpenCV, TensorFlow, etc. See the [requirements.txt](https://huggingface.co/AprendeIngenia/control_de_acceso_facial_con_ia/blob/main/requirements.txt) file for the full list of dependencies.

### Contact
If you have any questions about this project, feel free to reach out on our YouTube channel [Aprende e Ingenia](https://www.youtube.com/@AprendeIngenia/videos). We will get back to you as soon as possible.

Thanks for visiting our repository, and we hope you enjoy working with our code. :smile:

# Remember that you can support further development:
Simply by subscribing to my YouTube channel:
- [YouTube channel](https://www.youtube.com/channel/UCzwHEOCbsZLjfELperJ6VeQ/videos)

### And by following me on social media:
- [Instagram](https://www.instagram.com/santiagsanchezr/)
- [Twitter](https://twitter.com/SantiagSanchezR)
ai-maker-space/gen-z-translate-llama-3-instruct-v1
ai-maker-space
2024-05-01T15:39:40Z
9
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-05-01T15:33:47Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
kirillgoltsman/dreambooth
kirillgoltsman
2024-05-01T15:36:42Z
27
0
diffusers
[ "diffusers", "tensorboard", "safetensors", "text-to-image", "dreambooth", "diffusers-training", "stable-diffusion", "stable-diffusion-diffusers", "base_model:runwayml/stable-diffusion-v1-5", "base_model:finetune:runwayml/stable-diffusion-v1-5", "license:creativeml-openrail-m", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
2024-05-01T15:11:37Z
---
license: creativeml-openrail-m
library_name: diffusers
tags:
- text-to-image
- dreambooth
- diffusers-training
- stable-diffusion
- stable-diffusion-diffusers
inference: true
base_model: runwayml/stable-diffusion-v1-5
instance_prompt: a photo of sks dog
---

<!-- This model card has been generated automatically according to the information the training script had access to. You should probably proofread and complete it, then remove this comment. -->

# DreamBooth - kirillgoltsman/dreambooth

This is a DreamBooth model derived from runwayml/stable-diffusion-v1-5. The weights were trained on a photo of sks dog using [DreamBooth](https://dreambooth.github.io/).
You can find some example images below.

![img_0](./image_0.png)
![img_1](./image_1.png)
![img_2](./image_2.png)
![img_3](./image_3.png)

DreamBooth for the text encoder was enabled: False.

## Intended uses & limitations

#### How to use

```python
# A minimal inference sketch (the training script did not generate one);
# the device and dtype choices below are assumptions, adjust as needed.
import torch
from diffusers import StableDiffusionPipeline

pipeline = StableDiffusionPipeline.from_pretrained(
    "kirillgoltsman/dreambooth", torch_dtype=torch.float16
).to("cuda")

image = pipeline("a photo of sks dog in a bucket").images[0]
image.save("sks_dog.png")
```

#### Limitations and bias

[TODO: provide examples of latent issues and potential remediations]

## Training details

[TODO: describe the data used to train the model]
Vivekg91/code-llama-7b-text-to-sql
Vivekg91
2024-05-01T15:36:26Z
4
0
peft
[ "peft", "tensorboard", "safetensors", "trl", "sft", "generated_from_trainer", "dataset:generator", "base_model:codellama/CodeLlama-7b-hf", "base_model:adapter:codellama/CodeLlama-7b-hf", "license:llama2", "region:us" ]
null
2024-05-01T14:55:50Z
--- license: llama2 library_name: peft tags: - trl - sft - generated_from_trainer base_model: codellama/CodeLlama-7b-hf datasets: - generator model-index: - name: code-llama-7b-text-to-sql results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # code-llama-7b-text-to-sql This model is a fine-tuned version of [codellama/CodeLlama-7b-hf](https://huggingface.co/codellama/CodeLlama-7b-hf) on the generator dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 1 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 2 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: constant - lr_scheduler_warmup_ratio: 0.03 - num_epochs: 1 ### Training results ### Framework versions - PEFT 0.7.2.dev0 - Transformers 4.36.2 - Pytorch 2.1.2+cu121 - Datasets 2.16.1 - Tokenizers 0.15.2
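Because this repository stores a PEFT adapter, inference loads the base CodeLlama model and applies the adapter on top. A hedged sketch; merging the adapter is optional, and the text-to-SQL prompt below is purely illustrative, since the SFT prompt template is not documented in this card:

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base = AutoModelForCausalLM.from_pretrained("codellama/CodeLlama-7b-hf")
tokenizer = AutoTokenizer.from_pretrained("codellama/CodeLlama-7b-hf")

# Apply the adapter and merge it into the base weights for plain inference.
model = PeftModel.from_pretrained(base, "Vivekg91/code-llama-7b-text-to-sql")
model = model.merge_and_unload()

# Hypothetical prompt; the actual SFT template is not documented in this card.
prompt = (
    "-- Table: users(id, name, signup_date)\n"
    "-- Question: How many users signed up in 2023?\n"
    "SELECT"
)
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```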
philipphager/baidu-ultr_uva-bert_dla
philipphager
2024-05-01T15:35:58Z
7
0
transformers
[ "transformers", "safetensors", "bert", "dataset:philipphager/baidu-ultr-pretrain", "dataset:philipphager/baidu-ultr_uva-mlm-ctr", "arxiv:2207.03051", "arxiv:1804.05938", "arxiv:2404.02543", "license:mit", "co2_eq_emissions", "endpoints_compatible", "region:us" ]
null
2024-04-24T15:27:05Z
--- license: mit datasets: - philipphager/baidu-ultr-pretrain - philipphager/baidu-ultr_uva-mlm-ctr metrics: - log-likelihood - dcg@1 - dcg@3 - dcg@5 - dcg@10 - ndcg@10 - mrr@10 co2_eq_emissions: emissions: 2090 source: "Calculated using the [ML CO2 impact calculator](https://mlco2.github.io/impact/#compute), training for 4 x 45 hours with a carbon efficiency of 0.029 kg/kWh. You can inspect the carbon efficiency of the French national grid provider here: https://www.rte-france.com/eco2mix/les-emissions-de-co2-par-kwh-produit-en-france" training_type: "Pre-training" geographical_location: "Grenoble, France" hardware_used: "4 NVIDIA H100-80GB GPUs" --- # Listwise MonoBERT trained on Baidu-ULTR using the Dual Learning Algorithm (DLA) A flax-based MonoBERT cross encoder trained on the [Baidu-ULTR](https://arxiv.org/abs/2207.03051) dataset with a **listwise DLA objective on clicks**. Following [Ai et al.](https://arxiv.org/abs/1804.05938), the dual learning algorithm jointly infers item relevance (using a BERT model) and position bias (in our case, a single embedding parameter per rank), both by optimizing a **listwise softmax cross-entropy loss**. For more info, [read our paper](https://arxiv.org/abs/2404.02543) and [find the code for this model here](https://github.com/philipphager/baidu-bert-model). ## Test Results on Baidu-ULTR Ranking performance is measured in DCG, nDCG, and MRR on expert annotations (6,985 queries). Click prediction performance is measured in log-likelihood on one test partition of user clicks (≈297k queries). | Model | Log-likelihood | DCG@1 | DCG@3 | DCG@5 | DCG@10 | nDCG@10 | MRR@10 | |------------------------------------------------------------------------------------------------|----------------|-------|-------|-------|--------|---------|--------| | [Pointwise Naive](https://huggingface.co/philipphager/baidu-ultr_uva-bert_naive-pointwise) | 0.227 | 1.641 | 3.462 | 4.752 | 7.251 | 0.357 | 0.609 | | [Pointwise Two-Tower](https://huggingface.co/philipphager/baidu-ultr_uva-bert_twotower) | 0.218 | 1.629 | 3.471 | 4.822 | 7.456 | 0.367 | 0.607 | | [Pointwise IPS](https://huggingface.co/philipphager/baidu-ultr_uva-bert_ips-pointwise) | 0.222 | 1.295 | 2.811 | 3.977 | 6.296 | 0.307 | 0.534 | | [Listwise Naive](https://huggingface.co/philipphager/baidu-ultr_uva-bert_naive-listwise) | - | 1.947 | 4.108 | 5.614 | 8.478 | 0.405 | 0.639 | | [Listwise IPS](https://huggingface.co/philipphager/baidu-ultr_uva-bert_ips-listwise) | - | 1.671 | 3.530 | 4.873 | 7.450 | 0.361 | 0.603 | | [Listwise DLA](https://huggingface.co/philipphager/baidu-ultr_uva-bert_dla) | - | 1.796 | 3.730 | 5.125 | 7.802 | 0.377 | 0.615 | ## Usage Here is an example of downloading the model and calling it for inference on a mock batch of input data. For more details on how to use the model on the Baidu-ULTR dataset, take a look at our [training](https://github.com/philipphager/baidu-bert-model/blob/main/main.py) and [evaluation scripts](https://github.com/philipphager/baidu-bert-model/blob/main/eval.py) in our code repository. 
```Python import jax.numpy as jnp from src.model import DLACrossEncoder model = DLACrossEncoder.from_pretrained( "philipphager/baidu-ultr_uva-bert_dla", ) # Mock batch following Baidu-ULTR with 4 documents, each with 8 tokens batch = { # Query_id for each document "query_id": jnp.array([1, 1, 1, 1]), # Document position in SERP "positions": jnp.array([1, 2, 3, 4]), # Token ids for: [CLS] Query [SEP] Document "tokens": jnp.array([ [2, 21448, 21874, 21436, 1, 20206, 4012, 2860], [2, 21448, 21874, 21436, 1, 16794, 4522, 2082], [2, 21448, 21874, 21436, 1, 20206, 10082, 9773], [2, 21448, 21874, 21436, 1, 2618, 8520, 2860], ]), # Specify if a token id belongs to the query (0) or document (1) "token_types": jnp.array([ [0, 0, 0, 0, 1, 1, 1, 1], [0, 0, 0, 0, 1, 1, 1, 1], [0, 0, 0, 0, 1, 1, 1, 1], [0, 0, 0, 0, 1, 1, 1, 1], ]), # Marks if a token should be attended to (True) or ignored, e.g., padding tokens (False): "attention_mask": jnp.array([ [True, True, True, True, True, True, True, True], [True, True, True, True, True, True, True, True], [True, True, True, True, True, True, True, True], [True, True, True, True, True, True, True, True], ]), } outputs = model(batch, train=False) print(outputs) ``` ## Reference ``` @inproceedings{Hager2024BaiduULTR, author = {Philipp Hager and Romain Deffayet and Jean-Michel Renders and Onno Zoeter and Maarten de Rijke}, title = {Unbiased Learning to Rank Meets Reality: Lessons from Baidu’s Large-Scale Search Dataset}, booktitle = {Proceedings of the 47th International ACM SIGIR Conference on Research and Development in Information Retrieval (SIGIR`24)}, organization = {ACM}, year = {2024}, } ```
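For readers who want to see the shape of the objective, here is a rough JAX sketch of the two DLA losses described above. It follows Ai et al.'s formulation (inverse weights taken from the other model, normalized by the first list position, and excluded from the gradient); all names and shapes are illustrative assumptions, not this repository's exact implementation.

```Python
import jax
import jax.numpy as jnp


def dla_losses(relevance_scores, propensity_params, clicks):
    # relevance_scores: (batch, list_size) scores from the BERT ranker.
    # propensity_params: (list_size,) one examination parameter per rank.
    # clicks: (batch, list_size) binary click labels.
    exam_log_probs = jax.nn.log_softmax(propensity_params)
    rank_log_probs = jax.nn.log_softmax(relevance_scores, axis=-1)

    # Inverse weights come from the *other* model, are normalized by the
    # first list position, and are excluded from the gradient.
    inv_exam = jax.lax.stop_gradient(jnp.exp(exam_log_probs[0] - exam_log_probs))
    inv_rank = jax.lax.stop_gradient(jnp.exp(rank_log_probs[:, :1] - rank_log_probs))

    # Two listwise softmax cross-entropy losses, one per model.
    ranker_loss = -jnp.mean(jnp.sum(inv_exam * clicks * rank_log_probs, axis=-1))
    propensity_loss = -jnp.mean(jnp.sum(inv_rank * clicks * exam_log_probs, axis=-1))
    return ranker_loss, propensity_loss
```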
philipphager/baidu-ultr_uva-bert_ips-listwise
philipphager
2024-05-01T15:35:49Z
10
0
transformers
[ "transformers", "safetensors", "bert", "dataset:philipphager/baidu-ultr-pretrain", "dataset:philipphager/baidu-ultr_uva-mlm-ctr", "arxiv:2207.03051", "arxiv:1804.05938", "arxiv:2404.02543", "license:mit", "co2_eq_emissions", "endpoints_compatible", "region:us" ]
null
2024-04-24T13:53:15Z
---
license: mit
datasets:
- philipphager/baidu-ultr-pretrain
- philipphager/baidu-ultr_uva-mlm-ctr
metrics:
- log-likelihood
- dcg@1
- dcg@3
- dcg@5
- dcg@10
- ndcg@10
- mrr@10
co2_eq_emissions:
  emissions: 2090
  source: "Calculated using the [ML CO2 impact calculator](https://mlco2.github.io/impact/#compute), training for 4 x 45 hours with a carbon efficiency of 0.029 kg/kWh. You can inspect the carbon efficiency of the French national grid provider here: https://www.rte-france.com/eco2mix/les-emissions-de-co2-par-kwh-produit-en-france"
  training_type: "Pre-training"
  geographical_location: "Grenoble, France"
  hardware_used: "4 NVIDIA H100-80GB GPUs"
---

# Listwise MonoBERT trained on Baidu-ULTR with Inverse Propensity Scoring (IPS)
A flax-based MonoBERT cross encoder trained on the [Baidu-ULTR](https://arxiv.org/abs/2207.03051) dataset with a **listwise softmax cross-entropy loss with IPS correction** adapted from the work of [Ai et al.](https://arxiv.org/abs/1804.05938). The loss uses inverse propensity scoring to mitigate position bias in click data by up-weighting clicks on items that are less likely to be observed by users. For more info, [read our paper](https://arxiv.org/abs/2404.02543) and [find the code for this model here](https://github.com/philipphager/baidu-bert-model).

## Test Results on Baidu-ULTR
Ranking performance is measured in DCG, nDCG, and MRR on expert annotations (6,985 queries). Click prediction performance is measured in log-likelihood on one test partition of user clicks (≈297k queries).

| Model | Log-likelihood | DCG@1 | DCG@3 | DCG@5 | DCG@10 | nDCG@10 | MRR@10 |
|------------------------------------------------------------------------------------------------|----------------|-------|-------|-------|--------|---------|--------|
| [Pointwise Naive](https://huggingface.co/philipphager/baidu-ultr_uva-bert_naive-pointwise) | 0.227 | 1.641 | 3.462 | 4.752 | 7.251 | 0.357 | 0.609 |
| [Pointwise Two-Tower](https://huggingface.co/philipphager/baidu-ultr_uva-bert_twotower) | 0.218 | 1.629 | 3.471 | 4.822 | 7.456 | 0.367 | 0.607 |
| [Pointwise IPS](https://huggingface.co/philipphager/baidu-ultr_uva-bert_ips-pointwise) | 0.222 | 1.295 | 2.811 | 3.977 | 6.296 | 0.307 | 0.534 |
| [Listwise Naive](https://huggingface.co/philipphager/baidu-ultr_uva-bert_naive-listwise) | - | 1.947 | 4.108 | 5.614 | 8.478 | 0.405 | 0.639 |
| [Listwise IPS](https://huggingface.co/philipphager/baidu-ultr_uva-bert_ips-listwise) | - | 1.671 | 3.530 | 4.873 | 7.450 | 0.361 | 0.603 |
| [Listwise DLA](https://huggingface.co/philipphager/baidu-ultr_uva-bert_dla) | - | 1.796 | 3.730 | 5.125 | 7.802 | 0.377 | 0.615 |

## Usage
Here is an example of downloading the model and calling it for inference on a mock batch of input data. For more details on how to use the model on the Baidu-ULTR dataset, take a look at our [training](https://github.com/philipphager/baidu-bert-model/blob/main/main.py) and [evaluation scripts](https://github.com/philipphager/baidu-bert-model/blob/main/eval.py) in our code repository.
```Python import jax.numpy as jnp from src.model import ListwiseIPSCrossEncoder model = ListwiseIPSCrossEncoder.from_pretrained( "philipphager/baidu-ultr_uva-bert_ips-listwise", ) # Mock batch following Baidu-ULTR with 4 documents, each with 8 tokens batch = { # Query_id for each document "query_id": jnp.array([1, 1, 1, 1]), # Document position in SERP "positions": jnp.array([1, 2, 3, 4]), # Token ids for: [CLS] Query [SEP] Document "tokens": jnp.array([ [2, 21448, 21874, 21436, 1, 20206, 4012, 2860], [2, 21448, 21874, 21436, 1, 16794, 4522, 2082], [2, 21448, 21874, 21436, 1, 20206, 10082, 9773], [2, 21448, 21874, 21436, 1, 2618, 8520, 2860], ]), # Specify if a token id belongs to the query (0) or document (1) "token_types": jnp.array([ [0, 0, 0, 0, 1, 1, 1, 1], [0, 0, 0, 0, 1, 1, 1, 1], [0, 0, 0, 0, 1, 1, 1, 1], [0, 0, 0, 0, 1, 1, 1, 1], ]), # Marks if a token should be attended to (True) or ignored, e.g., padding tokens (False): "attention_mask": jnp.array([ [True, True, True, True, True, True, True, True], [True, True, True, True, True, True, True, True], [True, True, True, True, True, True, True, True], [True, True, True, True, True, True, True, True], ]), } outputs = model(batch, train=False) print(outputs) ``` ## Reference ``` @inproceedings{Hager2024BaiduULTR, author = {Philipp Hager and Romain Deffayet and Jean-Michel Renders and Onno Zoeter and Maarten de Rijke}, title = {Unbiased Learning to Rank Meets Reality: Lessons from Baidu’s Large-Scale Search Dataset}, booktitle = {Proceedings of the 47th International ACM SIGIR Conference on Research and Development in Information Retrieval (SIGIR`24)}, organization = {ACM}, year = {2024}, } ```
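To make the correction concrete, here is a minimal JAX sketch of an IPS-weighted listwise softmax cross-entropy, assuming examination propensities are estimated separately; names and shapes are illustrative, not this repository's exact implementation.

```Python
import jax
import jax.numpy as jnp


def ips_listwise_loss(relevance_scores, propensities, clicks):
    # Clicks at ranks that are rarely examined are up-weighted by 1/propensity,
    # which debiases the listwise softmax cross-entropy under an examination model.
    weights = clicks / propensities
    log_probs = jax.nn.log_softmax(relevance_scores, axis=-1)
    return -jnp.mean(jnp.sum(weights * log_probs, axis=-1))
```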
philipphager/baidu-ultr_uva-bert_naive-listwise
philipphager
2024-05-01T15:35:42Z
6
1
transformers
[ "transformers", "safetensors", "bert", "dataset:philipphager/baidu-ultr-pretrain", "dataset:philipphager/baidu-ultr_uva-mlm-ctr", "arxiv:2207.03051", "arxiv:2404.02543", "license:mit", "co2_eq_emissions", "endpoints_compatible", "region:us" ]
null
2024-04-24T10:10:10Z
--- license: mit datasets: - philipphager/baidu-ultr-pretrain - philipphager/baidu-ultr_uva-mlm-ctr metrics: - log-likelihood - dcg@1 - dcg@3 - dcg@5 - dcg@10 - ndcg@10 - mrr@10 co2_eq_emissions: emissions: 2090 source: "Calculated using the [ML CO2 impact calculator](https://mlco2.github.io/impact/#compute), training for 4 x 45 hours with a carbon efficiency of 0.029 kg/kWh. You can inspect the carbon efficiency of the French national grid provider here: https://www.rte-france.com/eco2mix/les-emissions-de-co2-par-kwh-produit-en-france" training_type: "Pre-training" geographical_location: "Grenoble, France" hardware_used: "4 NVIDIA H100-80GB GPUs" --- # Naive Listwise MonoBERT trained on Baidu-ULTR A flax-based MonoBERT cross encoder trained on the [Baidu-ULTR](https://arxiv.org/abs/2207.03051) dataset with a **listwise softmax cross-entropy loss on clicks**. The loss is called "naive" as we use user clicks as a signal of relevance without any additional position bias correction. For more info, [read our paper](https://arxiv.org/abs/2404.02543) and [find the code for this model here](https://github.com/philipphager/baidu-bert-model). ## Test Results on Baidu-ULTR Ranking performance is measured in DCG, nDCG, and MRR on expert annotations (6,985 queries). Click prediction performance is measured in log-likelihood on one test partition of user clicks (≈297k queries). | Model | Log-likelihood | DCG@1 | DCG@3 | DCG@5 | DCG@10 | nDCG@10 | MRR@10 | |------------------------------------------------------------------------------------------------|----------------|-------|-------|-------|--------|---------|--------| | [Pointwise Naive](https://huggingface.co/philipphager/baidu-ultr_uva-bert_naive-pointwise) | 0.227 | 1.641 | 3.462 | 4.752 | 7.251 | 0.357 | 0.609 | | [Pointwise Two-Tower](https://huggingface.co/philipphager/baidu-ultr_uva-bert_twotower) | 0.218 | 1.629 | 3.471 | 4.822 | 7.456 | 0.367 | 0.607 | | [Pointwise IPS](https://huggingface.co/philipphager/baidu-ultr_uva-bert_ips-pointwise) | 0.222 | 1.295 | 2.811 | 3.977 | 6.296 | 0.307 | 0.534 | | [Listwise Naive](https://huggingface.co/philipphager/baidu-ultr_uva-bert_naive-listwise) | - | 1.947 | 4.108 | 5.614 | 8.478 | 0.405 | 0.639 | | [Listwise IPS](https://huggingface.co/philipphager/baidu-ultr_uva-bert_ips-listwise) | - | 1.671 | 3.530 | 4.873 | 7.450 | 0.361 | 0.603 | | [Listwise DLA](https://huggingface.co/philipphager/baidu-ultr_uva-bert_dla) | - | 1.796 | 3.730 | 5.125 | 7.802 | 0.377 | 0.615 | ## Usage Here is an example of downloading the model and calling it for inference on a mock batch of input data. For more details on how to use the model on the Baidu-ULTR dataset, take a look at our [training](https://github.com/philipphager/baidu-bert-model/blob/main/main.py) and [evaluation scripts](https://github.com/philipphager/baidu-bert-model/blob/main/eval.py) in our code repository. 
```Python import jax.numpy as jnp from src.model import ListwiseCrossEncoder model = ListwiseCrossEncoder.from_pretrained( "philipphager/baidu-ultr_uva-bert_naive-listwise", ) # Mock batch following Baidu-ULTR with 4 documents, each with 8 tokens batch = { # Query_id for each document "query_id": jnp.array([1, 1, 1, 1]), # Document position in SERP "positions": jnp.array([1, 2, 3, 4]), # Token ids for: [CLS] Query [SEP] Document "tokens": jnp.array([ [2, 21448, 21874, 21436, 1, 20206, 4012, 2860], [2, 21448, 21874, 21436, 1, 16794, 4522, 2082], [2, 21448, 21874, 21436, 1, 20206, 10082, 9773], [2, 21448, 21874, 21436, 1, 2618, 8520, 2860], ]), # Specify if a token id belongs to the query (0) or document (1) "token_types": jnp.array([ [0, 0, 0, 0, 1, 1, 1, 1], [0, 0, 0, 0, 1, 1, 1, 1], [0, 0, 0, 0, 1, 1, 1, 1], [0, 0, 0, 0, 1, 1, 1, 1], ]), # Marks if a token should be attended to (True) or ignored, e.g., padding tokens (False): "attention_mask": jnp.array([ [True, True, True, True, True, True, True, True], [True, True, True, True, True, True, True, True], [True, True, True, True, True, True, True, True], [True, True, True, True, True, True, True, True], ]), } outputs = model(batch, train=False) print(outputs) ``` ## Reference ``` @inproceedings{Hager2024BaiduULTR, author = {Philipp Hager and Romain Deffayet and Jean-Michel Renders and Onno Zoeter and Maarten de Rijke}, title = {Unbiased Learning to Rank Meets Reality: Lessons from Baidu’s Large-Scale Search Dataset}, booktitle = {Proceedings of the 47th International ACM SIGIR Conference on Research and Development in Information Retrieval (SIGIR`24)}, organization = {ACM}, year = {2024}, } ```
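For comparison with the IPS and DLA variants, the naive objective reduces to a plain listwise softmax cross-entropy that treats raw clicks as relevance labels. A minimal JAX sketch with illustrative names, not this repository's exact code:

```Python
import jax
import jax.numpy as jnp


def naive_listwise_loss(relevance_scores, clicks):
    # No bias correction: every click counts the same, regardless of rank.
    log_probs = jax.nn.log_softmax(relevance_scores, axis=-1)
    return -jnp.mean(jnp.sum(clicks * log_probs, axis=-1))
```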
philipphager/baidu-ultr_uva-bert_ips-pointwise
philipphager
2024-05-01T15:35:31Z
6
0
transformers
[ "transformers", "safetensors", "bert", "dataset:philipphager/baidu-ultr-pretrain", "dataset:philipphager/baidu-ultr_uva-mlm-ctr", "arxiv:2207.03051", "arxiv:1809.03207", "arxiv:1909.03601", "arxiv:2404.02543", "license:mit", "co2_eq_emissions", "endpoints_compatible", "region:us" ]
null
2024-04-24T13:51:04Z
---
license: mit
datasets:
- philipphager/baidu-ultr-pretrain
- philipphager/baidu-ultr_uva-mlm-ctr
metrics:
- log-likelihood
- dcg@1
- dcg@3
- dcg@5
- dcg@10
- ndcg@10
- mrr@10
co2_eq_emissions:
  emissions: 2090
  source: "Calculated using the [ML CO2 impact calculator](https://mlco2.github.io/impact/#compute), training for 4 x 45 hours with a carbon efficiency of 0.029 kg/kWh. You can inspect the carbon efficiency of the French national grid provider here: https://www.rte-france.com/eco2mix/les-emissions-de-co2-par-kwh-produit-en-france"
  training_type: "Pre-training"
  geographical_location: "Grenoble, France"
  hardware_used: "4 NVIDIA H100-80GB GPUs"
---

# Pointwise MonoBERT trained on Baidu-ULTR with Inverse Propensity Scoring (IPS)
A flax-based MonoBERT cross encoder trained on the [Baidu-ULTR](https://arxiv.org/abs/2207.03051) dataset with the **pointwise sigmoid cross-entropy loss with IPS correction** suggested by [Bekker et al.](https://arxiv.org/abs/1809.03207) and [Saito et al.](https://arxiv.org/abs/1909.03601). The loss uses inverse propensity scoring to mitigate position bias in click data by up-weighting clicks on items that are less likely to be observed by users. For more info, [read our paper](https://arxiv.org/abs/2404.02543) and [find the code for this model here](https://github.com/philipphager/baidu-bert-model).

## Test Results on Baidu-ULTR
Ranking performance is measured in DCG, nDCG, and MRR on expert annotations (6,985 queries). Click prediction performance is measured in log-likelihood on one test partition of user clicks (≈297k queries).

| Model | Log-likelihood | DCG@1 | DCG@3 | DCG@5 | DCG@10 | nDCG@10 | MRR@10 |
|------------------------------------------------------------------------------------------------|----------------|-------|-------|-------|--------|---------|--------|
| [Pointwise Naive](https://huggingface.co/philipphager/baidu-ultr_uva-bert_naive-pointwise) | 0.227 | 1.641 | 3.462 | 4.752 | 7.251 | 0.357 | 0.609 |
| [Pointwise Two-Tower](https://huggingface.co/philipphager/baidu-ultr_uva-bert_twotower) | 0.218 | 1.629 | 3.471 | 4.822 | 7.456 | 0.367 | 0.607 |
| [Pointwise IPS](https://huggingface.co/philipphager/baidu-ultr_uva-bert_ips-pointwise) | 0.222 | 1.295 | 2.811 | 3.977 | 6.296 | 0.307 | 0.534 |
| [Listwise Naive](https://huggingface.co/philipphager/baidu-ultr_uva-bert_naive-listwise) | - | 1.947 | 4.108 | 5.614 | 8.478 | 0.405 | 0.639 |
| [Listwise IPS](https://huggingface.co/philipphager/baidu-ultr_uva-bert_ips-listwise) | - | 1.671 | 3.530 | 4.873 | 7.450 | 0.361 | 0.603 |
| [Listwise DLA](https://huggingface.co/philipphager/baidu-ultr_uva-bert_dla) | - | 1.796 | 3.730 | 5.125 | 7.802 | 0.377 | 0.615 |

## Usage
Here is an example of downloading the model and calling it for inference on a mock batch of input data. For more details on how to use the model on the Baidu-ULTR dataset, take a look at our [training](https://github.com/philipphager/baidu-bert-model/blob/main/main.py) and [evaluation scripts](https://github.com/philipphager/baidu-bert-model/blob/main/eval.py) in our code repository.
```Python import jax.numpy as jnp from src.model import IPSCrossEncoder model = IPSCrossEncoder.from_pretrained( "philipphager/baidu-ultr_uva-bert_ips-pointwise", ) # Mock batch following Baidu-ULTR with 4 documents, each with 8 tokens batch = { # Query_id for each document "query_id": jnp.array([1, 1, 1, 1]), # Document position in SERP "positions": jnp.array([1, 2, 3, 4]), # Token ids for: [CLS] Query [SEP] Document "tokens": jnp.array([ [2, 21448, 21874, 21436, 1, 20206, 4012, 2860], [2, 21448, 21874, 21436, 1, 16794, 4522, 2082], [2, 21448, 21874, 21436, 1, 20206, 10082, 9773], [2, 21448, 21874, 21436, 1, 2618, 8520, 2860], ]), # Specify if a token id belongs to the query (0) or document (1) "token_types": jnp.array([ [0, 0, 0, 0, 1, 1, 1, 1], [0, 0, 0, 0, 1, 1, 1, 1], [0, 0, 0, 0, 1, 1, 1, 1], [0, 0, 0, 0, 1, 1, 1, 1], ]), # Marks if a token should be attended to (True) or ignored, e.g., padding tokens (False): "attention_mask": jnp.array([ [True, True, True, True, True, True, True, True], [True, True, True, True, True, True, True, True], [True, True, True, True, True, True, True, True], [True, True, True, True, True, True, True, True], ]), } outputs = model(batch, train=False) print(outputs) ``` ## Reference ``` @inproceedings{Hager2024BaiduULTR, author = {Philipp Hager and Romain Deffayet and Jean-Michel Renders and Onno Zoeter and Maarten de Rijke}, title = {Unbiased Learning to Rank Meets Reality: Lessons from Baidu’s Large-Scale Search Dataset}, booktitle = {Proceedings of the 47th International ACM SIGIR Conference on Research and Development in Information Retrieval (SIGIR`24)}, organization = {ACM}, year = {2024}, } ```
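A minimal JAX sketch of this pointwise correction, following the positive-unlabeled formulation of Bekker et al. and Saito et al.: each document is scored independently, and clicked documents are re-weighted by their inverse examination propensity. Names and shapes are illustrative assumptions, not this repository's exact implementation.

```Python
import jax
import jax.numpy as jnp


def ips_pointwise_loss(relevance_scores, propensities, clicks):
    log_p = jax.nn.log_sigmoid(relevance_scores)
    log_not_p = jax.nn.log_sigmoid(-relevance_scores)
    # Re-weighting clicks by 1/propensity keeps the sigmoid cross-entropy
    # unbiased when clicks are missing because items were not examined.
    positive = (clicks / propensities) * log_p
    negative = (1.0 - clicks / propensities) * log_not_p
    return -jnp.mean(positive + negative)
```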
llm-wizard/gen-z-translate-llama-3-instruct-v1
llm-wizard
2024-05-01T15:33:32Z
2
0
peft
[ "peft", "tensorboard", "safetensors", "trl", "sft", "generated_from_trainer", "dataset:generator", "base_model:meta-llama/Meta-Llama-3-8B-Instruct", "base_model:adapter:meta-llama/Meta-Llama-3-8B-Instruct", "license:other", "region:us" ]
null
2024-05-01T15:32:09Z
--- license: other library_name: peft tags: - trl - sft - generated_from_trainer base_model: meta-llama/Meta-Llama-3-8B-Instruct datasets: - generator model-index: - name: gen-z-translate-llama-3-instruct-v1 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # gen-z-translate-llama-3-instruct-v1 This model is a fine-tuned version of [meta-llama/Meta-Llama-3-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct) on the generator dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 2 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 4 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: constant - lr_scheduler_warmup_ratio: 0.03 - num_epochs: 10 - mixed_precision_training: Native AMP ### Training results ### Framework versions - PEFT 0.10.0 - Transformers 4.40.1 - Pytorch 2.2.1+cu121 - Datasets 2.19.0 - Tokenizers 0.19.1
hams2/bertclass
hams2
2024-05-01T15:25:57Z
7
0
transformers
[ "transformers", "safetensors", "bert", "text-classification", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2024-05-01T15:25:43Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
joaohonorato/PLN_TS
joaohonorato
2024-05-01T15:25:46Z
4
0
transformers
[ "transformers", "tensorboard", "safetensors", "gpt2", "text-generation", "generated_from_trainer", "base_model:openai-community/gpt2-medium", "base_model:finetune:openai-community/gpt2-medium", "license:mit", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-05-01T14:54:30Z
--- license: mit base_model: openai-community/gpt2-medium tags: - generated_from_trainer model-index: - name: PLN_TS results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # PLN_TS This model is a fine-tuned version of [openai-community/gpt2-medium](https://huggingface.co/openai-community/gpt2-medium) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 10.7341 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.002 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 15 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | No log | 1.0 | 91 | 4.6308 | | No log | 2.0 | 182 | 5.1050 | | No log | 3.0 | 273 | 5.5102 | | No log | 4.0 | 364 | 6.2532 | | No log | 5.0 | 455 | 6.6069 | | 1.1628 | 6.0 | 546 | 7.0238 | | 1.1628 | 7.0 | 637 | 7.1553 | | 1.1628 | 8.0 | 728 | 7.7253 | | 1.1628 | 9.0 | 819 | 8.2397 | | 1.1628 | 10.0 | 910 | 8.9225 | | 0.1611 | 11.0 | 1001 | 9.3999 | | 0.1611 | 12.0 | 1092 | 9.8062 | | 0.1611 | 13.0 | 1183 | 10.1804 | | 0.1611 | 14.0 | 1274 | 10.5743 | | 0.1611 | 15.0 | 1365 | 10.7341 | ### Framework versions - Transformers 4.40.1 - Pytorch 2.2.1+cu121 - Datasets 2.19.0 - Tokenizers 0.19.1
hams2/giveup
hams2
2024-05-01T15:25:17Z
4
0
transformers
[ "transformers", "safetensors", "gpt2", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-05-01T15:25:06Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
xp0tat0/farmer_1
xp0tat0
2024-05-01T15:25:07Z
8
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-04-25T08:08:30Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
hams2/split2
hams2
2024-05-01T15:24:58Z
4
0
transformers
[ "transformers", "safetensors", "gpt2", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-05-01T15:24:45Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
hams2/split1
hams2
2024-05-01T15:24:23Z
4
0
transformers
[ "transformers", "safetensors", "gpt2", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-05-01T15:24:07Z
---
library_name: transformers
tags: []
---

# Model Card for Model ID

<!-- Provide a quick summary of what the model is/does. -->

## Model Details

### Model Description

<!-- Provide a longer summary of what this model is. -->

This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.

- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]

### Model Sources [optional]

<!-- Provide the basic links for the model. -->

- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]

## Uses

<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->

### Direct Use

<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->

[More Information Needed]

### Downstream Use [optional]

<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app. -->

[More Information Needed]

### Out-of-Scope Use

<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->

[More Information Needed]

## Bias, Risks, and Limitations

<!-- This section is meant to convey both technical and sociotechnical limitations. -->

[More Information Needed]

### Recommendations

<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->

Users (both direct and downstream) should be made aware of the risks, biases, and limitations of the model. More information is needed for further recommendations.

## How to Get Started with the Model

Use the code below to get started with the model.

[More Information Needed]

## Training Details

### Training Data

<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->

[More Information Needed]

### Training Procedure

<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->

#### Preprocessing [optional]

[More Information Needed]

#### Training Hyperparameters

- **Training regime:** [More Information Needed] <!-- fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->

#### Speeds, Sizes, Times [optional]

<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->

[More Information Needed]

## Evaluation

<!-- This section describes the evaluation protocols and provides the results. -->

### Testing Data, Factors & Metrics

#### Testing Data

<!-- This should link to a Dataset Card if possible. -->

[More Information Needed]

#### Factors

<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->

[More Information Needed]

#### Metrics

<!-- These are the evaluation metrics being used, ideally with a description of why. -->

[More Information Needed]

### Results

[More Information Needed]

#### Summary

## Model Examination [optional]

<!-- Relevant interpretability work for the model goes here. -->

[More Information Needed]

## Environmental Impact

<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly. -->

Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).

- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]

## Technical Specifications [optional]

### Model Architecture and Objective

[More Information Needed]

### Compute Infrastructure

[More Information Needed]

#### Hardware

[More Information Needed]

#### Software

[More Information Needed]

## Citation [optional]

<!-- If there is a paper or blog post introducing the model, the APA and BibTeX information for that should go in this section. -->

**BibTeX:**

[More Information Needed]

**APA:**

[More Information Needed]

## Glossary [optional]

<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->

[More Information Needed]

## More Information [optional]

[More Information Needed]

## Model Card Authors [optional]

[More Information Needed]

## Model Card Contact

[More Information Needed]
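The getting-started section of this card is empty, but the repository metadata tags `hams2/split1` as a GPT-2-style text-generation checkpoint. A minimal sketch of how one might query it, assuming it loads through the standard causal-LM pipeline interface those tags imply:

```python
# Sketch: run hams2/split1 through the text-generation pipeline, assuming
# the standard GPT-2-style interface suggested by the repository tags.
from transformers import pipeline

generator = pipeline("text-generation", model="hams2/split1")
result = generator("Once upon a time", max_new_tokens=40)
print(result[0]["generated_text"])
```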
ai-maker-space/gen-z-translate-llama-3-instruct
ai-maker-space
2024-05-01T15:24:06Z
6
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-05-01T15:17:54Z
---
library_name: transformers
tags: []
---

# Model Card for Model ID

<!-- Provide a quick summary of what the model is/does. -->

## Model Details

### Model Description

<!-- Provide a longer summary of what this model is. -->

This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.

- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]

### Model Sources [optional]

<!-- Provide the basic links for the model. -->

- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]

## Uses

<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->

### Direct Use

<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->

[More Information Needed]

### Downstream Use [optional]

<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app. -->

[More Information Needed]

### Out-of-Scope Use

<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->

[More Information Needed]

## Bias, Risks, and Limitations

<!-- This section is meant to convey both technical and sociotechnical limitations. -->

[More Information Needed]

### Recommendations

<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->

Users (both direct and downstream) should be made aware of the risks, biases, and limitations of the model. More information is needed for further recommendations.

## How to Get Started with the Model

Use the code below to get started with the model.

[More Information Needed]

## Training Details

### Training Data

<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->

[More Information Needed]

### Training Procedure

<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->

#### Preprocessing [optional]

[More Information Needed]

#### Training Hyperparameters

- **Training regime:** [More Information Needed] <!-- fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->

#### Speeds, Sizes, Times [optional]

<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->

[More Information Needed]

## Evaluation

<!-- This section describes the evaluation protocols and provides the results. -->

### Testing Data, Factors & Metrics

#### Testing Data

<!-- This should link to a Dataset Card if possible. -->

[More Information Needed]

#### Factors

<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->

[More Information Needed]

#### Metrics

<!-- These are the evaluation metrics being used, ideally with a description of why. -->

[More Information Needed]

### Results

[More Information Needed]

#### Summary

## Model Examination [optional]

<!-- Relevant interpretability work for the model goes here. -->

[More Information Needed]

## Environmental Impact

<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly. -->

Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).

- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]

## Technical Specifications [optional]

### Model Architecture and Objective

[More Information Needed]

### Compute Infrastructure

[More Information Needed]

#### Hardware

[More Information Needed]

#### Software

[More Information Needed]

## Citation [optional]

<!-- If there is a paper or blog post introducing the model, the APA and BibTeX information for that should go in this section. -->

**BibTeX:**

[More Information Needed]

**APA:**

[More Information Needed]

## Glossary [optional]

<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->

[More Information Needed]

## More Information [optional]

[More Information Needed]

## Model Card Authors [optional]

[More Information Needed]

## Model Card Contact

[More Information Needed]
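This card's getting-started section is also blank. Given the `llama` and `conversational` tags on `ai-maker-space/gen-z-translate-llama-3-instruct`, a plausible chat-style invocation is sketched below; it assumes the fine-tune ships a tokenizer chat template, which the card does not confirm, and the example prompt is purely illustrative:

```python
# Sketch: chat-style generation for ai-maker-space/gen-z-translate-llama-3-instruct.
# Assumes the repository ships a tokenizer chat template (unconfirmed by the card).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "ai-maker-space/gen-z-translate-llama-3-instruct"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

messages = [{"role": "user", "content": "Translate to Gen Z slang: This party is excellent."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
output = model.generate(inputs, max_new_tokens=64)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(output[0][inputs.shape[-1]:], skip_special_tokens=True))
```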
mansee/dinov2-base-finetuned-oxford
mansee
2024-05-01T15:21:55Z
30
0
transformers
[ "transformers", "tensorboard", "safetensors", "dinov2", "image-classification", "generated_from_trainer", "dataset:imagefolder", "base_model:facebook/dinov2-base", "base_model:finetune:facebook/dinov2-base", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-classification
2024-04-29T07:32:03Z
---
license: apache-2.0
base_model: facebook/dinov2-base
tags:
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
model-index:
- name: dinov2-base-finetuned-oxford
  results:
  - task:
      name: Image Classification
      type: image-classification
    dataset:
      name: imagefolder
      type: imagefolder
      config: default
      split: test
      args: default
    metrics:
    - name: Accuracy
      type: accuracy
      value: 0.9299317110705011
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# dinov2-base-finetuned-oxford

This model is a fine-tuned version of [facebook/dinov2-base](https://huggingface.co/facebook/dinov2-base) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2714
- Accuracy: 0.9299

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 3
- eval_batch_size: 3
- seed: 42
- gradient_accumulation_steps: 3
- total_train_batch_size: 9
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 3

### Training results

| Training Loss | Epoch | Step  | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.6782        | 1.0   | 15460 | 0.4996          | 0.8013   |
| 0.4513        | 2.0   | 30920 | 0.3186          | 0.8837   |
| 0.2692        | 3.0   | 46380 | 0.2714          | 0.9299   |

### Framework versions

- Transformers 4.40.1
- Pytorch 2.0.1+cu117
- Datasets 2.18.0
- Tokenizers 0.19.1
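The card above reports metrics but no usage snippet. A minimal inference sketch consistent with the repository's `image-classification` pipeline tag; the image path is a hypothetical placeholder, and the label set comes from whatever classes the underlying imagefolder dataset encoded:

```python
# Sketch: classify an image with mansee/dinov2-base-finetuned-oxford via the
# image-classification pipeline implied by the repository's pipeline tag.
from transformers import pipeline

classifier = pipeline("image-classification", model="mansee/dinov2-base-finetuned-oxford")
preds = classifier("path/to/image.jpg")  # placeholder: local path or URL
print(preds[:3])  # top predictions as {"label": ..., "score": ...} dicts
```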