---
license: apache-2.0
datasets:
- Leon-Leee/wizardlm_evol_instruct_v2_196K_backuped
- m-a-p/Code-Feedback
- openbmb/UltraInteract_sft
- ise-uiuc/Magicoder-Evol-Instruct-110K
language:
- en
metrics:
- code_eval
library_name: transformers
tags:
- code
---

## AIGCodeGeek-DS-6.7B

### Introduction
AIGCodeGeek-DS-6.7B is the first release in our Code-LLM family, with competitive performance on benchmarks such as HumanEval(+) and MBPP(+).
It draws many insights from the open-source community, and we deeply appreciate all of this great work.
We are preparing the tech report, so stay tuned for more details.

### Model Details
#### Model Description
- Developed by: [Leon Li](https://huggingface.co/Leon-Leee)
- License: [DeepSeek](https://github.com/deepseek-ai/DeepSeek-Coder/blob/main/LICENSE-MODEL)
- Fine-tuned from [deepseek-ai/deepseek-coder-6.7b-base](https://huggingface.co/deepseek-ai/deepseek-coder-6.7b-base) with full parameters

### Training data
A mixture of both
- samples from several high-quality open-source datasets (see *Acknowledgements*),
- our private datasets (already decontaminated against the benchmarks).

### Evaluation
Our evaluation results are available on the [EvalPlus leaderboard](https://evalplus.github.io/leaderboard.html).

### Requirements
It should work with the same requirements as DeepSeek-Coder-6.7B:

```
torch>=2.0
tokenizers>=0.14.0
transformers>=4.35.0
accelerate
sympy>=1.12
pebble
timeout-decorator
attrdict
```

### QuickStart
TBD; in the meantime, a minimal usage sketch is provided at the end of this card.

### Limits

### Acknowledgements
- [WizardCoder](https://github.com/nlpxucan/WizardLM/tree/main/WizardCoder): WizardLM-Evol-Instruct V2 dataset
  - We used a backup ([Leon-Leee/wizardlm_evol_instruct_v2_196K_backuped](https://huggingface.co/datasets/Leon-Leee/wizardlm_evol_instruct_v2_196K_backuped)) since the original dataset has been deleted.
- [Magicoder](https://github.com/ise-uiuc/magicoder/): [Magicoder-Evol-Instruct-110K](https://huggingface.co/datasets/ise-uiuc/Magicoder-Evol-Instruct-110K), derived from [theblackcat102/evol-codealpaca-v1](https://huggingface.co/datasets/theblackcat102/evol-codealpaca-v1)
- [Eurus](https://github.com/OpenBMB/Eurus): the reasoning-enhancement dataset [openbmb/UltraInteract_sft](https://huggingface.co/datasets/openbmb/UltraInteract_sft)
- [OpenCodeInterpreter](https://opencodeinterpreter.github.io/): [m-a-p/Code-Feedback](https://huggingface.co/datasets/m-a-p/Code-Feedback)
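
The following is a minimal usage sketch for the QuickStart section above, not an official snippet: it assumes the model is published under the repo id `Leon-Leee/AIGCodeGeek-DS-6.7B` and that a plain instruction-style prompt works with the standard `transformers` generation API; both the repo id and the prompt format are assumptions, so adjust them to the actual release.

```python
# Minimal, unofficial usage sketch.
# Assumptions: the repo id below and a plain instruction-style prompt;
# adjust both to the model's actual repo id and prompt/chat format.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Leon-Leee/AIGCodeGeek-DS-6.7B"  # assumed repo id

tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,   # fall back to float16/float32 if bf16 is unsupported
    device_map="auto",
    trust_remote_code=True,
)

prompt = "Write a Python function that checks whether a string is a palindrome."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

outputs = model.generate(
    **inputs,
    max_new_tokens=512,
    do_sample=False,
    eos_token_id=tokenizer.eos_token_id,
)

# Decode only the newly generated tokens.
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```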