---
base_model: unsloth/gpt-oss-20b-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- gpt_oss
license: apache-2.0
language:
- en
new_version: EpistemeAI/metatune-gpt20b-R1.1
---

## Model Card

We release the open-weight metatune-gpt20b, a fine-tuned version of OpenAI's gpt-oss-20b model and one of the first publicly released recursive self-improving AI models. The model:

- generates new data for itself,
- evaluates its own performance, and
- adjusts its own hyperparameters based on improvement metrics.

## Use cases

- Scientific and mathematical understanding at a postdoctoral level
- Coding
- Topics: Euler–Lagrange equation, vector calculus, statistical mechanics

## Guardrails

- In general, set reasoning to "high"; this usually helps prevent jailbreaking and prompt injection.
- Use the safety model [openai/gpt-oss-safeguard-20b](https://huggingface.co/openai/gpt-oss-safeguard-20b) as a guardrail in front of this model.

# Inference examples

## Transformers

You can use `gpt-oss-120b` and `gpt-oss-20b` with Transformers. If you use the Transformers chat template, it will automatically apply the [harmony response format](https://github.com/openai/harmony). If you use `model.generate` directly, you need to apply the harmony format manually using the chat template or use the [openai-harmony](https://github.com/openai/harmony) package (a sketch appears at the end of this card).

To get started, install the necessary dependencies to set up your environment:

```
pip install -U transformers kernels torch
```

For Google Colab (free/Pro):

```
!pip install -q --upgrade torch
!pip install -q transformers triton==3.4 kernels
!pip uninstall -q torchvision torchaudio -y
```

Once setup is complete, you can run the model with the snippet below:

```py
from transformers import pipeline
import torch

model_id = "EpistemeAI/metatune-gpt20b-R1"

pipe = pipeline(
    "text-generation",
    model=model_id,
    torch_dtype="auto",
    device_map="auto",
)

messages = [
    {"role": "user", "content": "Derive the Euler–Lagrange equation from the principle of stationary action."},
]

outputs = pipe(
    messages,
    max_new_tokens=3000,
)
print(outputs[0]["generated_text"][-1])
```

# Reasoning levels

You can adjust the reasoning level to suit your task across three levels:

* **Low:** Fast responses for general dialogue.
* **Medium:** Balanced speed and detail.
* **High:** Deep and detailed analysis.

The reasoning level can be set in the system prompt, e.g., "Reasoning: high" (see the sketch at the end of this card).

# Tool use

The gpt-oss models are excellent for:

* Web browsing (using built-in browsing tools)
* Function calling with defined schemas
* Agentic operations like browser tasks

# Fine-tuning

Both gpt-oss models can be fine-tuned for a variety of specialized use cases. The smaller model `gpt-oss-20b` can be fine-tuned on consumer hardware, whereas the larger [`gpt-oss-120b`](https://huggingface.co/openai/gpt-oss-120b) can be fine-tuned on a single H100 node.

# Benchmark

| Tasks                   | metatune R0 | metatune R1 | Llama 4 Maverick |
|:------------------------|:------------|:------------|:-----------------|
| gsm8k_cot               | 0.91        | 0.9796      | -                |
| gpqa_diamond_cot_n_shot | 0.722       |             | -                |
| hellaswag               | 0.421       | **0.525**   | -                |
| arc_challenge           | 0.349       | 0.349       | -                |
| winogrande              | **0.7851**  | 0.5928      | -                |

# Inspiration: Jürgen Schmidhuber

# Uploaded finetuned model

- **Developed by:** EpistemeAI
- **License:** apache-2.0
- **Finetuned from model:** unsloth/gpt-oss-20b-unsloth-bnb-4bit

This gpt_oss model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.
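# Additional usage sketches

As noted in the Transformers section above, calling `model.generate` directly requires applying the harmony format yourself; the chat template does this for you. Below is a minimal, illustrative sketch of that path — the prompt and generation settings are placeholders, not recommended values:

```py
# Minimal sketch: applying the harmony format via the chat template,
# then calling model.generate directly (prompt and settings are illustrative).
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "EpistemeAI/metatune-gpt20b-R1"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype="auto",
    device_map="auto",
)

messages = [
    {"role": "user", "content": "State the divergence theorem in vector calculus."},
]

# apply_chat_template renders the messages in the format the model was trained on
# (the harmony response format for gpt-oss models) and appends the generation prompt.
input_ids = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    return_tensors="pt",
).to(model.device)

output_ids = model.generate(input_ids, max_new_tokens=512)
# Decode only the newly generated tokens.
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```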
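Similarly, here is a minimal sketch of setting the reasoning level through the system prompt, as described in the Reasoning levels section. It reuses the `pipe` object from the Transformers example above, and the prompt is illustrative:

```py
# Minimal sketch: requesting high reasoning effort via the system prompt,
# which usually also helps against jailbreaking and prompt injection.
messages = [
    {"role": "system", "content": "Reasoning: high"},
    {"role": "user", "content": "Derive the partition function of a two-level system."},
]

outputs = pipe(messages, max_new_tokens=3000)
print(outputs[0]["generated_text"][-1])
```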