---
license: apache-2.0
datasets:
  - continuedev/instinct-data
base_model:
  - Qwen/Qwen2.5-Coder-7B
pipeline_tag: text-generation
library_name: transformers
---

# Instinct, the State-of-the-Art Open Next Edit Model

This repo contains the model weights for Continue's state-of-the-art open Next Edit model, Instinct. Robustly fine-tuned from Qwen2.5-Coder-7B on our dataset of real-world code edits, Instinct intelligently predicts your next move to keep you in flow.
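The weights can also be loaded directly with the transformers library. Below is a minimal sketch assuming the standard text-generation interface; the prompt is purely illustrative and does not reflect the next-edit prompt format that Continue constructs for the model.

```python
# Minimal transformers inference sketch (illustrative prompt, not Instinct's
# actual next-edit prompt template).
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "continuedev/instinct"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, device_map="auto", torch_dtype="auto"
)

prompt = "def fibonacci(n):"  # illustrative only
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```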

## Serving the model

**Ollama:** We've released a Q4_K_M GGUF quantization of Instinct for efficient local inference. Try it with Continue's Ollama integration, or just run `ollama run nate/instinct`.
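Once the model is pulled, you can also query it programmatically. Here is a minimal sketch using the official `ollama` Python client (an assumption for illustration, not part of Continue's tooling); the prompt is illustrative only.

```python
# Minimal sketch of querying the locally served Ollama model.
# Assumes `pip install ollama` and that `ollama run nate/instinct`
# (or `ollama pull nate/instinct`) has already been run.
import ollama

response = ollama.generate(model="nate/instinct", prompt="def add(a, b):")
print(response["response"])
```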

You can also serve the model using either of the options below, then connect it with Continue; a minimal client sketch follows the list.

- **SGLang:** `python3 -m sglang.launch_server --model-path continuedev/instinct --load-format safetensors`
- **vLLM:** `vllm serve continuedev/instinct --served-model-name instinct --load-format safetensors`
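Both servers expose an OpenAI-compatible API, so a quick smoke test can be done with the `openai` Python client. The base URL below assumes vLLM's default port (8000); SGLang's launcher defaults to port 30000. The model name matches `--served-model-name` from the vLLM command above, and the prompt is illustrative only.

```python
# Minimal sketch of a completion request against a local OpenAI-compatible
# endpoint started with one of the commands above.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")
completion = client.completions.create(
    model="instinct",
    prompt="def add(a, b):",  # illustrative only
    max_tokens=64,
)
print(completion.choices[0].text)
```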

## Learn more

For more information on the work behind Instinct, please refer to our blog.