---
license: apache-2.0
language:
  - en
library_name: transformers
pipeline_tag: text-generation
inference: true
widget:
  - text: |-
      public class HelloWorld {
          public static void main(String[] args) {
    example_title: Hello world
    group: Java
---

# JavaCoder

## Table of Contents

1. [Model Summary](#model-summary)
2. [Use](#use)
3. [Limitations](#limitations)
4. [Training](#training)
5. [License](#license)
6. [Citation](#citation)

## Model Summary

The JavaCoder models are 1.1B parameter models trained on 80+ programming languages from The Stack (v1.2), with opt-out requests excluded. The model uses Multi Query Attention and a context window of 8192 tokens, and was trained with the Fill-in-the-Middle objective on 1 trillion tokens (the snippet after the list below shows one way to verify these settings on the released checkpoint).

- **Repository:**
- **Project Website:**
- **Paper:**
- **Point of Contact:**
- **Languages:** 80+ programming languages
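
The context window and attention setup above can be checked directly against the released checkpoint by dumping its configuration. A minimal sketch, assuming the `infosys/javacoder-1b` checkpoint name used in the examples below; the exact field names depend on the architecture:

```python
from transformers import AutoConfig

# Downloads only the configuration file and prints every field; the context
# length and Multi Query Attention flag appear among them (GPT-BigCode-style
# models, for example, expose `n_positions` and `multi_query`).
config = AutoConfig.from_pretrained("infosys/javacoder-1b")
print(config)
```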

## Use

### Intended use

The model was trained on GitHub code. As such it is not an instruction-following model, and prompts like "Write a function that computes the square root." do not work well. However, by using the Tech Assistant prompt you can turn it into a capable technical assistant; a sketch of the pattern follows.
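
This is a minimal sketch only: the real Tech Assistant prompt is a longer, curated few-shot dialogue, and `ASSISTANT_PREAMBLE` below is a hypothetical stand-in for it. The idea is to prepend the prompt and frame your request as a dialogue turn, then generate with the `model` and `tokenizer` loaded as in the Generation section below.

```python
# Hypothetical, shortened stand-in for the Tech Assistant prompt.
ASSISTANT_PREAMBLE = (
    "Below are a series of dialogues between a human and a technical assistant.\n"
    "The assistant gives helpful and accurate answers about programming.\n"
    "-----\n"
)

question = "Write a Java function that computes the square root."
prompt = f"{ASSISTANT_PREAMBLE}Human: {question}\n\nAssistant:"
# Feed `prompt` to tokenizer/model.generate exactly as in the Generation example.
```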

Feel free to share your generations in the Community tab!

### Generation

```python
# pip install -q transformers
from transformers import AutoModelForCausalLM, AutoTokenizer

checkpoint = "infosys/javacoder-1b"
device = "cuda"  # for GPU usage or "cpu" for CPU usage

tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForCausalLM.from_pretrained(checkpoint).to(device)

inputs = tokenizer.encode("public class HelloWorld {\n    public static void main(String[] args) {", return_tensors="pt").to(device)
outputs = model.generate(inputs)
print(tokenizer.decode(outputs[0]))
```
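
For more control over output length and sampling, the same checkpoint can also be driven through the `text-generation` pipeline. A minimal sketch; the generation settings here are illustrative choices, not values tuned by the model authors:

```python
from transformers import pipeline

# device=0 selects the first GPU; use device=-1 to run on CPU instead.
generator = pipeline("text-generation", model="infosys/javacoder-1b", device=0)
completion = generator(
    "public class Fibonacci {\n",
    max_new_tokens=64,   # cap the length of the completion
    do_sample=True,      # sample rather than greedy-decode
    temperature=0.2,     # low temperature keeps code generations focused
)
print(completion[0]["generated_text"])
```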

### Fill-in-the-middle

Fill-in-the-middle uses special tokens to mark the prefix, middle, and suffix parts of the input and output:

```python
input_text = "<fim_prefix>public class HelloWorld {\n    public static void main(String[] args) {<fim_suffix>}\n}<fim_middle>"
inputs = tokenizer.encode(input_text, return_tensors="pt").to(device)
outputs = model.generate(inputs)
print(tokenizer.decode(outputs[0]))
```
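
A small helper makes the token plumbing explicit. This is a minimal sketch: the `<fim_*>` sentinels come from the example above, while the `<|endoftext|>` terminator is an assumption based on StarCoder-style tokenizers; the snippet reuses the `model`, `tokenizer`, and `device` from the Generation section.

```python
def fim_prompt(prefix: str, suffix: str) -> str:
    # Assemble the input in prefix/suffix order; the model fills in the middle.
    return f"<fim_prefix>{prefix}<fim_suffix>{suffix}<fim_middle>"

def extract_middle(decoded: str) -> str:
    # The generated infill follows the <fim_middle> sentinel; trim at the
    # end-of-text token if present (assumed StarCoder-style tokenizer).
    middle = decoded.split("<fim_middle>", 1)[-1]
    return middle.split("<|endoftext|>", 1)[0]

prompt = fim_prompt("public class HelloWorld {\n    public static void main(String[] args) {", "}\n}")
inputs = tokenizer.encode(prompt, return_tensors="pt").to(device)
print(extract_middle(tokenizer.decode(model.generate(inputs)[0])))
```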