We release this data along with an evaluation framework at https://www.github.com/openai/human-eval.

To solve a problem in our test set, we generate multiple samples from the models and check whether any of them pass the unit tests. With just a single sample, a 12B-parameter Codex solves 28.8% of these problems, and a 300M-parameter Codex solves 13.2%. In contrast, the 6B-parameter GPT-J (Wang & Komatsuzaki, 2021) achieves 11.4% on the same dataset, while all GPT models achieve near 0%. To improve our model's performance at the task of function synthesis from docstrings, we fine-tune Codex on standalone, correctly implemented functions. The resulting model, Codex-S, solves 37.7% of problems with a single sample. Figure 2 showcases problems of varying difficulty in our dataset, along with correct model-generated solutions.
Real-world programming tasks often involve iterating on approaches and fixing bugs; we approximate this by generating many samples from our models and selecting one that passes all unit tests. Within 100 samples, Codex-S is able to generate at least one correct function for 77.5% of the problems. This result suggests that accurate code samples can be selected via heuristic ranking instead of fully evaluating each sample, the latter of which may not be possible or practical in deployment. Indeed, we find that the sample with the highest mean log-probability passes unit tests for 44.5% of the problems.
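To make the ranking heuristic concrete, the following is a minimal sketch of selecting the sample with the highest mean token log-probability. The Sample container and its fields are hypothetical stand-ins for whatever the sampling API actually returns; averaging per token (rather than summing) keeps the score from simply penalizing longer completions.

from dataclasses import dataclass

@dataclass
class Sample:
    """Hypothetical container for one generated completion."""
    code: str
    token_logprobs: list[float]  # log-probability of each sampled token

def mean_logprob(sample: Sample) -> float:
    """Average log-probability per token, normalizing for length."""
    return sum(sample.token_logprobs) / len(sample.token_logprobs)

def select_best(samples: list[Sample]) -> Sample:
    """Pick the completion the model itself considers most likely."""
    return max(samples, key=mean_logprob)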
We conclude by discussing the limitations and potential broader impacts of these Codex models, and of increasingly powerful code-generating models more generally.
2. Evaluation Framework

In this section, we discuss the details of our evaluation framework. We begin by defining the pass@k metric and explaining its advantages over standard match-based metrics. Next, we describe the dataset of hand-written problems, called "HumanEval," which we created in order to benchmark our models. Finally, we discuss the sandbox environment we used to safely execute model-generated code.
2.1. Functional Correctness

Generative models for code are predominantly benchmarked by matching samples against a reference solution, where the match can be exact or fuzzy (as in BLEU score). However, recent work has surfaced deficiencies in match-based metrics for code. For instance, Ren et al. (2020) finds that BLEU has problems capturing semantic features specific to code, and suggests several semantic modifications to the score.
More fundamentally, match-based metrics are unable to account for the large and complex space of programs functionally equivalent to a reference solution. As a consequence, recent works in unsupervised code translation (Lachaux et al., 2020) and pseudocode-to-code translation (Kulal et al., 2019) have turned to functional correctness instead, where a sample is considered correct if it passes a set of unit tests. We argue that this metric should be applied to docstring-conditional code generation as well.
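As an illustration, here is a minimal sketch of a functional-correctness check: it runs a candidate program, assembled from the prompt, the model completion, and the unit tests, in a subprocess with a timeout, and treats a zero exit code as a pass. The field names on the hypothetical problem dict mirror a HumanEval-style layout, and this sketch deliberately omits the sandboxing described later in this section, which any real harness would need before executing untrusted model output.

import subprocess
import sys

def passes_unit_tests(problem: dict, completion: str, timeout: float = 3.0) -> bool:
    # Assemble the full program: function signature and docstring (prompt),
    # the model-generated body (completion), and the problem's unit tests.
    # The "prompt" and "test" keys are assumed fields of a HumanEval-style task.
    program = problem["prompt"] + completion + "\n" + problem["test"]
    try:
        result = subprocess.run(
            [sys.executable, "-c", program],
            timeout=timeout,  # guard against infinite loops
            capture_output=True,
        )
    except subprocess.TimeoutExpired:
        return False
    return result.returncode == 0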
Perhaps the most convincing reason to evaluate functional correctness is that it is used by human developers to judge code. A framework known as test-driven development dictates that software requirements be converted into test cases before any implementation begins, and success is defined by a program that passes these tests. While few organizations employ full test-driven development, integration of new code is usually dependent on creating and passing unit tests.
Figure 2. Three example problems from the HumanEval dataset, where the probabilities that a single sample from Codex-12B passes unit tests are 0.9, 0.17, and 0.005. The prompt provided to the model is shown with a white background, and a successful model-generated completion is shown with a yellow background. Though not a guarantee of problem novelty, all problems were hand-written and not programmatically copied from existing sources. Random problems and samples can be found in Appendix B.

Kulal et al. (2019) evaluate functional correctness using the pass@k metric, where k code samples are generated per problem, a problem is considered solved if any sample passes the unit tests, and the total fraction of problems solved is reported. However, computing pass@k in this way can have high variance. Instead, to evaluate pass@k, we generate n ≥ k samples per task (in this paper, we use n = 200 and k ≤ 100), count the number of correct samples c ≤ n which pass unit tests, and calculate the unbiased estimator

\text{pass@}k := \mathbb{E}_{\text{Problems}}\left[ 1 - \frac{\binom{n-c}{k}}{\binom{n}{k}} \right]
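Evaluating the binomial coefficients directly overflows for large n, but the ratio telescopes into a product of factors (1 - k/i) that can be accumulated stably. The sketch below computes the per-problem estimator that way using numpy; when n - c < k, every size-k subset of the n samples contains a correct one, so the estimator is exactly 1.

import numpy as np

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased estimator of pass@k for a single problem.

    n: total samples generated, c: samples that passed, k: evaluation budget.
    """
    if n - c < k:
        # Every size-k subset of the n samples contains a correct one.
        return 1.0
    # 1 - C(n-c, k) / C(n, k), with the ratio expanded as a stable product.
    return float(1.0 - np.prod(1.0 - k / np.arange(n - c + 1, n + 1)))

As a sanity check, with n = 200 samples of which c = 50 pass, pass_at_k(200, 50, 1) recovers the plain pass rate c/n = 0.25, while pass_at_k(200, 50, 100) is very close to 1. The per-problem estimates are then averaged over all problems in the dataset.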