Jerrycool committed on
Commit b3f358f · verified · 1 Parent(s): 0eccdd9

Update src/about.py

Files changed (1): src/about.py (+2 -2)
src/about.py CHANGED
@@ -33,10 +33,10 @@ INTRODUCTION_TEXT = """
 LLM_BENCHMARKS_TEXT = """
 ## MLE-Dojo
 MLE-Dojo, a Gym-style framework for systematically training, evaluating, and improving autonomous large language model (LLM) agents in iterative machine learning engineering (MLE) workflows.
-Unlike existing benchmarks that primarily rely on static datasets or single-attempt evaluations, MLE-Dojo provides an interactive environment enabling agents to iteratively experiment, debug, and refine solutions through structured feedback loops. Built upon 200+ real-world Kaggle challenges (e.g., tabular data analysis, computer vision, natural language processing, and time series forecasting). MLE-Dojo covers diverse, open-ended MLE tasks carefully curated to reflect realistic engineering scenarios such as data processing, architecture search, hyperparameter tuning, and code debugging.
+MLE-Dojo provides an interactive environment enabling agents to iteratively experiment, debug, and refine solutions through structured feedback loops. Built upon 200+ real-world Kaggle challenges (e.g., tabular data analysis, computer vision, natural language processing, and time series forecasting). MLE-Dojo covers diverse, open-ended MLE tasks carefully curated to reflect realistic engineering scenarios such as data processing, architecture search, hyperparameter tuning, and code debugging.
 Its fully executable environment supports comprehensive agent training via both supervised fine-tuning and reinforcement learning, facilitating iterative experimentation, realistic data sampling, and real-time outcome verification.

-## New Models
+## New Updates
 We actively maintain this as a long-term real-time leaderboard with updated models and evaluation tasks to foster community-driven innovation.
 """

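The updated text describes MLE-Dojo as a Gym-style environment in which an agent repeatedly proposes a solution, receives executable feedback, and refines its attempt. As a rough illustration of that reset/step interaction pattern only, here is a minimal, self-contained Python sketch; MLE-Dojo's real class names, method signatures, observation fields, and reward semantics are not part of this commit, so every identifier below is hypothetical.

```python
# Hypothetical sketch of a Gym-style experiment/feedback loop.
# MLE-Dojo's actual API is not shown in this commit, so the environment class,
# observation fields, and reward rule here are invented purely for illustration.

class DummyMLEEnv:
    """Toy stand-in for an interactive MLE environment with a reset/step contract."""

    def reset(self):
        # Initial observation: a task description and no feedback yet.
        return {"task": "tabular-classification", "feedback": None}

    def step(self, action):
        # `action` is the agent's candidate solution (here, just a code string).
        # A real environment would execute it and score the resulting submission;
        # this toy version only checks for a trivial pattern.
        score = 1.0 if "fit" in action else 0.0
        obs = {"task": "tabular-classification", "feedback": f"score={score:.2f}"}
        done = score > 0.0
        return obs, score, done, {}


def run_episode(env, policy, max_turns=5):
    """Iterate the propose -> execute -> observe-feedback -> refine loop."""
    obs = env.reset()
    reward = 0.0
    for _ in range(max_turns):
        action = policy(obs)                      # agent proposes or refines a solution
        obs, reward, done, _ = env.step(action)   # environment runs it and returns feedback
        if done:
            break
    return reward


if __name__ == "__main__":
    # A trivial "policy" standing in for an LLM agent.
    print(run_episode(DummyMLEEnv(), lambda obs: "model.fit(X_train, y_train)"))
```

The sketch only shows the interaction contract (reset, step, reward, termination) that a Gym-style loop implies; how MLE-Dojo actually executes submissions, samples data, or verifies outcomes in real time is not specified here.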