Improve dataset card: Add task categories, paper link, and GitHub link

#2 opened by nielsr (HF Staff)
Files changed (1): README.md (+10 -9)
README.md CHANGED
@@ -1,7 +1,12 @@
 ---
-license: mit
 language:
 - en
+license: mit
+size_categories:
+- 10K<n<100K
+task_categories:
+- question-answering
+- video-text-to-text
 tags:
 - behavior
 - motion
@@ -11,14 +16,12 @@ tags:
 - llm
 - vlm
 - esk
-size_categories:
-- 10K<n<100K
-task_categories:
-- question-answering
 ---

 # πŸ‹ EPFL-Smart-Kitchen: Lemonade benchmark

+[Paper](https://huggingface.co/papers/2506.01608) | [GitHub](https://github.com/amathislab/EPFL-Smart-Kitchen)
+
 ![title](media/title.svg)

 ## πŸ“š Introduction
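
Review note: once this PR is merged, the updated front matter can be sanity-checked with `huggingface_hub`. A minimal sketch, assuming the edited README.md from this PR is saved in the working directory:

```python
# Minimal sketch: parse the edited card and print the metadata this PR adds.
# Assumes the README.md from this PR has been saved to the working directory.
from huggingface_hub import DatasetCard

card = DatasetCard.load("README.md")
print(card.data.license)          # expected: mit
print(card.data.size_categories)  # expected: ['10K<n<100K']
print(card.data.task_categories)  # expected: ['question-answering', 'video-text-to-text']
```

`DatasetCard.load` parses the YAML block shown in the hunks above, so a clean run confirms the new metadata is well formed.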
@@ -26,7 +29,7 @@ task_categories:
 Lemonade consists of <span style="color: orange;">36,521</span> closed-ended QA pairs linked to egocentric video clips, categorized in three groups and six subcategories. <span style="color: orange;">18,857</span> QAs focus on behavior understanding, leveraging the rich ground truth behavior annotations of the EPFL-Smart Kitchen to interrogate models about perceived actions <span style="color: tomato;">(Perception)</span> and reason over unseen behaviors <span style="color: tomato;">(Reasoning)</span>. <span style="color: orange;">8,210</span> QAs involve longer video clips, challenging models in summarization <span style="color: gold;">(Summarization)</span> and session-level inference <span style="color: gold;">(Session properties)</span>. The remaining <span style="color: orange;">9,463</span> QAs leverage the 3D pose estimation data to infer hand shapes, joint angles <span style="color: skyblue;">(Physical attributes)</span>, or trajectory velocities <span style="color: skyblue;">(Kinematics)</span> from visual information.

 ## πŸ’Ύ Content
-The current repository contains all egocentric videos recorded in the EPFL-Smart-Kitchen-30 dataset and the question answer pairs of the Lemonade benchmark. Please refer to the [main GitHub repository](https://github.com/amathislab/EPFL-Smart-Kitchen#) to find the other benchmarks and links to download other modalities of the EPFL-Smart-Kitchen-30 dataset.
+The current repository contains all egocentric videos recorded in the EPFL-Smart-Kitchen-30 dataset and the question answer pairs of the Lemonade benchmark. Please refer to the [main GitHub repository](https://github.com/amathislab/EPFL-Smart-Kitchen) to find the other benchmarks and links to download other modalities of the EPFL-Smart-Kitchen-30 dataset.

 ### πŸ—ƒοΈ Repository structure

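
Review note: the Content paragraph above says the egocentric videos and QA pairs live in this repository. A hedged download sketch follows; the repo_id is a placeholder, since the PR does not spell out the Hub dataset id:

```python
# Hedged download sketch: fetch the full Lemonade dataset repository.
# The repo_id below is a placeholder assumption, not stated in this PR.
from huggingface_hub import snapshot_download

local_dir = snapshot_download(
    repo_id="amathislab/Lemonade",  # assumption: replace with the real dataset id
    repo_type="dataset",
)
print(local_dir)  # local folder with the egocentric videos and QA pairs
```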
@@ -63,8 +66,6 @@ Lemonade
 ## 🌈 Usage
 The evaluation of the benchmark can be done through the following github repository: [https://github.com/amathislab/lmms-eval-lemonade](https://github.com/amathislab/lmms-eval-lemonade)

-
-
 ## 🌟 Citations
 Please cite our work!
 ```
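
Review note: the Usage section delegates evaluation to the lmms-eval-lemonade fork. Assuming the fork keeps the upstream lmms-eval CLI and registers a lemonade task (neither is confirmed by this card), a run might look like:

```python
# Hypothetical invocation of the lmms-eval-lemonade fork via its CLI.
# Model choice, task name, and flags are assumptions, not confirmed by the card.
import subprocess

subprocess.run(
    [
        "python", "-m", "lmms_eval",
        "--model", "llava",        # any model wrapper supported by lmms-eval
        "--tasks", "lemonade",     # assumed task name registered by the fork
        "--batch_size", "1",
        "--output_path", "./logs",
    ],
    check=True,
)
```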
@@ -81,4 +82,4 @@ Please cite our work!
 
 ## ❀️ Acknowledgments
 
-Our work was funded by EPFL, Swiss SNF grant (320030-227871), Microsoft Swiss Joint Research Center and a Boehringer Ingelheim Fonds PhD stipend (H.Q.). We are grateful to the Brain Mind Institute for providing funds for hardware and to the Neuro-X Institute for providing funds for services.
+Our work was funded by EPFL, Swiss SNF grant (320030-227871), Microsoft Swiss Joint Research Center and a Boehringer Ingelheim Fonds PhD stipend (H.Q.). We are grateful to the Brain Mind Institute for providing funds for hardware and to the Neuro-X Institute for providing funds for services.
 