AndyBonnetto committed commit cadeeca · verified · 1 Parent(s): 98aa0dd

Upload README.md with huggingface_hub

Files changed (1)
  1. README.md +16 -10
README.md CHANGED
@@ -17,16 +17,18 @@ task_categories:
  - question-answering
---

- # EPFL Smart Kitchen: Lemonade benchmark
+ # 🍋 EPFL-Smart-Kitchen: Lemonade benchmark

- ## Abstract
+ ![title](media/title.svg)
+
+ ## 📚 Introduction
We introduce Lemonade: **L**anguage models **E**valuation of **MO**tion a**N**d **A**ction-**D**riven **E**nquiries.
Lemonade consists of 36,521 closed-ended QA pairs linked to egocentric video clips, categorized into three groups and six subcategories.
18,857 QAs focus on behavior understanding, leveraging the rich ground-truth behavior annotations of the EPFL-Smart Kitchen to interrogate models about perceived actions (Perception) and reason over unseen behaviors (Reasoning).
8,210 QAs involve longer video clips, challenging models with summarization (Summarization) and session-level inference (Session properties).
The remaining 9,463 QAs leverage the 3D pose estimation data to infer hand shapes, joint angles (Physical attributes), or trajectory velocities (Kinematics) from visual information.

- ## Content
+ ## 💾 Content
The current repository contains all egocentric videos recorded in the EPFL-Smart-Kitchen-30 dataset. You can download the rest of the dataset at ... and ... .

### Repository structure
@@ -34,10 +36,10 @@ The current repository contains all egocentric videos recorded in the EPFL-Smart
```
Lemonade
├── MCQs
- └── lemonade_benchmark.csv
+ |   └── lemonade_benchmark.csv
├── videos
- ├── YH2002_2023_12_04_10_15_23_hololens.mp4
- └── ..
+ |   ├── YH2002_2023_12_04_10_15_23_hololens.mp4
+ |   └── ..
└── README.md
```

@@ -57,15 +59,19 @@ Lemonade

> We refer the reader to the associated publication for details about data processing and task descriptions.

+ ## 📈 Evaluation results
+

- ## Usage
+ ## 🌈 Usage
The evaluation of the benchmark can be done through the following GitHub repository: ... .

- ## Publications
+
+
+ ## 🌟 Citations
cite arxiv paper

- ## Acknowledgments
+ ## ❤️ Acknowledgments
We thank Andy Bonnetto for the design of the dataset and Matea Tashkovska for the adaptation of the evaluation platform. <br>
We thank members of the Mathis Group for Computational Neuroscience & AI (EPFL) for their feedback throughout the project.
This work was funded by EPFL, Swiss SNF grant (320030-227871), the Microsoft Swiss Joint Research Center, and a Boehringer Ingelheim Fonds PhD stipend (H.Q.).
- We are grateful to the Brain Mind Institute for providing funds for hardware and to the Neuro-X Institute for providing funds for services.
+ We are grateful to the Brain Mind Institute for providing funds for hardware and to the Neuro-X Institute for providing funds for services.
 
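For reference, a minimal sketch of reading the benchmark files once the repository is downloaded locally. It assumes only the layout shown in the second hunk above; the README does not spell out the CSV schema, and pandas is an assumption of this sketch rather than a stated dependency.

```python
# Minimal sketch: read the MCQ file and locate a video clip, assuming only
# the repository layout shown in the tree above. pandas is an assumption of
# this sketch, not a dependency stated in the README.
from pathlib import Path

import pandas as pd

root = Path("Lemonade")  # local copy of this dataset repository

# Each row of the CSV should be one closed-ended QA pair
# (36,521 in total, per the introduction above).
mcqs = pd.read_csv(root / "MCQs" / "lemonade_benchmark.csv")
print(len(mcqs), "QA pairs")
print(mcqs.columns.tolist())  # inspect the actual schema shipped with the file

# Egocentric clips are .mp4 files named after the recording session.
clip = root / "videos" / "YH2002_2023_12_04_10_15_23_hololens.mp4"
print(clip, clip.exists())
```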
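The usage section's evaluation repository URL is still a placeholder, so nothing can be said about the harness itself. Fetching the data, however, can be sketched with huggingface_hub, the library the commit message says was used for the upload; the repo_id below is a placeholder for illustration, not the confirmed identifier of this repository.

```python
# Sketch of pulling this dataset from the Hub with huggingface_hub (the
# library named in the commit message). The repo_id is a placeholder --
# replace it with the dataset's actual identifier on the Hub.
from huggingface_hub import snapshot_download

local_dir = snapshot_download(
    repo_id="<namespace>/Lemonade",          # placeholder, not confirmed
    repo_type="dataset",                     # dataset repo, not a model
    allow_patterns=["MCQs/*", "README.md"],  # skip the heavy videos at first
)
print("downloaded to", local_dir)
```

Restricting `allow_patterns` to the MCQ file keeps the first download small; drop that argument to mirror the full repository, videos included.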