Multimodal vision-language models (VLMs) have made substantial progress on a variety of tasks that require a combined understanding of visual and textual content, particularly cultural understanding tasks, driven by the emergence of new cultural datasets. However, these datasets frequently fall short of evaluating cultural reasoning and underrepresent many cultures.

In this work, we introduce the Seeing Culture Benchmark (SCB), which focuses on cultural reasoning through a novel approach that requires VLMs to reason over culturally rich images in two stages: i) selecting the correct visual option via multiple-choice visual question answering (VQA), and ii) segmenting the relevant cultural artifact as evidence of reasoning. Visual options in the first stage are systematically organized into three types: options originating from the same country, options from different countries, or a mixed group. Notably, all options within each type are drawn from a single category. Progression to the second stage occurs only after the correct visual option is chosen.