---
tags:
- computer-vision
- audio
- keypoint-detection
- animal-behavior
- multi-modal
- jsonl
dataset_info:
features:
- name: bird_id
dtype: string
- name: back_bbox_2d
sequence: float64
- name: back_keypoints_2d
sequence: float64
- name: back_view_boundary
sequence: int64
- name: bird_name
dtype: string
- name: video_name
dtype: string
- name: frame_name
dtype: string
- name: frame_path
dtype: image
- name: keypoints_3d
sequence:
sequence: float64
- name: radio_path
dtype: binary
- name: reprojection_error
sequence: float64
- name: side_bbox_2d
sequence: float64
- name: side_keypoints_2d
sequence: float64
- name: side_view_boundary
sequence: int64
- name: backpack_color
dtype: string
- name: experiment_id
dtype: string
- name: split
dtype: string
- name: top_bbox_2d
sequence: float64
- name: top_keypoints_2d
sequence: float64
- name: top_view_boundary
sequence: int64
- name: video_path
dtype: video
- name: acc_ch_map
struct:
- name: '0'
dtype: string
- name: '1'
dtype: string
- name: '2'
dtype: string
- name: '3'
dtype: string
- name: '4'
dtype: string
- name: '5'
dtype: string
- name: '6'
dtype: string
- name: '7'
dtype: string
- name: acc_sr
dtype: float64
- name: has_overlap
dtype: bool
- name: mic_ch_map
struct:
- name: '0'
dtype: string
- name: '1'
dtype: string
- name: '2'
dtype: string
- name: '3'
dtype: string
- name: '4'
dtype: string
- name: '5'
dtype: string
- name: '6'
dtype: string
- name: mic_sr
dtype: float64
- name: acc_path
dtype: audio
- name: mic_path
dtype: audio
- name: vocalization
list:
- name: overlap_type
dtype: string
- name: has_bird
dtype: bool
- name: 2ddistance
dtype: bool
- name: small_2ddistance
dtype: float64
- name: voc_metadata
sequence: float64
splits:
- name: train
num_bytes: 74517864701.0153
num_examples: 6804
- name: val
num_bytes: 32619282428.19056
num_examples: 2916
- name: test
num_bytes: 38018415640.55813
num_examples: 3431
download_size: 35456328366
dataset_size: 145155562769.764
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: val
path: data/val-*
- split: test
path: data/test-*
---
# Bird3M Dataset
## Dataset Description
**Bird3M** is the first synchronized, multi-modal, multi-individual dataset designed for comprehensive behavioral analysis of freely interacting birds, specifically zebra finches, in naturalistic settings. It addresses the critical need for benchmark datasets that integrate precisely synchronized multi-modal recordings to support tasks such as 3D pose estimation, multi-animal tracking, sound source localization, and vocalization attribution. The dataset facilitates research in machine learning, neuroscience, and ethology by enabling the development of robust, unified models for long-term tracking and interpretation of complex social behaviors.
### Purpose
Bird3M addresses the shortage of publicly available datasets for multi-modal animal behavior analysis by providing:
1. A benchmark for unified machine learning models tackling multiple behavioral tasks.
2. A platform for exploring efficient multi-modal information fusion.
3. A resource for ethological studies linking movement, vocalization, and social context to uncover neural and evolutionary mechanisms.
## Dataset Structure
The dataset is organized into three splits: `train`, `val`, and `test`, each as a Hugging Face `Dataset` object. Each row corresponds to a single bird instance in a video frame, with associated multi-modal data.
### Accessing Splits
```python
from datasets import load_dataset
dataset = load_dataset("anonymous-submission000/bird3m")
train_dataset = dataset["train"]
val_dataset = dataset["val"]
test_dataset = dataset["test"]
```
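The full dataset is roughly 145 GB, so downloading every shard up front may be impractical. As a minimal sketch (standard `datasets` streaming API; field access works the same as in the examples below), the splits can also be iterated in streaming mode:
```python
from datasets import load_dataset

# Stream the train split instead of downloading all shards first
streamed_train = load_dataset(
    "anonymous-submission000/bird3m", split="train", streaming=True
)

for example in streamed_train:
    print(example["bird_id"], example["video_name"])
    break  # inspect only the first example
```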
## Dataset Fields
Each example includes the following fields:
- **`bird_id`** (`string`): Unique identifier for the bird instance (e.g., "bird_1").
- **`back_bbox_2d`** (`Sequence[float64]`): 2D bounding box for the back view, format `[x_min, y_min, x_max, y_max]`.
- **`back_keypoints_2d`** (`Sequence[float64]`): 2D keypoints for the back view, format `[x1, y1, v1, x2, y2, v2, ...]`, where `v` is visibility (0: not labeled, 1: labeled but invisible, 2: visible); a parsing sketch follows this list.
- **`back_view_boundary`** (`Sequence[int64]`): Back view boundary, format `[x, y, width, height]`.
- **`bird_name`** (`string`): Biological identifier (e.g., "b13k20_f").
- **`video_name`** (`string`): Video file identifier (e.g., "BP_2020-10-13_19-44-38_564726_0240000").
- **`frame_name`** (`string`): Frame filename (e.g., "img00961.png").
- **`frame_path`** (`Image`): Frame image (`.png`), decoded as a PIL Image on access.
- **`keypoints_3d`** (`Sequence[Sequence[float64]]`): 3D keypoints, format `[[x1, y1, z1], [x2, y2, z2], ...]`.
- **`radio_path`** (`binary`): Radio data file (`.npz`), stored as raw bytes.
- **`reprojection_error`** (`Sequence[float64]`): Reprojection errors for 3D keypoints.
- **`side_bbox_2d`** (`Sequence[float64]`): 2D bounding box for the side view.
- **`side_keypoints_2d`** (`Sequence[float64]`): 2D keypoints for the side view.
- **`side_view_boundary`** (`Sequence[int64]`): Side view boundary.
- **`backpack_color`** (`string`): Backpack tag color (e.g., "purple").
- **`experiment_id`** (`string`): Experiment identifier (e.g., "CopExpBP03").
- **`split`** (`string`): Dataset split ("train", "val", "test").
- **`top_bbox_2d`** (`Sequence[float64]`): 2D bounding box for the top view.
- **`top_keypoints_2d`** (`Sequence[float64]`): 2D keypoints for the top view.
- **`top_view_boundary`** (`Sequence[int64]`): Top view boundary.
- **`video_path`** (`Video`): Video clip (`.mp4`), decoded as a Video object on access.
- **`acc_ch_map`** (`struct`): Maps accelerometer channels to bird identifiers.
- **`acc_sr`** (`float64`): Accelerometer sampling rate (Hz).
- **`has_overlap`** (`bool`): Indicates if accelerometer events overlap with vocalizations.
- **`mic_ch_map`** (`struct`): Maps microphone channels to descriptions.
- **`mic_sr`** (`float64`): Microphone sampling rate (Hz).
- **`acc_path`** (`Audio`): Accelerometer recording (`.wav`), decoded as an Audio signal (`array` and `sampling_rate`).
- **`mic_path`** (`Audio`): Microphone recording (`.wav`), decoded as an Audio signal (`array` and `sampling_rate`).
- **`vocalization`** (`list[struct]`): Vocalization events, each with:
- `overlap_type` (`string`): Overlap/attribution confidence.
- `has_bird` (`bool`): Indicates if attributed to a bird.
- `2ddistance` (`bool`): Indicates if 2D keypoint distance is <20px.
- `small_2ddistance` (`float64`): Minimum 2D keypoint distance (px).
- `voc_metadata` (`Sequence[float64]`): Onset/offset times `[onset_sec, offset_sec]`.
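The flat bounding-box and keypoint vectors above can be reshaped into arrays with a few lines of NumPy. The helper below is a hypothetical sketch (not part of the dataset tooling), assuming the `[x, y, v]` triplet layout described above; `example` is a single row loaded as shown in the next section.
```python
import numpy as np

def parse_view(example, view="top"):
    """Split one camera view of an example into convenient arrays.

    Returns:
        bbox:       (4,) array, [x_min, y_min, x_max, y_max]
        keypoints:  (N, 2) array of x/y coordinates
        visibility: (N,) array, 0 = not labeled, 1 = labeled but invisible, 2 = visible
    """
    bbox = np.asarray(example[f"{view}_bbox_2d"], dtype=np.float64)
    flat = np.asarray(example[f"{view}_keypoints_2d"], dtype=np.float64).reshape(-1, 3)
    return bbox, flat[:, :2], flat[:, 2].astype(int)

# Usage (with `example` loaded as in the next section):
# bbox, kps, vis = parse_view(example, view="top")
# visible_kps = kps[vis == 2]                   # keep only keypoints marked visible
# kps_3d = np.asarray(example["keypoints_3d"])  # (N, 3) array of [x, y, z]
```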
## How to Use
### Loading and Accessing Data
```python
from datasets import load_dataset
import numpy as np
from io import BytesIO
# Load dataset
dataset = load_dataset("anonymous-submission000/bird3m")
train_data = dataset["train"]
# Access an example
example = train_data[0]
# Access fields
bird_id = example["bird_id"]
keypoints_3d = example["keypoints_3d"]
top_bbox = example["top_bbox_2d"]
vocalizations = example["vocalization"]
# Load multimedia
image = example["frame_path"] # PIL Image
video = example["video_path"] # Video object
mic_audio = example["mic_path"] # Audio signal
acc_audio = example["acc_path"] # Audio signal
# Access audio arrays
mic_array = mic_audio["array"]
mic_sr = mic_audio["sampling_rate"]
acc_array = acc_audio["array"]
acc_sr = acc_audio["sampling_rate"]
# Load radio data (stored as the raw bytes of an .npz archive)
radio_bytes = example["radio_path"]
try:
    radio_data = np.load(BytesIO(radio_bytes))
print("Radio data keys:", list(radio_data.keys()))
except Exception as e:
print(f"Could not load radio data: {e}")
# Print example info
print(f"Bird ID: {bird_id}")
print(f"Number of 3D keypoints: {len(keypoints_3d)}")
print(f"Top Bounding Box: {top_bbox}")
print(f"Number of vocalization events: {len(vocalizations)}")
if vocalizations:
first_vocal = vocalizations[0]
print(f"First vocal event metadata: {first_vocal['voc_metadata']}")
print(f"First vocal event overlap type: {first_vocal['overlap_type']}")
```
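Building on the snippet above, the `acc_ch_map` and `mic_ch_map` structs described earlier map channel indices (string keys `'0'`, `'1'`, ...) to bird identifiers and microphone descriptions. The lookup below is a sketch only: whether the map values match `bird_id`, `bird_name`, or another label is an assumption here, so inspect the printed maps against your data.
```python
# Channel maps: string channel index -> bird identifier / microphone description
acc_ch_map = example["acc_ch_map"]
mic_ch_map = example["mic_ch_map"]
print("Accelerometer channels:", acc_ch_map)
print("Microphone channels:", mic_ch_map)

# Hypothetical lookup: accelerometer channels whose label matches this bird.
# The exact identifier used in the map (bird_id vs. bird_name) is an assumption.
matches = [
    int(ch)
    for ch, label in acc_ch_map.items()
    if label in (example["bird_id"], example["bird_name"])
]
print("Candidate accelerometer channel(s) for this bird:", matches)
```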
### Example: Extracting Vocalization Audio Clip
```python
# Continues from the previous snippet (uses mic_array, mic_sr, and vocalizations)
if vocalizations and mic_sr:
onset, offset = vocalizations[0]["voc_metadata"]
onset_sample = int(onset * mic_sr)
offset_sample = int(offset * mic_sr)
vocal_audio_clip = mic_array[onset_sample:offset_sample]
print(f"Duration of first vocal clip: {offset - onset:.3f} seconds")
print(f"Shape of first vocal audio clip: {vocal_audio_clip.shape}")
```
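To write the extracted clip out as a WAV file for listening, the `soundfile` package (an extra dependency, not pulled in by `datasets`) works directly on the NumPy array; a minimal sketch, assuming `vocal_audio_clip` and `mic_sr` from the snippet above:
```python
import soundfile as sf  # pip install soundfile

# Save the extracted vocalization clip at the microphone sampling rate
sf.write("first_vocalization.wav", vocal_audio_clip, int(mic_sr))
```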
**Code Availability**: Baseline code is available at [https://github.com/anonymoussubmission0000/bird3m](https://github.com/anonymoussubmission0000/bird3m).
## Citation
```bibtex
@article{2025bird3m,
title={Bird3M: A Multi-Modal Dataset for Social Behavior Analysis Tool Building},
author={tbd},
journal={arXiv preprint arXiv:XXXX.XXXXX},
year={2025}
}
``` |