tags:
  - computer-vision
  - audio
  - keypoint-detection
  - animal-behavior
  - multi-modal
  - jsonl
dataset_info:
  features:
    - name: bird_id
      dtype: string
    - name: back_bbox_2d
      sequence: float64
    - name: back_keypoints_2d
      sequence: float64
    - name: back_view_boundary
      sequence: int64
    - name: bird_name
      dtype: string
    - name: video_name
      dtype: string
    - name: frame_name
      dtype: string
    - name: frame_path
      dtype: image
    - name: keypoints_3d
      sequence:
        sequence: float64
    - name: radio_path
      dtype: binary
    - name: reprojection_error
      sequence: float64
    - name: side_bbox_2d
      sequence: float64
    - name: side_keypoints_2d
      sequence: float64
    - name: side_view_boundary
      sequence: int64
    - name: backpack_color
      dtype: string
    - name: experiment_id
      dtype: string
    - name: split
      dtype: string
    - name: top_bbox_2d
      sequence: float64
    - name: top_keypoints_2d
      sequence: float64
    - name: top_view_boundary
      sequence: int64
    - name: video_path
      dtype: video
    - name: acc_ch_map
      struct:
        - name: '0'
          dtype: string
        - name: '1'
          dtype: string
        - name: '2'
          dtype: string
        - name: '3'
          dtype: string
        - name: '4'
          dtype: string
        - name: '5'
          dtype: string
        - name: '6'
          dtype: string
        - name: '7'
          dtype: string
    - name: acc_sr
      dtype: float64
    - name: has_overlap
      dtype: bool
    - name: mic_ch_map
      struct:
        - name: '0'
          dtype: string
        - name: '1'
          dtype: string
        - name: '2'
          dtype: string
        - name: '3'
          dtype: string
        - name: '4'
          dtype: string
        - name: '5'
          dtype: string
        - name: '6'
          dtype: string
    - name: mic_sr
      dtype: float64
    - name: acc_path
      dtype: audio
    - name: mic_path
      dtype: audio
    - name: vocalization
      list:
        - name: overlap_type
          dtype: string
        - name: has_bird
          dtype: bool
        - name: 2ddistance
          dtype: bool
        - name: small_2ddistance
          dtype: float64
        - name: voc_metadata
          sequence: float64
  splits:
    - name: train
      num_bytes: 74517864701.0153
      num_examples: 6804
    - name: val
      num_bytes: 32619282428.19056
      num_examples: 2916
    - name: test
      num_bytes: 38018415640.55813
      num_examples: 3431
  download_size: 35456328366
  dataset_size: 145155562769.764
configs:
  - config_name: default
    data_files:
      - split: train
        path: data/train-*
      - split: val
        path: data/val-*
      - split: test
        path: data/test-*

# Bird3M Dataset

## Dataset Description

Bird3M is the first synchronized, multi-modal, multi-individual dataset designed for comprehensive behavioral analysis of freely interacting birds, specifically zebra finches, in naturalistic settings. It addresses the critical need for benchmark datasets that integrate precisely synchronized multi-modal recordings to support tasks such as 3D pose estimation, multi-animal tracking, sound source localization, and vocalization attribution. The dataset facilitates research in machine learning, neuroscience, and ethology by enabling the development of robust, unified models for long-term tracking and interpretation of complex social behaviors.

## Purpose

Bird3M bridges the gap in publicly available datasets for multi-modal animal behavior analysis by providing:

  1. A benchmark for unified machine learning models tackling multiple behavioral tasks.
  2. A platform for exploring efficient multi-modal information fusion.
  3. A resource for ethological studies linking movement, vocalization, and social context to uncover neural and evolutionary mechanisms.

## Dataset Structure

The dataset is organized into three splits: train, val, and test, each as a Hugging Face Dataset object. Each row corresponds to a single bird instance in a video frame, with associated multi-modal data.

### Accessing Splits

```python
from datasets import load_dataset

dataset = load_dataset("anonymous-submission000/bird3m")
train_dataset = dataset["train"]
val_dataset = dataset["val"]
test_dataset = dataset["test"]
```

## Dataset Fields

Each example includes the following fields:

- bird_id (string): Unique identifier for the bird instance (e.g., "bird_1").
- back_bbox_2d (Sequence[float64]): 2D bounding box for the back view, format [x_min, y_min, x_max, y_max].
- back_keypoints_2d (Sequence[float64]): 2D keypoints for the back view, format [x1, y1, v1, x2, y2, v2, ...], where v is visibility (0: not labeled, 1: labeled but invisible, 2: visible).
- back_view_boundary (Sequence[int64]): Back view boundary, format [x, y, width, height].
- bird_name (string): Biological identifier (e.g., "b13k20_f").
- video_name (string): Video file identifier (e.g., "BP_2020-10-13_19-44-38_564726_0240000").
- frame_name (string): Frame filename (e.g., "img00961.png").
- frame_path (Image): Frame image (.png), loaded as a PIL Image.
- keypoints_3d (Sequence[Sequence[float64]]): 3D keypoints, format [[x1, y1, z1], [x2, y2, z2], ...].
- radio_path (binary): Radio data (.npz), stored as raw bytes.
- reprojection_error (Sequence[float64]): Reprojection errors for the 3D keypoints.
- side_bbox_2d (Sequence[float64]): 2D bounding box for the side view.
- side_keypoints_2d (Sequence[float64]): 2D keypoints for the side view.
- side_view_boundary (Sequence[int64]): Side view boundary.
- backpack_color (string): Backpack tag color (e.g., "purple").
- experiment_id (string): Experiment identifier (e.g., "CopExpBP03").
- split (string): Dataset split ("train", "val", or "test").
- top_bbox_2d (Sequence[float64]): 2D bounding box for the top view.
- top_keypoints_2d (Sequence[float64]): 2D keypoints for the top view.
- top_view_boundary (Sequence[int64]): Top view boundary.
- video_path (Video): Video clip (.mp4), loaded as a Video object.
- acc_ch_map (struct): Maps accelerometer channels to bird identifiers.
- acc_sr (float64): Accelerometer sampling rate (Hz).
- has_overlap (bool): Indicates whether accelerometer events overlap with vocalizations.
- mic_ch_map (struct): Maps microphone channels to descriptions.
- mic_sr (float64): Microphone sampling rate (Hz).
- acc_path (Audio): Accelerometer recording (.wav), loaded as an Audio signal.
- mic_path (Audio): Microphone recording (.wav), loaded as an Audio signal.
- vocalization (list[struct]): Vocalization events, each with:
  - overlap_type (string): Overlap/attribution confidence.
  - has_bird (bool): Indicates whether the event is attributed to a bird.
  - 2ddistance (bool): Indicates whether the minimum 2D keypoint distance is below 20 px.
  - small_2ddistance (float64): Minimum 2D keypoint distance (px).
  - voc_metadata (Sequence[float64]): Onset/offset times [onset_sec, offset_sec].
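
The flat [x, y, v, ...] keypoint layout described above is straightforward to unpack with NumPy. A minimal sketch with illustrative values (the coordinates below are made up for demonstration; real data comes from fields such as top_keypoints_2d and top_bbox_2d):

```python
import numpy as np

# Illustrative flat keypoint list in [x1, y1, v1, x2, y2, v2, ...] order;
# in practice this comes from e.g. example["top_keypoints_2d"].
flat_keypoints = [10.0, 20.0, 2, 30.0, 40.0, 0, 50.0, 60.0, 2]

kp = np.asarray(flat_keypoints, dtype=np.float64).reshape(-1, 3)
xy = kp[:, :2]           # (K, 2) pixel coordinates
visibility = kp[:, 2]    # 0: not labeled, 1: labeled but invisible, 2: visible
visible_xy = xy[visibility == 2]

print(kp.shape)          # (3, 3)
print(visible_xy.shape)  # (2, 2)

# A view can be cropped with its [x_min, y_min, x_max, y_max] bounding box,
# since PIL's crop takes a (left, upper, right, lower) tuple:
# crop = example["frame_path"].crop(tuple(example["top_bbox_2d"]))
```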

## How to Use

### Loading and Accessing Data

```python
from io import BytesIO

import numpy as np
from datasets import load_dataset

# Load dataset
dataset = load_dataset("anonymous-submission000/bird3m")
train_data = dataset["train"]

# Access an example
example = train_data[0]

# Access fields
bird_id = example["bird_id"]
keypoints_3d = example["keypoints_3d"]
top_bbox = example["top_bbox_2d"]
vocalizations = example["vocalization"]

# Load multimedia
image = example["frame_path"]   # PIL Image
video = example["video_path"]   # Video object
mic_audio = example["mic_path"]  # Audio signal
acc_audio = example["acc_path"]  # Audio signal

# Access audio arrays
mic_array = mic_audio["array"]
mic_sr = mic_audio["sampling_rate"]
acc_array = acc_audio["array"]
acc_sr = acc_audio["sampling_rate"]

# Load radio data (.npz stored as raw bytes)
radio_bytes = example["radio_path"]
try:
    radio_data = np.load(BytesIO(radio_bytes))
    print("Radio data keys:", list(radio_data.keys()))
except Exception as e:
    print(f"Could not load radio data: {e}")

# Print example info
print(f"Bird ID: {bird_id}")
print(f"Number of 3D keypoints: {len(keypoints_3d)}")
print(f"Top bounding box: {top_bbox}")
print(f"Number of vocalization events: {len(vocalizations)}")

if vocalizations:
    first_vocal = vocalizations[0]
    print(f"First vocal event metadata: {first_vocal['voc_metadata']}")
    print(f"First vocal event overlap type: {first_vocal['overlap_type']}")
```
### Example: Extracting Vocalization Audio Clip

```python
# Continues from the snippet above (uses vocalizations, mic_array, mic_sr)
if vocalizations and mic_sr:
    onset, offset = vocalizations[0]["voc_metadata"]
    onset_sample = int(onset * mic_sr)
    offset_sample = int(offset * mic_sr)
    vocal_audio_clip = mic_array[onset_sample:offset_sample]
    print(f"Duration of first vocal clip: {offset - onset:.3f} seconds")
    print(f"Shape of first vocal audio clip: {vocal_audio_clip.shape}")
```
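
Because acc_ch_map maps accelerometer channel indices (as string keys) to bird identifiers, the channel belonging to a particular bird can be looked up and sliced out of a multi-channel array. A minimal sketch with a made-up channel map and placeholder signal (the bird names and channel layout below are illustrative; the real map comes from example["acc_ch_map"], and the axis ordering of the decoded array should be checked against your data):

```python
import numpy as np

# Illustrative channel map; in the dataset this is example["acc_ch_map"].
acc_ch_map = {"0": "b13k20_f", "1": "b14k21_m", "2": "empty"}

# Stand-in for a decoded (channels, samples) accelerometer array.
acc_array = np.zeros((3, 48000))
acc_array[1, :] = 1.0  # mark channel 1 so the lookup result is visible

def channel_for_bird(ch_map, bird_name):
    """Return the integer channel index mapped to bird_name, or None."""
    for ch, name in ch_map.items():
        if name == bird_name:
            return int(ch)
    return None

ch = channel_for_bird(acc_ch_map, "b14k21_m")
bird_trace = acc_array[ch]
print(ch, bird_trace.mean())  # 1 1.0
```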

**Code Availability:** Baseline code is available at https://github.com/anonymoussubmission0000/bird3m.

## Citation

```bibtex
@article{2025bird3m,
  title={Bird3M: A Multi-Modal Dataset for Social Behavior Analysis Tool Building},
  author={tbd},
  journal={arXiv preprint arXiv:XXXX.XXXXX},
  year={2025}
}
```