---
license: other
license_name: lo-license
license_link: >-
https://customers.livingoptics.com/hubfs/Outbound/Legal/Living%20Optics%20EULA.pdf
task_categories:
- image-segmentation
- image-classification
language:
- en
tags:
- forensics
- blood detection
- blood classification
- hyperspectral
size_categories:
- 10K<n<100K
---
# Living Optics Forensics Dataset
![image/png](https://cdn-uploads.huggingface.co/production/uploads/66aa0ad8f46d069c6339c72c/oVtV8fWMlXljp9BKQEaGq.png)
## Overview
This dataset contains **224 images** captured during a **forensics application investigation** using the **Living Optics Camera**.
The data includes:
- **RGB images**
- **Sparse spectral samples**
- **Instance segmentation masks**
- **White reference spectra**
- **Library spectra**
It is derived from over **200 unique raw files**, corresponding to 224 frames. The dataset has **not** been split into training/validation sets — the choice of split is left to the developer.
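Since no official split is provided, a simple random frame-level split could look like the sketch below. This is only an illustration: the 80/20 ratio and the fixed seed are arbitrary choices, not part of the dataset.
```python
import numpy as np

# Illustrative 80/20 frame-level split over the 224 frames (not an official split).
num_frames = 224
rng = np.random.default_rng(seed=0)

indices = rng.permutation(num_frames)
split = int(0.8 * num_frames)
train_idx, val_idx = indices[:split], indices[split:]

print(f"{len(train_idx)} training frames, {len(val_idx)} validation frames")
```
Because several frames may come from the same raw file, grouping frames by their source file before splitting can help avoid near-duplicate scenes appearing in both sets.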
An example annotation can be seen below:
![image/png](https://cdn-uploads.huggingface.co/production/uploads/668ff5cfd5298d417a272a59/5ZbWbWPjBE33khFL7KRjN.png)
### Contents
- **249 instances** of horse blood captured on various surfaces.
- **167 instances** of blood confusers (e.g., fake blood, ketchup) across **22 different surfaces**.
- A **total of 416 labeled instances**.
Additionally, the dataset contains **library spectra** captured with a spectrometer covering the wavelength range **350–1000 nm**, sampled at a higher resolution than the Living Optics camera.
These spectra can be used for:
- Spectral lookup–style algorithms (a minimal sketch is shown below)
- Outlier filtering
- **Negative sampling** when spectra do not fall within labeled segmentation masks
Extra **unlabeled data** is available upon request.
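As an illustration of the spectral lookup–style use mentioned above, a nearest-neighbour match by spectral angle against the library could look like the sketch below. It assumes the library spectra have been loaded into a dict of `{class_name: (wavelengths, values)}` arrays and that the camera band centres are available as `camera_wavelengths`; these names are illustrative and are not part of the lo-sdk API.
```python
import numpy as np

def spectral_angle(a: np.ndarray, b: np.ndarray) -> float:
    """Spectral angle (radians) between two spectra of equal length."""
    cos = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12)
    return float(np.arccos(np.clip(cos, -1.0, 1.0)))

def lookup_class(query: np.ndarray, camera_wavelengths: np.ndarray, library: dict):
    """Return the library class whose spectrum (resampled to the camera bands) is closest in angle."""
    best_name, best_angle = None, np.inf
    for name, (lib_wavelengths, lib_values) in library.items():
        # Resample the higher-resolution library spectrum onto the camera bands.
        resampled = np.interp(camera_wavelengths, lib_wavelengths, lib_values)
        angle = spectral_angle(query, resampled)
        if angle < best_angle:
            best_name, best_angle = name, angle
    return best_name, best_angle
```
The same distance can also be thresholded for outlier filtering: spectra whose best angle exceeds a chosen threshold are treated as matching no library entry.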
## Classes
The dataset contains the following **26 classes**:
| ID | Class Name |
|-------|------------|
| 104 | Horse blood (sample) |
| 103 | Tomato ketchup (sample) |
| 106 | Red food dye (sample) |
| 107 | Fake blood (sample) |
| 1015 | 100% Cotton Shirt (White) (surface) |
| 1013 | 100% Cotton Shirt (Black) (surface) |
| 1012 | Light Fabric Lined Plywood (EF64) – 3 mm (surface) |
| 1010 | PVC (EF9) Black Plywood – 3 mm (surface) |
| 1007 | PVC (EF50) Light Woodgrain Plywood – 3 mm (surface) |
| 1016 | 100% Cotton Shirt (Brown) (surface) |
| 1011 | Normal Plywood – 3 mm (surface) |
| 1008 | PVC Walnut Woodgrain Plywood (EF326) – 3 mm (surface) |
| 1006 | PVC Leather (Black) (surface) |
| 1005 | PVC Leather (White) (surface) |
| 1004 | PVC Leather (Brown) (surface) |
| 1003 | PVC Leather (Red) (surface) |
| 1019 | Skinny Jeans (Light Blue) (surface) |
| 1024 | Dri-fit Shirt (Brown) (surface) |
| 1022 | Dri-fit Shirt (White) (surface) |
| 1018 | Skinny Jeans (Black) (surface) |
| 1017 | Skinny Jeans (Grey) (surface) |
| 1021 | Dri-fit Shirt (Red) (surface) |
| 1020 | Skinny Jeans (Dark Blue) (surface) |
| 1009 | PVC White Plywood – 3 mm (surface) |
| 1023 | Dri-fit Shirt (Black) (surface) |
| 1014 | 100% Cotton Shirt (Maroon) (surface) |
Unlabeled or background regions can be grouped into a single `"background"` class.
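A sketch of this grouping, reusing the mask helpers that appear in the Usage example below (the `category` key is taken from that example; the function name is illustrative):
```python
import numpy as np
from lo_dataset_reader import rle_to_mask, spectral_coordinate_indices_in_mask

def label_spectral_points(info, scene, spectra, annotations):
    """Label each sampled spectrum with its annotation class; points outside every mask become 'background'."""
    point_labels = np.full(spectra.shape[0], "background", dtype=object)
    for annotation in annotations:
        mask = rle_to_mask(annotation["segmentation"], scene.shape)
        indices = spectral_coordinate_indices_in_mask(mask, info.sampling_coordinates)
        category = annotation.get("extern", {}).get("stats", {}).get("category", "unknown")
        point_labels[indices] = category
    return point_labels
```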
## Visualization
![image/png](https://cdn-uploads.huggingface.co/production/uploads/668ff5cfd5298d417a272a59/CzsXVbg8dfpVC6eAhcWVN.png)
## Requirements
- [lo-sdk](https://cloud.livingoptics.com/)
- [datareader](https://github.com/livingoptics/datareader.git)
## Download instructions
You can access this dataset via the [Living Optics Cloud Portal](https://cloud.livingoptics.com/shared-resources?downloadFile=data%2Fannotated-datasets%2FForensics-Dataset.zip).
See our [Spatial Spectral ML](https://github.com/livingoptics/spatial-spectral-ml) project for an example of how to train and run a segmentation and spectral classification algorithm using this dataset.
## Usage
```python
import os
import numpy as np
import matplotlib.pyplot as plt
from lo_dataset_reader import DatasetReader, spectral_coordinate_indices_in_mask, rle_to_mask
os.environ["QT_QPA_PLATFORM"] = "xcb"
dataset_path = "/path/to/dataset"
dataset = DatasetReader(dataset_path, display_fig=True)
for idx, ((info, scene, spectra, unit, images_extern), (converted_spectra, converted_unit), annotations, library_spectra, labels) in enumerate(dataset):
    for ann_idx, annotation in enumerate(annotations):
        annotation["labels"] = labels

        # Visualise the annotation on the scene
        dataset.save_annotation_visualisation(scene, annotation, images_extern, ann_idx)

        # Get spectrum stats from the annotation
        stats = annotation.get("extern", {}).get("stats", {})
        label = stats.get("category")
        mean_radiance_spectrum = stats.get("mean_radiance_spectrum")
        mean_reflectance_spectrum = stats.get("mean_reflectance_spectrum")

        # Get the segmentation mask and the spectral sample indices inside it
        mask = rle_to_mask(annotation["segmentation"], scene.shape)
        spectral_indices = spectral_coordinate_indices_in_mask(mask, info.sampling_coordinates)

        # Extract the raw and converted spectra for those samples
        spec = spectra[spectral_indices, :]
        if converted_spectra is not None:
            conv_spec = converted_spectra[spectral_indices, :]
        else:
            conv_spec = None

        # X-axis based on band index, or on wavelengths when available
        x = np.arange(spec.shape[1])
        if stats.get("wavelength_min") is not None and stats.get("wavelength_max") is not None:
            x = np.linspace(stats["wavelength_min"], stats["wavelength_max"], spec.shape[1])

        # Determine the plot layout
        if converted_spectra is not None:
            fig, axs = plt.subplots(2, 2, figsize=(12, 8))
            axs_top = axs[0]
            axs_bottom = axs[1]
        else:
            fig, axs_top = plt.subplots(1, 2, figsize=(12, 4))
            print(f"Warning: No converted_spectra for annotation '{label}'")

        unit_label = unit.capitalize() if unit else "Radiance"

        # (1,1) Individual spectra
        for s in spec:
            axs_top[0].plot(x, s, alpha=0.3)
        axs_top[0].set_title(f"{unit_label} Spectra")
        axs_top[0].set_xlabel("Wavelength")
        axs_top[0].set_ylabel(unit_label)

        # (1,2) Mean + min/max range (before conversion)
        if mean_radiance_spectrum is not None:
            spec_min = np.min(spec, axis=0)
            spec_max = np.max(spec, axis=0)
            axs_top[1].fill_between(x, spec_min, spec_max, color='lightblue', alpha=0.5, label='Min-Max Range')
            axs_top[1].plot(x, mean_radiance_spectrum, color='blue', label=f'Mean {unit_label}')
            axs_top[1].set_title(f"Extern Mean ± Range ({unit_label})")
            axs_top[1].set_xlabel("Wavelength")
            axs_top[1].set_ylabel(unit_label)
            axs_top[1].legend()

        # (2,1) and (2,2) Only drawn when converted_spectra is available
        if converted_spectra is not None and conv_spec is not None:
            for s in conv_spec:
                axs_bottom[0].plot(x, s, alpha=0.3)
            axs_bottom[0].set_title(f"{converted_unit} Spectra")
            axs_bottom[0].set_xlabel("Wavelength")
            axs_bottom[0].set_ylabel(f"{converted_unit}")

            if mean_reflectance_spectrum is not None:
                conv_min = np.min(conv_spec, axis=0)
                conv_max = np.max(conv_spec, axis=0)
                axs_bottom[1].fill_between(x, conv_min, conv_max, color='lightgreen', alpha=0.5, label='Min-Max Range')
                axs_bottom[1].plot(x, mean_reflectance_spectrum, color='green', label=f'Mean {converted_unit}')
                axs_bottom[1].set_title(f"Extern Mean ± Range ({converted_unit})")
                axs_bottom[1].set_xlabel("Wavelength")
                axs_bottom[1].set_ylabel(f"{converted_unit}")
                axs_bottom[1].legend()

        fig.suptitle(f"Annotation {label}", fontsize=16)
        plt.tight_layout()
        plt.show()
```
For more details on the dataset format and reader, see the [dataset format documentation](https://github.com/livingoptics/datareader/blob/main/docs/lo_format_dataset.md).
## Citation
Raw data is available upon request.