Model Card for PathBLIP-2

A vision-language model built on the BLIP-2 framework, combining BioGPT and HIPT, for pathology report generation and cross-modal retrieval of melanocytic lesions.
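To illustrate the cross-modal retrieval task, the sketch below ranks pathology reports against a WSI embedding by cosine similarity, as in BLIP-2-style image-text contrastive retrieval. This is a generic illustration, not this model's implementation: the embedding dimension and the random placeholder embeddings are assumptions for demonstration only.

```python
import numpy as np

def retrieve(image_emb, report_embs):
    """Rank report embeddings by cosine similarity to a single image embedding.

    Returns the report indices in descending order of similarity, plus the scores.
    (A sketch of BLIP-2-style contrastive retrieval; not the model's actual code.)
    """
    img = image_emb / np.linalg.norm(image_emb)
    reps = report_embs / np.linalg.norm(report_embs, axis=1, keepdims=True)
    scores = reps @ img  # cosine similarities
    return np.argsort(-scores), scores

# Placeholder embeddings: one WSI embedding and three candidate report embeddings.
rng = np.random.default_rng(0)
image_emb = rng.normal(size=256)
report_embs = rng.normal(size=(3, 256))
order, scores = retrieve(image_emb, report_embs)
```

In the actual model, the image embedding would come from the HIPT-based vision encoder and the text embeddings from the language side; here both are stand-ins.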

Model Details

This repository contains multiple checkpoints of the model used for the experiments in the paper. The model was trained and evaluated, under different training configurations, on a dataset of 19,636 melanocytic lesion cases, each consisting of one or more whole slide images (WSIs) and a pathology report. The supporting code is available from the corresponding GitHub repository. We refer to the paper for more information on the dataset, training, evaluation, and limitations.
