---
dataset_info:
  features:
    - name: id
      dtype: string
    - name: title
      dtype: string
    - name: description
      dtype: string
    - name: cpes
      sequence: string
    - name: cvss_v4_0
      dtype: float64
    - name: cvss_v3_1
      dtype: float64
    - name: cvss_v3_0
      dtype: float64
    - name: cvss_v2_0
      dtype: float64
  splits:
    - name: train
      num_bytes: 363023583.0092845
      num_examples: 559803
    - name: test
      num_bytes: 40336385.990715496
      num_examples: 62201
  download_size: 158862200
  dataset_size: 403359969
configs:
  - config_name: default
    data_files:
      - split: train
        path: data/train-*
      - split: test
        path: data/test-*
task_categories:
  - text-classification
license: cc-by-4.0
library_name: datasets
tags:
  - vulnerability
  - cybersecurity
  - security
  - cve
  - cvss
---

This dataset, CIRCL/vulnerability-scores, comprises over 600,000 real-world vulnerabilities used to train and evaluate VLAI, a transformer-based model designed to predict software vulnerability severity levels directly from text descriptions, enabling faster and more consistent triage.

The dataset is presented in the paper *VLAI: A RoBERTa-Based Model for Automated Vulnerability Severity Classification*.

- Project page: https://vulnerability.circl.lu
- Associated code: https://github.com/vulnerability-lookup/ML-Gateway
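
The CVSS columns store raw numeric scores rather than severity labels. If you want categorical targets of the kind VLAI predicts, one option is to bucket the scores with the standard CVSS v3.1 qualitative rating scale. The sketch below illustrates that idea; it is not the project's own preprocessing, the `severity_label` helper is a name chosen here, and the v3.1-then-v3.0 fallback order is an assumption.

```python
import math

from datasets import load_dataset

dataset = load_dataset("CIRCL/vulnerability-scores")


def severity_label(score):
    """Map a numeric CVSS v3.x score to the standard CVSS v3.1 qualitative rating."""
    if score is None or (isinstance(score, float) and math.isnan(score)):
        return None  # no score of this version recorded for the entry
    if score == 0.0:
        return "None"
    if score < 4.0:
        return "Low"
    if score < 7.0:
        return "Medium"
    if score < 9.0:
        return "High"
    return "Critical"


# Add a categorical column, preferring CVSS v3.1 and falling back to v3.0
# (the fallback order is an assumption made for this sketch).
labelled = dataset["train"].map(
    lambda row: {
        "severity": severity_label(
            row["cvss_v3_1"] if row["cvss_v3_1"] is not None else row["cvss_v3_0"]
        )
    }
)
```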

## Sources of the data

- CVE Program (enriched with data from vulnrichment and Fraunhofer FKIE)
- GitHub Security Advisories
- PySec advisories
- CSAF Red Hat
- CSAF Cisco
- CSAF CISA

The data is extracted from the database of Vulnerability-Lookup.
Dumps of the data are available here.
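
Because each record keeps its source's native identifier in `id`, the originating feed can often be recognized from the identifier prefix. The snippet below is a rough sketch of that idea; the prefix list is illustrative rather than exhaustive, and the `source_of` helper is a name introduced here.

```python
from collections import Counter

from datasets import load_dataset

dataset = load_dataset("CIRCL/vulnerability-scores")

# Illustrative prefixes only; the dataset contains further identifier schemes.
PREFIXES = ("CVE-", "GHSA-", "PYSEC-", "RHSA-")


def source_of(vuln_id: str) -> str:
    for prefix in PREFIXES:
        if vuln_id.startswith(prefix):
            return prefix.rstrip("-")
    return "other"


# Count how many train entries fall under each recognized prefix.
counts = Counter(source_of(vuln_id) for vuln_id in dataset["train"]["id"])
print(counts.most_common())
```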

## Query with datasets

```python
import json

from datasets import load_dataset

# Load both splits (train and test) from the Hugging Face Hub.
dataset = load_dataset("CIRCL/vulnerability-scores")

# Identifiers from several of the upstream sources (CVE, Red Hat CSAF, GHSA, PySec).
vulnerabilities = ["CVE-2012-2339", "RHSA-2023:5964", "GHSA-7chm-34j8-4f22", "PYSEC-2024-225"]

# Keep only the entries whose id matches one of the identifiers above.
filtered_entries = dataset.filter(lambda elem: elem["id"] in vulnerabilities)

for entry in filtered_entries["train"]:
    print(json.dumps(entry, indent=4))
```
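
Since pandas is also listed among the compatible libraries, a split can be converted to a DataFrame for ad-hoc analysis. A brief sketch; the summary statistics shown are just an example of what the DataFrame enables:

```python
from datasets import load_dataset

dataset = load_dataset("CIRCL/vulnerability-scores")

# Convert the train split to a pandas DataFrame.
df = dataset["train"].to_pandas()

# Example: how many entries carry a CVSS v3.1 score, and their mean value.
print(df["cvss_v3_1"].notna().sum())
print(df["cvss_v3_1"].mean())
```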