PL-BERT-wp-ca

Overview

Model Description

PL-BERT-wp-ca is a phoneme-level masked language model trained on Catalan text with diverse regional accents. It is based on the PL-BERT architecture, which learns phoneme representations via a BERT-style masked language modeling objective.

This model is designed to support phoneme-based text-to-speech (TTS) systems, including but not limited to StyleTTS2. Thanks to its Catalan-specific phoneme vocabulary and contextual embedding capabilities, it can serve as a phoneme encoder for any TTS architecture requiring phoneme-level features.

Features of our PL-BERT:

  • It is trained exclusively on Catalan phonemized text.
  • It uses a reduced phoneme vocabulary of 178 tokens.
  • It uses a WordPiece tokenizer.
  • It includes a custom token_maps.pkl and adapted util.py.
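
The token_maps.pkl file can be inspected directly. The sketch below assumes it is a pickled Python dict, as in the reference PL-BERT implementation; the exact structure of the values is an assumption here.

    # Minimal sketch: inspect the phoneme-to-ID mapping shipped with this model.
    # Assumes token_maps.pkl is a pickled Python dict, as in the reference
    # PL-BERT implementation; the value structure is an assumption.
    import pickle

    with open("token_maps.pkl", "rb") as f:
        token_maps = pickle.load(f)

    print(f"{len(token_maps)} entries")
    key = next(iter(token_maps))
    print(key, token_maps[key])  # one example mapping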

Intended Uses and Limitations

Intended uses

  • Integration into phoneme-based TTS pipelines such as StyleTTS2, Matxa-TTS, or custom diffusion-based synthesizers.
  • Accent-aware synthesis and phoneme embedding extraction for Catalan.
  • Research on phoneme-level language modeling in low-resource or multi-accent settings.

Limitations

  • Not designed for general NLP tasks like classification or sentiment analysis.
  • Only supports Catalan phoneme tokens.
  • Some accents may be underrepresented in the training data.

How to Get Started with the Model

Here is an example of how to use this model within the StyleTTS2 framework:

  1. Clone the StyleTTS2 repository: https://github.com/yl4579/StyleTTS2

  2. Inside the Utils directory, create a new folder, for example: PLBERT_cat_multiaccent.

  3. Copy the following files into that folder:

    • config.yml (training configuration)
    • step_1000000.t7 (trained checkpoint)
    • token_maps.pkl (phoneme to ID mapping)
    • util.py (modified to fix position ID loading)
  4. In your StyleTTS2 configuration file, update the PLBERT_dir entry to:

    PLBERT_dir: Utils/PLBERT_cat_multiaccent

  5. Update the import statement in your code to:

    from Utils.PLBERT_cat_multiaccent.util import load_plbert

  6. Use espeak-ng with the language code ca to phonemize your Catalan text files for training and validation.
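
As a hedged illustration of step 6, the phonemizer package offers a convenient Python wrapper around espeak-ng (using phonemizer rather than the espeak-ng CLI directly is an assumption on top of the instructions above):

    # Minimal phonemization sketch using the `phonemizer` package as a
    # wrapper around espeak-ng. The model card only requires espeak-ng with
    # language code "ca"; going through phonemizer is an assumption.
    from phonemizer import phonemize

    text = "Bon dia, com estàs?"
    phonemes = phonemize(text, language="ca", backend="espeak", strip=True)
    print(phonemes)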

Note: Although this example uses StyleTTS2, the model is compatible with other TTS architectures that operate on phoneme sequences. You can use the contextualized phoneme embeddings from PL-BERT in any compatible synthesis system.
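
For example, here is a minimal sketch of extracting contextual phoneme embeddings once the setup steps above are complete. The load_plbert signature, the forward call, and the output shape follow the reference StyleTTS2/PL-BERT utilities and are assumptions here:

    # Minimal sketch: use PL-BERT as a standalone phoneme encoder after the
    # setup above. The forward signature and return value follow the reference
    # StyleTTS2/PL-BERT util.py and are assumptions.
    import torch
    from Utils.PLBERT_cat_multiaccent.util import load_plbert

    plbert = load_plbert("Utils/PLBERT_cat_multiaccent")
    plbert.eval()

    # phoneme_ids: hypothetical batch of IDs produced via token_maps.pkl
    phoneme_ids = torch.tensor([[12, 45, 7, 102, 33]])
    with torch.no_grad():
        hidden = plbert(phoneme_ids, attention_mask=torch.ones_like(phoneme_ids))

    print(hidden.shape)  # expected (1, 5, 768): contextual phoneme embeddings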


Training Details

Training data

The model was trained on a Catalan corpus phonemized using espeak-ng. The dataset includes sentences from speakers across Catalonia, the Balearic Islands, and Valencia. It uses a consistent phoneme token set with boundary markers and masking tokens.

  • Tokenizer: custom (split on whitespace)
  • Phoneme masking strategy: word-level and phoneme-level masking with replacement
  • Training steps: 1,000,000
  • Precision: mixed (fp16)

Training configuration

Model parameters:

  • Vocabulary size: 178
  • Hidden size: 768
  • Attention heads: 12
  • Intermediate size: 2048
  • Number of layers: 12
  • Max position embeddings: 512
  • Dropout: 0.1
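
For reference, these values map onto a Hugging Face AlbertConfig roughly as follows. The reference PL-BERT implementation is ALBERT-based; applying the same mapping to this checkpoint is an assumption, not the exact training configuration:

    # Sketch: the parameters above expressed as a Hugging Face AlbertConfig.
    # PL-BERT's reference implementation is ALBERT-based; treating this
    # checkpoint the same way is an assumption.
    from transformers import AlbertConfig, AlbertModel

    config = AlbertConfig(
        vocab_size=178,
        hidden_size=768,
        num_attention_heads=12,
        intermediate_size=2048,
        num_hidden_layers=12,
        max_position_embeddings=512,
        hidden_dropout_prob=0.1,
        attention_probs_dropout_prob=0.1,
    )
    model = AlbertModel(config)  # randomly initialised; trained weights live in step_1000000.t7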

Other parameters:

  • Batch size: 8
  • Max mel length: 512
  • Word mask probability: 0.15
  • Phoneme mask probability: 0.1
  • Replacement probability: 0.2
  • Token separator: space
  • Token mask: M
  • Word separator ID: 102
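
An illustrative sketch of how these masking parameters interact is given below. This is one plausible reading of the scheme; the actual training dataloader may combine the probabilities differently:

    # Illustrative sketch of the word- and phoneme-level masking scheme above.
    # One plausible reading of the listed parameters; the real dataloader
    # logic may differ in detail.
    import random

    WORD_MASK_PROB = 0.15    # chance of masking an entire word
    PHONEME_MASK_PROB = 0.1  # chance of masking an individual phoneme
    REPLACE_PROB = 0.2       # chance a masked slot gets a random phoneme instead
    MASK = "M"               # the token mask listed above

    def mask_word(phonemes, vocab):
        """phonemes: phoneme tokens of one word; vocab: full phoneme inventory."""
        if random.random() < WORD_MASK_PROB:
            targets = list(range(len(phonemes)))  # word-level: mask every phoneme
        else:
            targets = [i for i in range(len(phonemes))
                       if random.random() < PHONEME_MASK_PROB]
        out = list(phonemes)
        for i in targets:
            out[i] = random.choice(vocab) if random.random() < REPLACE_PROB else MASK
        return out

    print(mask_word(list("bɔndiə"), vocab=list("abdinɔəot")))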

Evaluation

The model has not been benchmarked with perplexity or extrinsic metrics, but it has been successfully integrated into TTS pipelines such as StyleTTS2, where it enables synthesis of Catalan speech with regional accent variation.


Citation

If this model contributes to your research, please cite the work:

@misc{zevallosbertwpca,
      title={PL-BERT-wp-ca}, 
      author={Rodolfo Zevallos and Jose Giraldo and Carme Armentano-Oller},
      organization={Barcelona Supercomputing Center},
      url={https://huggingface.co/langtech-veu/PL-BERT-wp-ca},
      year={2025}
}

Additional Information

Author

Developed by Rodolfo Zevallos at the Language Technologies Laboratory of the Barcelona Supercomputing Center.

Contact

For further information, please send an email to langtech@bsc.es.

Copyright

Copyright (c) 2025 by Language Technologies Laboratory, Barcelona Supercomputing Center.

License

Apache-2.0

Funding

This work is funded by the Ministerio para la Transformación Digital y de la Función Pública and by the EU – NextGenerationEU, within the framework of the project Desarrollo de Modelos ALIA.
