import gradio as gr


def render_submission_page():
    text = r"""
Want to submit your own system to the leaderboard? We accept submissions from both open source and proprietary systems. Instructions and the submission form can be found here: [Submission Form](https://drive.google.com/file/d/1YmW3da68hYAWeTmMAJOcEgUlJG3iGXGx/view?usp=sharing). We request submitting teams to fill out this form and reach out to us at

## General Instructions for commercial and open source systems

To include scores on this leaderboard and to facilitate verification of the submitted system, the submitting team must provide the following artifacts along with the signed submission form:

- Protocol files used to generate the scores for all evaluation datasets listed on the leaderboard at the time of submission.
- Score files generated by the submitted system for all evaluation datasets listed on the leaderboard at the time of submission.
- The number of parameters in the submitted system.

## The submitting team must abide by the following terms for the scores to be considered for evaluation:

- The submitted system has not been trained, directly or indirectly, on the evaluation (test) or development sets of any dataset with a public license. This includes, but is not limited to, any form of supervised or unsupervised training, finetuning, or hyperparameter optimization involving these sets.
- Reported scores correspond to a single system evaluated consistently across all evaluation sets, using the same checkpoint and parameters with no modifications to the hyperparameters.
- Commercial systems with a proprietary license agree to grant API access to the DF Arena team, if required, strictly for verification purposes.
- The DF Arena leaderboard will be updated periodically to include new datasets. The submitting team agrees to evaluate and submit scores on these additional datasets if requested, in order to maintain a valid presence on the leaderboard.

The submitting team acknowledges that any violation of the above may result in disqualification of the submission, including removal of the system from the leaderboard and public disclosure of the disqualification on DF Arena’s official communication channels.

Details regarding the evaluation datasets and the URLs / sources used to obtain them are listed below:

- [ASVSpoof2019](https://zenodo.org/records/6906306)
- [ASVSpoof2021LA](https://zenodo.org/records/4837263)
- [ASVSpoof2021DF](https://zenodo.org/records/4837263)
- [ASVSpoof2024-Eval](https://zenodo.org/records/14498691)
- [FakeOrReal](https://bil.eecs.yorku.ca/datasets/)
- [Codecfake (Yuankun et al.)](https://github.com/xieyuankun/Codecfake)
- [ADD2022 Track 1](http://addchallenge.cn/databases2023)
- [ADD2022 Track 3](http://addchallenge.cn/databases2023)
- [ADD2023 R1](http://addchallenge.cn/databases2023)
- [ADD2023 R2](http://addchallenge.cn/databases2023)
- [DFADD](https://github.com/isjwdu/DFADD)
- [LibriVoc](https://github.com/csun22/Synthetic-Voice-Detection-Vocoder-Artifacts)
- [SONAR](https://github.com/Jessegator/SONAR)
- [In The Wild](https://deepfake-total.com/in_the_wild)
"""
    return gr.Markdown(text)
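

# A minimal usage sketch showing how this page might be mounted in a Gradio
# Blocks app. The entry-point name "demo" and the "Submit" tab label are
# illustrative assumptions, not part of the actual leaderboard application.
if __name__ == "__main__":
    with gr.Blocks() as demo:
        with gr.Tab("Submit"):
            # Renders the submission instructions as a Markdown component.
            render_submission_page()
    demo.launch()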