---
title: Tcid
emoji: π
colorFrom: indigo
colorTo: pink
sdk: gradio
sdk_version: 5.38.0
app_file: app.py
pinned: false
short_description: A dashboard
---
# TCID
This space displays the state of the `transformers` CI on two hardware platforms, for a subset of models. The CI runs daily on both AMD MI325 and NVIDIA A10 GPUs, and runs a different number of tests for each model. Each test is assigned a status depending on its outcome:
- passed: the test finished and the expected output (or outputs) were retrieved;
- failed: the test either did not finish or its output differed from the expected output;
- skipped: the test was not run, which usually happens when a test is incompatible with a model. For instance, some models skip `flash-attention`-related tests because they are incompatible with `flash-attention`;
- error: the test did not finish because Python crashed.
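As a rough illustration, the per-model counts for these four statuses could be held in a small structure like the one below. This is a minimal sketch, not the dashboard's actual data model; the `TestReport` name and its fields are hypothetical.

```python
from dataclasses import dataclass

# Hypothetical per-model report; the dashboard's actual data model may differ.
@dataclass
class TestReport:
    passed: int   # finished with the expected output(s)
    failed: int   # finished with a wrong output, or did not finish
    skipped: int  # not run, e.g. the test is incompatible with the model
    error: int    # did not finish because Python crashed

report = TestReport(passed=90, failed=8, skipped=12, error=2)
```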
The dashboard is divided into two main parts:
## Summary page | |
On the summary page, you can see a snapshot of the mix of tests passed, failed, and skipped for each model. The summary page also features an "Overall failure rate" for AMD and NVIDIA, computed as follows:
```
overall_failure_rate = (failed + error) / (passed + failed + error)
```
Skipped tests are not counted in this overall failure rate, because a skipped test can neither pass nor fail.
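A minimal sketch of this computation in Python, assuming the four counts above are available as integers (the function name is illustrative, not part of the dashboard's code):

```python
def overall_failure_rate(passed: int, failed: int, error: int) -> float:
    """Fraction of non-skipped tests that failed or errored."""
    total = passed + failed + error  # skipped tests are excluded entirely
    return (failed + error) / total if total else 0.0

print(overall_failure_rate(passed=90, failed=8, error=2))  # 0.1
```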
## Models page | |
From the sidebar, you can access a detailed view of each model. There, you will find the breakdown of test statuses and the names of the tests that failed, for both single-GPU and multi-GPU runs.
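For example, grouping failed tests by run type could look like the sketch below. The tuple layout, test names, and setup labels are hypothetical placeholders, not the dashboard's real data.

```python
from collections import defaultdict

# Hypothetical raw results as (test_name, gpu_setup, status) tuples;
# the real dashboard may store them differently.
results = [
    ("test_generate", "single-gpu", "passed"),
    ("test_flash_attn", "single-gpu", "skipped"),
    ("test_model_parallelism", "multi-gpu", "failed"),
]

# Group the names of failed or errored tests by single- vs multi-GPU run.
failures_by_setup = defaultdict(list)
for name, setup, status in results:
    if status in ("failed", "error"):
        failures_by_setup[setup].append(name)

print(dict(failures_by_setup))  # {'multi-gpu': ['test_model_parallelism']}
```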