sha | text | id | tags | created_at | metadata | last_modified |
---|---|---|---|---|---|---|
641d2fd9bacfcce2fdfa8c9c586e74fe843d7bef
|
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Question Answering
* Model: autoevaluate/distilbert-base-cased-distilled-squad
* Dataset: autoevaluate/squad-sample
* Config: autoevaluate--squad-sample
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@lewtun](https://huggingface.co/lewtun) for evaluating this model.
|
autoevaluate/autoeval-staging-eval-project-66155224-f2a7-4c5e-94b3-a3683a04175e-2314
|
[
"autotrain",
"evaluation",
"region:us"
] |
2022-08-22T12:04:04+00:00
|
{"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["autoevaluate/squad-sample"], "eval_info": {"task": "extractive_question_answering", "model": "autoevaluate/distilbert-base-cased-distilled-squad", "metrics": [], "dataset_name": "autoevaluate/squad-sample", "dataset_config": "autoevaluate--squad-sample", "dataset_split": "test", "col_mapping": {"context": "context", "question": "question", "answers-text": "answers.text", "answers-answer_start": "answers.answer_start"}}}
|
2022-08-22T12:04:47+00:00
|
d86659a36094de76171db53a8dda513ffa5a838d
|
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Summarization
* Model: autoevaluate/summarization
* Dataset: autoevaluate/xsum-sample
* Config: autoevaluate--xsum-sample
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@lewtun](https://huggingface.co/lewtun) for evaluating this model.
|
autoevaluate/autoeval-staging-eval-project-2dc683ab-6695-42ab-9eff-11dad91952e1-2415
|
[
"autotrain",
"evaluation",
"region:us"
] |
2022-08-22T12:06:50+00:00
|
{"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["autoevaluate/xsum-sample"], "eval_info": {"task": "summarization", "model": "autoevaluate/summarization", "metrics": [], "dataset_name": "autoevaluate/xsum-sample", "dataset_config": "autoevaluate--xsum-sample", "dataset_split": "test", "col_mapping": {"text": "document", "target": "summary"}}}
|
2022-08-22T12:07:28+00:00
|
d2fa13f1968351b546a9a5a89610817d868e1120
|
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Translation
* Model: autoevaluate/translation
* Dataset: autoevaluate/wmt16-ro-en-sample
* Config: autoevaluate--wmt16-ro-en-sample
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@lewtun](https://huggingface.co/lewtun) for evaluating this model.
|
autoevaluate/autoeval-staging-eval-project-8a305641-aedc-4d3a-9609-7f9f9c99c489-2616
|
[
"autotrain",
"evaluation",
"region:us"
] |
2022-08-22T12:24:07+00:00
|
{"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["autoevaluate/wmt16-ro-en-sample"], "eval_info": {"task": "translation", "model": "autoevaluate/translation", "metrics": [], "dataset_name": "autoevaluate/wmt16-ro-en-sample", "dataset_config": "autoevaluate--wmt16-ro-en-sample", "dataset_split": "test", "col_mapping": {"source": "translation.ro", "target": "translation.en"}}}
|
2022-08-22T12:25:10+00:00
|
a51d02dac28333f43f90d7d07753ed6c3c47ede0
|
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Binary Text Classification
* Model: autoevaluate/binary-classification
* Dataset: glue
* Config: sst2
* Split: validation
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@lewtun](https://huggingface.co/lewtun) for evaluating this model.
|
autoevaluate/autoeval-staging-eval-project-0c5b3473-b8bd-4084-ad01-6ee894dddf29-2917
|
[
"autotrain",
"evaluation",
"region:us"
] |
2022-08-22T12:34:59+00:00
|
{"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["glue"], "eval_info": {"task": "binary_classification", "model": "autoevaluate/binary-classification", "metrics": ["matthews_correlation"], "dataset_name": "glue", "dataset_config": "sst2", "dataset_split": "validation", "col_mapping": {"text": "sentence", "target": "label"}}}
|
2022-08-22T12:35:37+00:00
|
9000ce7fabbce934fc7637c7cd4736bf87a616b2
|
# MovieLens User Ratings
This dataset contains ~1M user ratings covering ~10k of the most recent movies from the MovieLens 25M dataset, rated by over 30k unique users. It is built by streaming the MovieLens 25M dataset, filtering for the recent movies, and returning the user ratings for those movies, followed by a few joins and consistency checks. The URLs of the respective movie posters are included.
The dataset is part of an example on [building a movie recommendation engine](https://www.pinecone.io/docs/examples/movie-recommender-system/) with vector search.
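For a quick look at the data, the ratings can be loaded directly from the Hub. This is a minimal sketch; the split name is an assumption and the exact field names may differ from the actual schema:
```python
from datasets import load_dataset

# Load the ratings from the Hugging Face Hub; the "train" split is an assumption.
ratings = load_dataset("pinecone/movielens-recent-ratings", split="train")

# Inspect one rating record (exact field names may differ).
print(ratings[0])
```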
|
pinecone/movielens-recent-ratings
|
[
"annotations_creators:machine-generated",
"language_creators:machine-generated",
"multilinguality:monolingual",
"size_categories:100K<n<1M",
"language:en",
"movielens",
"recommendation",
"collaborative filtering",
"region:us"
] |
2022-08-22T15:42:11+00:00
|
{"annotations_creators": ["machine-generated"], "language_creators": ["machine-generated"], "language": ["en"], "license": [], "multilinguality": ["monolingual"], "size_categories": ["100K<n<1M"], "source_datasets": [], "task_categories": [], "task_ids": [], "pretty_name": "MovieLens User Ratings", "tags": ["movielens", "recommendation", "collaborative filtering"]}
|
2022-08-23T09:00:17+00:00
|
99c0a674b67ae0789547e6475a2f62bad451b09c
|
gradio/transformers-stats-space-data
|
[
"license:mit",
"region:us"
] |
2022-08-22T19:20:24+00:00
|
{"license": "mit"}
|
2022-08-22T19:20:24+00:00
|
|
9742ee01a91a4f9aa3a779cde65ee80e55b95423
|
Yomyom52/sb1
|
[
"region:us"
] |
2022-08-23T10:01:05+00:00
|
{}
|
2022-08-23T10:01:22+00:00
|
|
69a856480564b5ef3e19e201f1ead5882ee3a3b0
|
mehdidn/ner
|
[
"license:other",
"region:us"
] |
2022-08-23T11:02:11+00:00
|
{"license": "other"}
|
2022-08-23T23:22:38+00:00
|
|
82eacf1bde1c93f90df5cc38f3093542ca0e6021
|
# Dataset Card for TexPrax
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage: https://texprax.de/**
- **Repository: https://github.com/UKPLab/TexPrax**
- **Paper: https://arxiv.org/abs/2208.07846**
- **Leaderboard: n/a**
- **Point of Contact: Ji-Ung Lee (http://www.ukp.tu-darmstadt.de/)**
### Dataset Summary
This dataset contains dialogues collected from German factory workers at the _Center for industrial productivity_ ([CiP](https://www.prozesslernfabrik.de/)). The dialogues mostly concern issues workers encounter during their daily work, such as machines breaking down, material missing, etc. The dialogues are further expert-annotated on the sentence level (problem, cause, solution, other) for sentence classification and on the token level for named entity recognition using a BIO tagging scheme. Note that the dataset was collected in three rounds, each around one year apart. Here, we provide the data only split into train and test data, where the test data was collected in the last round (July 2022). Additionally, the data from the first round is split into two subdomains, industry 4.0 (industrie) and machining (zerspanung). The splits were made according to the respective groups of people working at different assembly lines in the factory.
### Supported Tasks and Leaderboards
This dataset supports the following tasks:
* Sentence classification
* Named entity recognition (will be updated soon with the new indexing)
* Dialog generation (so far not evaluated)
### Languages
German
## Dataset Structure
### Data Instances
On sentence level, each instance consists of the dialog-id, turn-id, sentence-id, the sentence (raw), the label, the domain, and the subsplit.
```
{"185";"562";993";"wie kriege ich die Dichtung raus?";"P";"n/a";"3"}
```
On token level, each instance consists of a unique identifier, a list of tokens containing the whole dialog, the list of labels (bio-tagged entities), and the subsplit.
```
{"178_0";"['Hi', 'wie', 'kriege', 'ich', 'die', 'Dichtung', 'raus', '?', 'in', 'der', 'Schublade', 'gibt', 'es', 'einen', 'Dichtungszieher']";"['O', 'O', 'O', 'O', 'O', 'B-PRE', 'O', 'O', 'O', 'O', 'B-LOC', 'O', 'O', 'O', 'B-PE']";"Batch 3"}
```
### Data Fields
Sentence level:
* dialog-id: unique identifier for the dialog
* turn-id: unique identifier for the turn
* sentence-id: unique identifier for the sentence
* sentence: the respective sentence
* label: the label (_P_ for Problem, _C_ for Cause, _S_ for Solution, and _O_ for Other)
* domain: the subdomains where the data was collected from. Domains are industry, machining, or n/a (for batch 2 and batch 3).
* subsplit: the respective subsplit of the data (see below)
Token level:
* id: the identifier
* tokens: a list of tokens (i.e., the tokenized dialogue)
* entities: the named entity labels in the BIO scheme (_B-X_, _I-X_, or _O_)
* subsplit: the respective subsplit of the data (see below)
### Data Splits
The dataset is split into train and test splits, but contains further subsplits (subsplit column). Note that the splits were collected at different times with some turnover in the workforce. Hence, later data (especially the data from batch 2) contains more turns (due to an increased search for causes), as more inexperienced workers who had newly joined were employed in the factory.
Train:
* Batch 1 industrie: data collected in October 2020 from workers in the industry 4.0 assembly line
* Batch 1 zerspanung: data collected in October 2020 from workers in the machining assembly line
* Batch 2: data collected in-between October 2021-June 2022 from all workers
Test:
* Batch 3: data collected in July 2022 together with the system usability study run
Sentence level statistics:
| Batch | Dialogues | Turns | Sentences |
|---|---|---|---|
| 1 | 81 | 246 | 553 |
| 2 | 97 | 309 | 432 |
| 3 | 24 | 36 | 42 |
| Overall | 202 | 591 | 1,027 |
Token level statistics:
[Needs to be added]
## Dataset Creation
### Curation Rationale
This dataset provides task-oriented dialogues that address a very domain-specific problem.
### Source Data
#### Initial Data Collection and Normalization
The data was generated by workers at the [CiP](https://www.prozesslernfabrik.de/). The data was collected in three rounds (October 2020, October 2021-June 2022, July 2022). As the dialogues occurred during their daily work, one distinct property of the dataset is that all dialogues are very informal (e.g., 'ne'), contain abbreviations (e.g., 'vll'), and include filler words such as 'ah'. For a detailed description, please see the [paper](https://arxiv.org/abs/2208.07846).
#### Who are the source language producers?
German factory workers working at the [CiP](https://www.prozesslernfabrik.de/)
### Annotations
#### Annotation process
**Token level.** Token level annotation was done by researchers who are responsible for supervising and teaching workers at the CiP. The data was first split into three parts, each annotated by one researcher. Next, each researcher cross-examined the other researchers' annotations. If there were disagreements, all three researchers discussed the final label.
**Sentence level.** Sentence level annotations were collected from the factory workers who also generated the dialogues. For details about the data collection, please see the [TexPrax demo paper](https://arxiv.org/abs/2208.07846).
#### Who are the annotators?
**Token level.** Researchers working at the CiP.
**Sentence level.** The factory workers themselves.
### Personal and Sensitive Information
This dataset is fully anonymized. All occurrences of names have been manually checked during annotation and replaced with a random token.
## Considerations for Using the Data
### Social Impact of Dataset
Informal language, especially as used in short messages, is seldom considered in existing NLP datasets. This dataset could serve as an interesting evaluation task for transferring language models to low-resource but highly specific domains. Moreover, we note that despite all the abbreviations, typos, and local dialects used in the messages, all workers were able to understand the questions as well as the replies. This is a standard that future NLP models should be able to uphold.
### Discussion of Biases
The dialogues are very much on a professional level. The workers were informed (and gave their consent) in advance that their messages are being recorded and processed, which may have influenced them to hold only professional conversations, hence, all dialogues concern inanimate objects (i.e., machines).
### Other Known Limitations
[More Information Needed]
## Additional Information
You can download the data via:
```python
from datasets import load_dataset

dataset = load_dataset("UKPLab/TexPrax")        # default config: sentence classification
dataset = load_dataset("UKPLab/TexPrax", "ner") # "ner" config: named entity recognition
```
Please find more information about the code and how the data was collected on [GitHub](https://github.com/UKPLab/TexPrax).
### Dataset Curators
Curation is managed by our [data manager](https://www.informatik.tu-darmstadt.de/ukp/research_ukp/ukp_research_data_and_software/ukp_data_and_software.en.jsp) at UKP.
### Licensing Information
[CC-by-NC 4.0](https://creativecommons.org/licenses/by-nc/4.0/)
### Citation Information
Please cite this data using:
```
@article{stangier2022texprax,
title={TexPrax: A Messaging Application for Ethical, Real-time Data Collection and Annotation},
author={Stangier, Lorenz and Lee, Ji-Ung and Wang, Yuxi and M{\"u}ller, Marvin and Frick, Nicholas and Metternich, Joachim and Gurevych, Iryna},
journal={arXiv preprint arXiv:2208.07846},
year={2022}
}
```
### Contributions
Thanks to [@Wuhn](https://github.com/Wuhn) for adding this dataset.
## Tags
annotations_creators:
- expert-generated
language:
- de
language_creators:
- expert-generated
license:
- cc-by-nc-4.0
multilinguality:
- monolingual
pretty_name: TexPrax-Conversations
size_categories:
- n<1K
- 1K<n<10K
source_datasets:
- original
tags:
- dialog
- expert to expert conversations
- task-oriented
task_categories:
- token-classification
- text-classification
task_ids:
- named-entity-recognition
- multi-class-classification
|
UKPLab/TexPrax
|
[
"license:cc-by-nc-4.0",
"arxiv:2208.07846",
"region:us"
] |
2022-08-23T11:03:20+00:00
|
{"license": "cc-by-nc-4.0"}
|
2023-01-11T14:40:21+00:00
|
0ceebe0b11b8c2e0ccbe11b33c8b13530843ef2e
|
# Dataset Card for Audio Keyword Spotting
## Table of Contents
- [Table of Contents](#table-of-contents)
## Dataset Description
- **Homepage:** https://sil.ai.org
- **Point of Contact:** [SIL AI email](mailto:idx_aqua@sil.org)
- **Source Data:** [MLCommons/ml_spoken_words](https://huggingface.co/datasets/MLCommons/ml_spoken_words), [trabina GitHub](https://github.com/wswu/trabina)

## Dataset Summary
The initial version of this dataset is a subset of [MLCommons/ml_spoken_words](https://huggingface.co/datasets/MLCommons/ml_spoken_words), which is derived from Common Voice, designed for easier loading. Specifically, the subset consists of `ml_spoken_words` files filtered by the names and placenames transliterated in Bible translations, as found in [trabina](https://github.com/wswu/trabina). For our initial experiment, we have focused only on English, Spanish, and Indonesian, three languages whose name spellings are frequently used in other translations. We anticipate growing this dataset in the future to include additional keywords and other languages as the experiment progresses.
### Data Fields
* file: string, the relative audio path inside the archive
* is_valid: whether a sample is valid
* language: language of an instance
* speaker_id: unique id of a speaker; can be "NA" if an instance is invalid
* gender: speaker gender. Can be one of `["MALE", "FEMALE", "OTHER", "NAN"]`
* keyword: the word spoken in the current sample
* audio: a dictionary containing the relative path to the audio file,
the decoded audio array, and the sampling rate.
Note that when accessing the audio column: `dataset[0]["audio"]` the audio file is automatically
decoded and resampled to `dataset.features["audio"].sampling_rate`. Decoding and resampling of
a large number of audio files might take a significant amount of time.
Thus, it is important to first query the sample index before the `"audio"` column, i.e., `dataset[0]["audio"]` should always be preferred over `dataset["audio"][0]`.
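A short sketch of the preferred access pattern follows; the config name `"en"` is an assumption for illustration:
```python
from datasets import load_dataset

# A language config may be required; "en" is an assumption for illustration.
ds = load_dataset("sil-ai/audio-keyword-spotting", "en", split="train")

# Preferred: index the example first, so only this one file is decoded.
sample = ds[0]["audio"]
print(sample["sampling_rate"], len(sample["array"]))
```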
### Data Splits
The data for each language is split into train / validation / test parts.
## Supported Tasks
Keyword spotting and spoken term search
### Personal and Sensitive Information
The dataset consists of people who have donated their voice online.
You agree to not attempt to determine the identity of speakers.
### Licensing Information
The dataset is licensed under [CC-BY 4.0](https://creativecommons.org/licenses/by/4.0/) and can be used for academic
research and commercial applications in keyword spotting and spoken term search.
|
sil-ai/audio-keyword-spotting
|
[
"task_categories:automatic-speech-recognition",
"annotations_creators:machine-generated",
"language_creators:other",
"multilinguality:multilingual",
"source_datasets:extended|common_voice",
"source_datasets:MLCommons/ml_spoken_words",
"language:eng",
"language:en",
"language:spa",
"language:es",
"language:ind",
"language:id",
"license:cc-by-4.0",
"other-keyword-spotting",
"region:us"
] |
2022-08-23T12:36:51+00:00
|
{"annotations_creators": ["machine-generated"], "language_creators": ["other"], "language": ["eng", "en", "spa", "es", "ind", "id"], "license": "cc-by-4.0", "multilinguality": ["multilingual"], "source_datasets": ["extended|common_voice", "MLCommons/ml_spoken_words"], "task_categories": ["automatic-speech-recognition"], "task_ids": [], "pretty_name": "Audio Keyword Spotting", "tags": ["other-keyword-spotting"]}
|
2023-07-24T17:08:02+00:00
|
8359df330efa22f5f856aba4b0c307ecdaf691e3
|
### Dataset Summary
Input data for the **first** phase of BERT pretraining (sequence length 128). All text is tokenized with [bert-base-uncased](https://huggingface.co/bert-base-uncased) tokenizer.
Data is obtained by concatenating and shuffling [wikipedia](https://huggingface.co/datasets/wikipedia) (split: `20220301.en`) and [bookcorpusopen](https://huggingface.co/datasets/bookcorpusopen) datasets and running [reference BERT data preprocessor](https://github.com/google-research/bert/blob/master/create_pretraining_data.py) without masking and input duplication (`dupe_factor = 1`). Documents are split into sentences with the [NLTK](https://www.nltk.org/) sentence tokenizer (`nltk.tokenize.sent_tokenize`).
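As a rough illustration of that preprocessing (a sketch only, not the exact script; the reference preprocessor additionally packs sentences into fixed-length sequences and creates sentence pairs):
```python
import nltk
from transformers import AutoTokenizer

nltk.download("punkt")
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")

document = "BERT is pretrained in two phases. Phase one uses sequence length 128."

# Split the document into sentences, then tokenize with the BERT tokenizer.
for sentence in nltk.tokenize.sent_tokenize(document):
    print(tokenizer.tokenize(sentence))
```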
See the dataset for the **second** phase of pretraining: [bert_pretrain_phase2](https://huggingface.co/datasets/and111/bert_pretrain_phase2).
|
and111/bert_pretrain_phase1
|
[
"region:us"
] |
2022-08-23T12:51:03+00:00
|
{}
|
2022-08-23T16:14:31+00:00
|
18841ce4c41a94aaed0041342c6a7cb0c59cfcfe
|
# Dataset Card for Collection3
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
## Dataset Description
- **Homepage:** [Collection3 homepage](http://labinform.ru/pub/named_entities/index.htm)
- **Repository:** [Needs More Information]
- **Paper:** [Two-stage approach in Russian named entity recognition](https://ieeexplore.ieee.org/document/7584769)
- **Leaderboard:** [Needs More Information]
- **Point of Contact:** [Needs More Information]
### Dataset Summary
Collection3 is a Russian dataset for named entity recognition annotated with LOC (location), PER (person), and ORG (organization) tags. The dataset is based on the collection [Persons-1000](http://ai-center.botik.ru/Airec/index.php/ru/collections/28-persons-1000), which originally contained 1000 news documents labeled only with names of persons.
Additional labels were obtained using guidelines similar to MUC-7 with the web-based tool [Brat](http://brat.nlplab.org/) for collaborative text annotation.
Currently the dataset contains 26K annotated named entities (11K persons, 7K locations, and 8K organizations).
Conversion to the IOB2 format and splitting into train, validation and test sets was done by [DeepPavlov team](http://files.deeppavlov.ai/deeppavlov_data/collection3_v2.tar.gz).
### Supported Tasks and Leaderboards
[Needs More Information]
### Languages
Russian
## Dataset Structure
### Data Instances
An example of 'train' looks as follows.
```
{
"id": "851",
"ner_tags": [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 3, 1, 2, 0, 0, 0],
"tokens": ['Главный', 'архитектор', 'программного', 'обеспечения', '(', 'ПО', ')', 'американского', 'высокотехнологичного', 'гиганта', 'Microsoft', 'Рэй', 'Оззи', 'покидает', 'компанию', '.']
}
```
### Data Fields
- id: a string feature.
- tokens: a list of string features.
- ner_tags: a list of classification labels (int). Full tagset with indices:
```
{'O': 0, 'B-PER': 1, 'I-PER': 2, 'B-ORG': 3, 'I-ORG': 4, 'B-LOC': 5, 'I-LOC': 6}
```
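For illustration, a minimal sketch of loading the dataset and decoding the integer tags back to tag names (this assumes `ner_tags` is exposed as a `ClassLabel` sequence, as the repository metadata below indicates):
```python
from datasets import load_dataset

# Load the train split of Collection3.
ds = load_dataset("RCC-MSU/collection3", split="train")

# The ClassLabel feature maps tag indices back to tag names.
tag_names = ds.features["ner_tags"].feature.names  # ['O', 'B-PER', ...]

example = ds[0]
for token, tag_id in zip(example["tokens"], example["ner_tags"]):
    print(token, tag_names[tag_id])
```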
### Data Splits
|name|train|validation|test|
|---------|----:|---------:|---:|
|Collection3|9301|2153|1922|
## Dataset Creation
### Curation Rationale
[Needs More Information]
### Source Data
#### Initial Data Collection and Normalization
[Needs More Information]
#### Who are the source language producers?
[Needs More Information]
### Annotations
#### Annotation process
[Needs More Information]
#### Who are the annotators?
[Needs More Information]
### Personal and Sensitive Information
[Needs More Information]
## Considerations for Using the Data
### Social Impact of Dataset
[Needs More Information]
### Discussion of Biases
[Needs More Information]
### Other Known Limitations
[Needs More Information]
## Additional Information
### Dataset Curators
[Needs More Information]
### Licensing Information
[Needs More Information]
### Citation Information
```
@inproceedings{mozharova-loukachevitch-2016-two-stage-russian-ner,
author={Mozharova, Valerie and Loukachevitch, Natalia},
booktitle={2016 International FRUCT Conference on Intelligence, Social Media and Web (ISMW FRUCT)},
title={Two-stage approach in Russian named entity recognition},
year={2016},
pages={1-6},
doi={10.1109/FRUCT.2016.7584769}}
```
|
RCC-MSU/collection3
|
[
"task_categories:token-classification",
"task_ids:named-entity-recognition",
"annotations_creators:other",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"language:ru",
"license:other",
"region:us"
] |
2022-08-23T13:03:02+00:00
|
{"annotations_creators": ["other"], "language_creators": ["found"], "language": ["ru"], "license": ["other"], "multilinguality": ["monolingual"], "size_categories": ["10K<n<100K"], "source_datasets": [], "task_categories": ["token-classification"], "task_ids": ["named-entity-recognition"], "pretty_name": "Collection3", "tags": [], "dataset_info": {"features": [{"name": "id", "dtype": "string"}, {"name": "tokens", "sequence": "string"}, {"name": "ner_tags", "sequence": {"class_label": {"names": {"0": "O", "1": "B-PER", "2": "I-PER", "3": "B-ORG", "4": "I-ORG", "5": "B-LOC", "6": "I-LOC"}}}}], "splits": [{"name": "test", "num_bytes": 935298, "num_examples": 1922}, {"name": "train", "num_bytes": 4380588, "num_examples": 9301}, {"name": "validation", "num_bytes": 1020711, "num_examples": 2153}], "download_size": 878777, "dataset_size": 6336597}}
|
2023-01-31T09:47:58+00:00
|
1a5c9e376174dae432c38636a90aafb600204ecd
|
### Dataset Summary
Input data for the **second** phase of BERT pretraining (sequence length 512). All text is tokenized with [bert-base-uncased](https://huggingface.co/bert-base-uncased) tokenizer.
Data is obtained by concatenating and shuffling [wikipedia](https://huggingface.co/datasets/wikipedia) (split: `20220301.en`) and [bookcorpusopen](https://huggingface.co/datasets/bookcorpusopen) datasets and running [reference BERT data preprocessor](https://github.com/google-research/bert/blob/master/create_pretraining_data.py) without masking and input duplication (`dupe_factor = 1`). Documents are split into sentences with the [NLTK](https://www.nltk.org/) sentence tokenizer (`nltk.tokenize.sent_tokenize`).
See the dataset for the **first** phase of pretraining: [bert_pretrain_phase1](https://huggingface.co/datasets/and111/bert_pretrain_phase1).
|
and111/bert_pretrain_phase2
|
[
"region:us"
] |
2022-08-23T13:17:50+00:00
|
{}
|
2022-08-24T13:01:12+00:00
|
e97515e0046d6edb35a7e3e236e7f898bf0b3222
|
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Question Answering
* Model: Graphcore/deberta-base-squad
* Dataset: squad
* Config: plain_text
* Split: validation
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@nbroad](https://huggingface.co/nbroad) for evaluating this model.
|
autoevaluate/autoeval-eval-project-squad-3b1fb479-1302649847
|
[
"autotrain",
"evaluation",
"region:us"
] |
2022-08-23T13:35:09+00:00
|
{"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["squad"], "eval_info": {"task": "extractive_question_answering", "model": "Graphcore/deberta-base-squad", "metrics": [], "dataset_name": "squad", "dataset_config": "plain_text", "dataset_split": "validation", "col_mapping": {"context": "context", "question": "question", "answers-text": "answers.text", "answers-answer_start": "answers.answer_start"}}}
|
2022-08-23T13:38:28+00:00
|
b1aa7d48bd28bf611cb1e24ebdacd4943790a24f
|
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Summarization
* Model: yuvraj/summarizer-cnndm
* Dataset: sepidmnorozy/Urdu_sentiment
* Config: sepidmnorozy--Urdu_sentiment
* Split: train
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@mwz](https://huggingface.co/mwz) for evaluating this model.
|
autoevaluate/autoeval-eval-project-sepidmnorozy__Urdu_sentiment-559fc5f8-1302749848
|
[
"autotrain",
"evaluation",
"region:us"
] |
2022-08-23T13:57:21+00:00
|
{"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["sepidmnorozy/Urdu_sentiment"], "eval_info": {"task": "summarization", "model": "yuvraj/summarizer-cnndm", "metrics": ["accuracy"], "dataset_name": "sepidmnorozy/Urdu_sentiment", "dataset_config": "sepidmnorozy--Urdu_sentiment", "dataset_split": "train", "col_mapping": {"text": "text", "target": "label"}}}
|
2022-08-23T13:58:02+00:00
|
8024ae5e1f3ba083cbfca1e9b4499f4b38ff7b11
|
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Question Answering
* Model: nbroad/rob-base-superqa2
* Dataset: squad_v2
* Config: squad_v2
* Split: validation
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@nbroad](https://huggingface.co/nbroad) for evaluating this model.
|
autoevaluate/autoeval-eval-project-squad_v2-7b0e814c-1303349869
|
[
"autotrain",
"evaluation",
"region:us"
] |
2022-08-23T15:36:10+00:00
|
{"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["squad_v2"], "eval_info": {"task": "extractive_question_answering", "model": "nbroad/rob-base-superqa2", "metrics": [], "dataset_name": "squad_v2", "dataset_config": "squad_v2", "dataset_split": "validation", "col_mapping": {"context": "context", "question": "question", "answers-text": "answers.text", "answers-answer_start": "answers.answer_start"}}}
|
2022-08-23T15:38:54+00:00
|
ddd3894523954e4a2487931093cccd4a6ea182f4
|
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Question Answering
* Model: nbroad/rob-base-superqa2
* Dataset: adversarial_qa
* Config: adversarialQA
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@nbroad](https://huggingface.co/nbroad) for evaluating this model.
|
autoevaluate/autoeval-eval-project-adversarial_qa-92a1abad-1303449870
|
[
"autotrain",
"evaluation",
"region:us"
] |
2022-08-23T15:38:02+00:00
|
{"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["adversarial_qa"], "eval_info": {"task": "extractive_question_answering", "model": "nbroad/rob-base-superqa2", "metrics": [], "dataset_name": "adversarial_qa", "dataset_config": "adversarialQA", "dataset_split": "test", "col_mapping": {"context": "context", "question": "question", "answers-text": "answers.text", "answers-answer_start": "answers.answer_start"}}}
|
2022-08-23T15:39:03+00:00
|
4bb6b28f832a1118230451a2e98dfaab9409235f
|
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Question Answering
* Model: nbroad/rob-base-superqa2
* Dataset: adversarial_qa
* Config: adversarialQA
* Split: validation
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@nbroad](https://huggingface.co/nbroad) for evaluating this model.
|
autoevaluate/autoeval-eval-project-adversarial_qa-0243fffc-1303549871
|
[
"autotrain",
"evaluation",
"region:us"
] |
2022-08-23T15:49:07+00:00
|
{"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["adversarial_qa"], "eval_info": {"task": "extractive_question_answering", "model": "nbroad/rob-base-superqa2", "metrics": [], "dataset_name": "adversarial_qa", "dataset_config": "adversarialQA", "dataset_split": "validation", "col_mapping": {"context": "context", "question": "question", "answers-text": "answers.text", "answers-answer_start": "answers.answer_start"}}}
|
2022-08-23T15:50:06+00:00
|
86181b5c13aff9667b5513999aaf83d2747e49f8
|
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Question Answering
* Model: nbroad/rob-base-superqa2
* Dataset: squad
* Config: plain_text
* Split: validation
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@nbroad](https://huggingface.co/nbroad) for evaluating this model.
|
autoevaluate/autoeval-eval-project-squad-1eddc82e-1303649872
|
[
"autotrain",
"evaluation",
"region:us"
] |
2022-08-23T15:53:40+00:00
|
{"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["squad"], "eval_info": {"task": "extractive_question_answering", "model": "nbroad/rob-base-superqa2", "metrics": [], "dataset_name": "squad", "dataset_config": "plain_text", "dataset_split": "validation", "col_mapping": {"context": "context", "question": "question", "answers-text": "answers.text", "answers-answer_start": "answers.answer_start"}}}
|
2022-08-23T15:56:08+00:00
|
8f5518a06e4ace72e5a8e25399e30cd2c21dae81
|
cakiki/abc
|
[
"license:cc-by-4.0",
"region:us"
] |
2022-08-23T20:01:13+00:00
|
{"license": "cc-by-4.0"}
|
2022-08-23T20:08:54+00:00
|
|
0ae49250e4884b552f29252e529d01c77029581f
|
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Question Answering
* Model: nbroad/rob-base-gc1
* Dataset: squad_v2
* Config: squad_v2
* Split: validation
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@nbroad](https://huggingface.co/nbroad) for evaluating this model.
|
autoevaluate/autoeval-eval-project-squad_v2-4a3c5c8d-1305249893
|
[
"autotrain",
"evaluation",
"region:us"
] |
2022-08-23T20:05:10+00:00
|
{"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["squad_v2"], "eval_info": {"task": "extractive_question_answering", "model": "nbroad/rob-base-gc1", "metrics": [], "dataset_name": "squad_v2", "dataset_config": "squad_v2", "dataset_split": "validation", "col_mapping": {"context": "context", "question": "question", "answers-text": "answers.text", "answers-answer_start": "answers.answer_start"}}}
|
2022-08-23T20:07:54+00:00
|
9d29ec3eb036547043efdbef5aeafa474f678f0e
|
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Question Answering
* Model: nbroad/deb-base-gc2
* Dataset: squad_v2
* Config: squad_v2
* Split: validation
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@nbroad](https://huggingface.co/nbroad) for evaluating this model.
|
autoevaluate/autoeval-eval-project-squad_v2-4a3c5c8d-1305249894
|
[
"autotrain",
"evaluation",
"region:us"
] |
2022-08-23T20:05:16+00:00
|
{"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["squad_v2"], "eval_info": {"task": "extractive_question_answering", "model": "nbroad/deb-base-gc2", "metrics": [], "dataset_name": "squad_v2", "dataset_config": "squad_v2", "dataset_split": "validation", "col_mapping": {"context": "context", "question": "question", "answers-text": "answers.text", "answers-answer_start": "answers.answer_start"}}}
|
2022-08-23T20:08:47+00:00
|
32c7f6b18f236793540e2161d62b9a722e0bf5d5
|
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Question Answering
* Model: nbroad/rob-base-gc1
* Dataset: adversarial_qa
* Config: adversarialQA
* Split: validation
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@nbroad](https://huggingface.co/nbroad) for evaluating this model.
|
autoevaluate/autoeval-eval-project-adversarial_qa-7ab9b963-1305349895
|
[
"autotrain",
"evaluation",
"region:us"
] |
2022-08-23T20:05:34+00:00
|
{"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["adversarial_qa"], "eval_info": {"task": "extractive_question_answering", "model": "nbroad/rob-base-gc1", "metrics": [], "dataset_name": "adversarial_qa", "dataset_config": "adversarialQA", "dataset_split": "validation", "col_mapping": {"context": "context", "question": "question", "answers-text": "answers.text", "answers-answer_start": "answers.answer_start"}}}
|
2022-08-23T20:06:32+00:00
|
c0672e0447fc2813a905c6d33718bea35650baa2
|
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Question Answering
* Model: nbroad/deb-base-gc2
* Dataset: adversarial_qa
* Config: adversarialQA
* Split: validation
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@nbroad](https://huggingface.co/nbroad) for evaluating this model.
|
autoevaluate/autoeval-eval-project-adversarial_qa-7ab9b963-1305349896
|
[
"autotrain",
"evaluation",
"region:us"
] |
2022-08-23T20:05:39+00:00
|
{"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["adversarial_qa"], "eval_info": {"task": "extractive_question_answering", "model": "nbroad/deb-base-gc2", "metrics": [], "dataset_name": "adversarial_qa", "dataset_config": "adversarialQA", "dataset_split": "validation", "col_mapping": {"context": "context", "question": "question", "answers-text": "answers.text", "answers-answer_start": "answers.answer_start"}}}
|
2022-08-23T20:06:52+00:00
|
2c955c42d1e82b3e62b2f42b8639aa1d17be323a
|
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Question Answering
* Model: nbroad/rob-base-gc1
* Dataset: quoref
* Config: default
* Split: validation
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@nbroad](https://huggingface.co/nbroad) for evaluating this model.
|
autoevaluate/autoeval-eval-project-quoref-bbfe943f-1305449897
|
[
"autotrain",
"evaluation",
"region:us"
] |
2022-08-23T20:06:57+00:00
|
{"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["quoref"], "eval_info": {"task": "extractive_question_answering", "model": "nbroad/rob-base-gc1", "metrics": [], "dataset_name": "quoref", "dataset_config": "default", "dataset_split": "validation", "col_mapping": {"context": "context", "question": "question", "answers-text": "answers.text", "answers-answer_start": "answers.answer_start"}}}
|
2022-08-23T20:08:05+00:00
|
adbb98bfc272bb274f22f4c978a4bce3607b3597
|
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Question Answering
* Model: nbroad/deb-base-gc2
* Dataset: quoref
* Config: default
* Split: validation
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@nbroad](https://huggingface.co/nbroad) for evaluating this model.
|
autoevaluate/autoeval-eval-project-quoref-bbfe943f-1305449898
|
[
"autotrain",
"evaluation",
"region:us"
] |
2022-08-23T20:07:03+00:00
|
{"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["quoref"], "eval_info": {"task": "extractive_question_answering", "model": "nbroad/deb-base-gc2", "metrics": [], "dataset_name": "quoref", "dataset_config": "default", "dataset_split": "validation", "col_mapping": {"context": "context", "question": "question", "answers-text": "answers.text", "answers-answer_start": "answers.answer_start"}}}
|
2022-08-23T20:08:26+00:00
|
75eff2931ed9963c2996d7744a83db02453b4e54
|
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Question Answering
* Model: nbroad/rob-base-superqa1
* Dataset: squad_v2
* Config: squad_v2
* Split: validation
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@nbroad](https://huggingface.co/nbroad) for evaluating this model.
|
autoevaluate/autoeval-eval-project-squad_v2-1e2c143e-1305549899
|
[
"autotrain",
"evaluation",
"region:us"
] |
2022-08-23T20:17:15+00:00
|
{"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["squad_v2"], "eval_info": {"task": "extractive_question_answering", "model": "nbroad/rob-base-superqa1", "metrics": [], "dataset_name": "squad_v2", "dataset_config": "squad_v2", "dataset_split": "validation", "col_mapping": {"context": "context", "question": "question", "answers-text": "answers.text", "answers-answer_start": "answers.answer_start"}}}
|
2022-08-23T20:20:07+00:00
|
c66053954b69c9ab189d13ae97c0106e6d162ebe
|
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Question Answering
* Model: nbroad/rob-base-superqa1
* Dataset: adversarial_qa
* Config: adversarialQA
* Split: validation
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@nbroad](https://huggingface.co/nbroad) for evaluating this model.
|
autoevaluate/autoeval-eval-project-adversarial_qa-b21f20c3-1305649900
|
[
"autotrain",
"evaluation",
"region:us"
] |
2022-08-23T20:17:44+00:00
|
{"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["adversarial_qa"], "eval_info": {"task": "extractive_question_answering", "model": "nbroad/rob-base-superqa1", "metrics": [], "dataset_name": "adversarial_qa", "dataset_config": "adversarialQA", "dataset_split": "validation", "col_mapping": {"context": "context", "question": "question", "answers-text": "answers.text", "answers-answer_start": "answers.answer_start"}}}
|
2022-08-23T20:18:46+00:00
|
335a5dd4efdc8cc6250a3c6f4a72c336f039f91e
|
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Question Answering
* Model: nbroad/rob-base-superqa1
* Dataset: quoref
* Config: default
* Split: validation
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@nbroad](https://huggingface.co/nbroad) for evaluating this model.
|
autoevaluate/autoeval-eval-project-quoref-9c01ff03-1305849901
|
[
"autotrain",
"evaluation",
"region:us"
] |
2022-08-23T20:22:28+00:00
|
{"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["quoref"], "eval_info": {"task": "extractive_question_answering", "model": "nbroad/rob-base-superqa1", "metrics": [], "dataset_name": "quoref", "dataset_config": "default", "dataset_split": "validation", "col_mapping": {"context": "context", "question": "question", "answers-text": "answers.text", "answers-answer_start": "answers.answer_start"}}}
|
2022-08-23T20:42:05+00:00
|
59b17e6ed36b643b608da2d1e2fe8827278c2459
|
Wiki_dialog dataset with inpainting (MLM) applied to the dialogs; see Section 2.1 in the paper: https://arxiv.org/abs/2205.09073
Source dataset: https://huggingface.co/datasets/djaym7/wiki_dialog
Access using:
```python
import datasets

dataset = datasets.load_dataset('djaym7/wiki_dialog_mlm', 'OQ', beam_runner='DirectRunner')
```
|
djaym7/wiki_dialog_mlm
|
[
"license:apache-2.0",
"arxiv:2205.09073",
"region:us"
] |
2022-08-23T21:18:15+00:00
|
{"license": "apache-2.0"}
|
2022-08-23T21:23:32+00:00
|
07d3d059cbdce2156e917dfbc63d43f068f9efdb
|
# Dataset Card for librispeech_asr
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [LibriSpeech ASR corpus](http://www.openslr.org/12)
- **Repository:** [Needs More Information]
- **Paper:** [LibriSpeech: An ASR Corpus Based On Public Domain Audio Books](https://www.danielpovey.com/files/2015_icassp_librispeech.pdf)
- **Leaderboard:** [The 🤗 Speech Bench](https://huggingface.co/spaces/huggingface/hf-speech-bench)
- **Point of Contact:** [Daniel Povey](mailto:dpovey@gmail.com)
### Dataset Summary
LibriSpeech is a corpus of approximately 1000 hours of 16kHz read English speech, prepared by Vassil Panayotov with the assistance of Daniel Povey. The data is derived from read audiobooks from the LibriVox project, and has been carefully segmented and aligned.
### Supported Tasks and Leaderboards
- `automatic-speech-recognition`, `audio-speaker-identification`: The dataset can be used to train a model for Automatic Speech Recognition (ASR). The model is presented with an audio file and asked to transcribe the audio file to written text. The most common evaluation metric is the word error rate (WER). The task has an active Hugging Face leaderboard which can be found at https://huggingface.co/spaces/huggingface/hf-speech-bench. The leaderboard ranks models uploaded to the Hub based on their WER. An external leaderboard at https://paperswithcode.com/sota/speech-recognition-on-librispeech-test-clean ranks the latest models from research and academia.
### Languages
The audio is in English. There are two configurations: `clean` and `other`.
The speakers in the corpus were ranked according to the WER of the transcripts of a model trained on a different dataset, and were divided roughly in the middle, with the lower-WER speakers designated as "clean" and the higher-WER speakers designated as "other".
## Dataset Structure
### Data Instances
A typical data point comprises the path to the audio file, usually called `file` and its transcription, called `text`. Some additional information about the speaker and the passage which contains the transcription is provided.
```
{'chapter_id': 141231,
'file': '/home/siddhant/.cache/huggingface/datasets/downloads/extracted/b7ded9969e09942ab65313e691e6fc2e12066192ee8527e21d634aca128afbe2/dev_clean/1272/141231/1272-141231-0000.flac',
'audio': {'path': '/home/siddhant/.cache/huggingface/datasets/downloads/extracted/b7ded9969e09942ab65313e691e6fc2e12066192ee8527e21d634aca128afbe2/dev_clean/1272/141231/1272-141231-0000.flac',
'array': array([-0.00048828, -0.00018311, -0.00137329, ..., 0.00079346,
0.00091553, 0.00085449], dtype=float32),
'sampling_rate': 16000},
'id': '1272-141231-0000',
'speaker_id': 1272,
'text': 'A MAN SAID TO THE UNIVERSE SIR I EXIST'}
```
### Data Fields
- file: A path to the downloaded audio file in .flac format.
- audio: A dictionary containing the path to the downloaded audio file, the decoded audio array, and the sampling rate. Note that when accessing the audio column: `dataset[0]["audio"]` the audio file is automatically decoded and resampled to `dataset.features["audio"].sampling_rate`. Decoding and resampling of a large number of audio files might take a significant amount of time. Thus it is important to first query the sample index before the `"audio"` column, *i.e.* `dataset[0]["audio"]` should **always** be preferred over `dataset["audio"][0]` (see the sketch after this list).
- text: the transcription of the audio file.
- id: unique id of the data sample.
- speaker_id: unique id of the speaker. The same speaker id can be found for multiple data samples.
- chapter_id: id of the audiobook chapter which includes the transcription.
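A minimal sketch of this access pattern on the canonical `librispeech_asr` dataset (streaming is used here to avoid downloading the full archive):
```python
from datasets import load_dataset

# Stream the "clean" validation split to avoid a full download.
ds = load_dataset("librispeech_asr", "clean", split="validation", streaming=True)

sample = next(iter(ds))
print(sample["text"])                    # transcription
print(sample["audio"]["sampling_rate"])  # 16000
```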
### Data Splits
The size of the corpus makes it impractical, or at least inconvenient for some users, to distribute it as a single large archive. Thus the training portion of the corpus is split into three subsets, with approximate sizes of 100, 360, and 500 hours respectively. A simple automatic procedure was used to select the audio in the first two sets to be, on average, of higher recording quality and with accents closer to US English. An acoustic model was trained on WSJ’s si-84 data subset and was used to recognize the audio in the corpus, using a bigram LM estimated on the text of the respective books. We computed the Word Error Rate (WER) of this automatic transcript relative to our reference transcripts obtained from the book texts.
The speakers in the corpus were ranked according to the WER of the WSJ model’s transcripts, and were divided roughly in the middle, with the lower-WER speakers designated as "clean" and the higher-WER speakers designated as "other".
For "clean", the data is split into train, validation, and test sets. The train set is further split into train.100 and train.360, accounting for 100h and 360h of the training data respectively.
For "other", the data is split into train, validation, and test sets. The train set contains approximately 500h of recorded speech.
| | Train.500 | Train.360 | Train.100 | Valid | Test |
| ----- | ------ | ----- | ---- | ---- | ---- |
| clean | - | 104014 | 28539 | 2703 | 2620|
| other | 148688 | - | - | 2864 | 2939 |
## Dataset Creation
### Curation Rationale
[Needs More Information]
### Source Data
#### Initial Data Collection and Normalization
[Needs More Information]
#### Who are the source language producers?
[Needs More Information]
### Annotations
#### Annotation process
[Needs More Information]
#### Who are the annotators?
[Needs More Information]
### Personal and Sensitive Information
The dataset consists of people who have donated their voice online. You agree to not attempt to determine the identity of speakers in this dataset.
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[Needs More Information]
## Additional Information
### Dataset Curators
The dataset was initially created by Vassil Panayotov, Guoguo Chen, Daniel Povey, and Sanjeev Khudanpur.
### Licensing Information
[CC BY 4.0](https://creativecommons.org/licenses/by/4.0/)
### Citation Information
```
@inproceedings{panayotov2015librispeech,
title={Librispeech: an ASR corpus based on public domain audio books},
author={Panayotov, Vassil and Chen, Guoguo and Povey, Daniel and Khudanpur, Sanjeev},
booktitle={Acoustics, Speech and Signal Processing (ICASSP), 2015 IEEE International Conference on},
pages={5206--5210},
year={2015},
organization={IEEE}
}
```
### Contributions
Thanks to [@patrickvonplaten](https://github.com/patrickvonplaten) for adding this dataset.
|
Sidd2899/MyspeechASR
|
[
"task_categories:automatic-speech-recognition",
"task_categories:audio-classification",
"task_ids:speaker-identification",
"annotations_creators:expert-generated",
"language_creators:crowdsourced",
"language_creators:expert-generated",
"multilinguality:monolingual",
"size_categories:100K<n<1M",
"source_datasets:original",
"language:en",
"license:cc-by-4.0",
"region:us"
] |
2022-08-24T05:00:58+00:00
|
{"annotations_creators": ["expert-generated"], "language_creators": ["crowdsourced", "expert-generated"], "language": ["en"], "license": ["cc-by-4.0"], "multilinguality": ["monolingual"], "size_categories": ["100K<n<1M"], "source_datasets": ["original"], "task_categories": ["automatic-speech-recognition", "audio-classification"], "task_ids": ["speaker-identification"], "paperswithcode_id": "librispeech-1", "pretty_name": "LibriSpeech"}
|
2022-09-01T11:36:24+00:00
|
c5d0fec0471ea24513d7f5f7de12d1d4daf8c70a
|
TeDriCS/tedrics-data
|
[
"region:us"
] |
2022-08-24T08:26:57+00:00
|
{}
|
2022-09-07T13:57:46+00:00
|
|
86a9aaf66354ef7537ceee351364693f948d8327
|
# Dataset Card for CodeQueries
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [How to use](#how-to-use)
- [Data Splits and Data Fields](#data-splits-and-data-fields)
- [Dataset Creation](#dataset-creation)
- [Additional Information](#additional-information)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
## Dataset Description
- **Homepage:** [Data](https://huggingface.co/datasets/thepurpleowl/codequeries)
- **Repository:** [Code](https://github.com/thepurpleowl/codequeries-benchmark)
- **Paper:**
### Dataset Summary
CodeQueries is a dataset to evaluate the ability of neural networks to answer semantic queries over code. Given a query and code, a model is expected to identify answer and supporting-fact spans in the code for the query. This is extractive question-answering over code, for questions with a large scope (entire files) and complexity including both single- and multi-hop reasoning.
### Supported Tasks and Leaderboards
Extractive question answering for code, semantic understanding of code.
### Languages
The dataset contains code context from `python` files.
## Dataset Structure
### How to Use
The dataset can be directly used with the huggingface datasets package. You can load and iterate through the dataset for the proposed five settings with the following two lines of code:
```python
import datasets
# in addition to `twostep`, the other supported settings are <ideal/file_ideal/prefix>.
ds = datasets.load_dataset("thepurpleowl/codequeries", "twostep", split=datasets.Split.TEST)
print(next(iter(ds)))
#OUTPUT:
{'query_name': 'Unused import',
'code_file_path': 'rcbops/glance-buildpackage/glance/tests/unit/test_db.py',
'context_block': {'content': '# vim: tabstop=4 shiftwidth=4 softtabstop=4\n\n# Copyright 2010-2011 OpenStack, LLC\ ...',
'metadata': 'root',
'header': "['module', '___EOS___']",
'index': 0},
'answer_spans': [{'span': 'from glance.common import context',
'start_line': 19,
'start_column': 0,
'end_line': 19,
'end_column': 33}
],
'supporting_fact_spans': [],
'example_type': 1,
'single_hop': False,
'subtokenized_input_sequence': ['[CLS]_', 'Un', 'used_', 'import_', '[SEP]_', 'module_', '\\u\\u\\uEOS\\u\\u\\u_', '#', ' ', 'vim', ':', ...],
'label_sequence': [4, 4, 4, 4, 4, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, ...],
'relevance_label': 1
}
```
### Data Splits and Data Fields
Detailed information on the data splits for proposed settings can be found in the paper.
In general, data splits in all the proposed settings have examples with the following fields -
```
- query_name (query name to uniquely identify the query)
- code_file_path (relative source file path w.r.t. ETH Py150 corpus)
- context_blocks (code blocks as context with metadata) [`prefix` setting doesn't have this field and `twostep` has `context_block`]
- answer_spans (answer spans with metadata)
- supporting_fact_spans (supporting-fact spans with metadata)
 - example_type (1 (positive) or 0 (negative) example type)
- single_hop (True or False - for query type)
- subtokenized_input_sequence (example subtokens) [`prefix` setting has the corresponding token ids]
- label_sequence (example subtoken labels)
- relevance_label (0 (not relevant) or 1 (relevant) - relevance label of a block) [only `twostep` setting has this field]
```
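For example, a minimal usage sketch of the `twostep`-specific `relevance_label` field (an illustration, not the benchmark's official pipeline) could first keep only the blocks marked relevant before running span prediction:
```python
import datasets

# Load the twostep test split and keep only the blocks marked relevant.
ds = datasets.load_dataset("thepurpleowl/codequeries", "twostep", split=datasets.Split.TEST)
relevant_blocks = ds.filter(lambda ex: ex["relevance_label"] == 1)
print(len(relevant_blocks))
```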
## Dataset Creation
The dataset is created using the [ETH Py150 Open dataset](https://github.com/google-research-datasets/eth_py150_open) as the source of code contexts. To get semantic queries and corresponding answer/supporting-fact spans in ETH Py150 Open corpus files, CodeQL was used.
## Additional Information
### Licensing Information
The source code repositories used for preparing CodeQueries are based on the [ETH Py150 Open dataset](https://github.com/google-research-datasets/eth_py150_open) and are redistributable under the respective licenses. A Huggingface dataset for ETH Py150 Open is available [here](https://huggingface.co/datasets/eth_py150_open). The labeling prepared and provided by us as part of CodeQueries is released under the Apache-2.0 license.
|
thepurpleowl/codequeries
|
[
"task_categories:question-answering",
"task_ids:extractive-qa",
"annotations_creators:expert-generated",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:100K<n<1M",
"source_datasets:original",
"language:code",
"license:apache-2.0",
"neural modeling of code",
"code question answering",
"code semantic understanding",
"region:us"
] |
2022-08-24T08:27:43+00:00
|
{"annotations_creators": ["expert-generated"], "language_creators": ["found"], "language": ["code"], "license": ["apache-2.0"], "multilinguality": ["monolingual"], "size_categories": ["100K<n<1M"], "source_datasets": ["original"], "task_categories": ["question-answering"], "task_ids": ["extractive-qa"], "pretty_name": "codequeries", "tags": ["neural modeling of code", "code question answering", "code semantic understanding"]}
|
2023-06-03T11:50:46+00:00
|
2bb2848a1beb37f03ba3b09eac4401c290df503e
|
## Bibtex
```
@article{greff2021kubric,
title = {Kubric: a scalable dataset generator},
author = {Klaus Greff and Francois Belletti and Lucas Beyer and Carl Doersch and
Yilun Du and Daniel Duckworth and David J Fleet and Dan Gnanapragasam and
Florian Golemo and Charles Herrmann and Thomas Kipf and Abhijit Kundu and
Dmitry Lagun and Issam Laradji and Hsueh-Ti (Derek) Liu and Henning Meyer and
Yishu Miao and Derek Nowrouzezahrai and Cengiz Oztireli and Etienne Pot and
Noha Radwan and Daniel Rebain and Sara Sabour and Mehdi S. M. Sajjadi and Matan Sela and
Vincent Sitzmann and Austin Stone and Deqing Sun and Suhani Vora and Ziyu Wang and
Tianhao Wu and Kwang Moo Yi and Fangcheng Zhong and Andrea Tagliasacchi},
booktitle = {Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR)},
year = {2022},
}
```
# Kubric
A data generation pipeline for creating semi-realistic synthetic multi-object
videos with rich annotations such as instance segmentation masks, depth maps,
and optical flow.
## Motivation and design
We need better data for training and evaluating machine learning systems, especially in the context of unsupervised multi-object video understanding.
Current systems succeed on [toy datasets](https://github.com/deepmind/multi_object_datasets), but fail on real-world data.
Progress could be greatly accelerated if we had the ability to create suitable datasets of varying complexity on demand.
Kubric is mainly built on top of pybullet (for physics simulation) and Blender (for rendering); however, the code is kept modular to potentially support different rendering backends.
## Getting started
For instructions, please refer to [https://kubric.readthedocs.io](https://kubric.readthedocs.io)
Assuming you have docker installed, to generate sample data simply execute:
```
git clone https://github.com/google-research/kubric.git
cd kubric
docker pull kubricdockerhub/kubruntu
docker run --rm --interactive \
--user $(id -u):$(id -g) \
--volume "$(pwd):/kubric" \
kubricdockerhub/kubruntu \
/usr/bin/python3 examples/helloworld.py
ls output
```
Kubric employs **Blender 2.93** (see [here](https://github.com/google-research/kubric/blob/01a08d274234f32f2adc4f7d5666b39490f953ad/docker/Blender.Dockerfile#L48)), so if you want to open the generated `*.blend` scene file for interactive inspection (i.e. without needing to render the scene), please make sure you have the matching Blender version installed.
## Requirements
- A pipeline for conveniently generating video data.
- Physics simulation for automatically generating physical interactions between multiple objects.
- Good control over the complexity of the generated data, so that we can evaluate individual aspects such as variability of objects and textures.
- Realism: Ideally, the ability to span the entire complexity range from CLEVR all the way to real-world video such as YouTube8M. This is clearly not feasible, but we would like to get as close as possible.
- Access to rich ground truth information about the objects in a scene for the purpose of evaluation (e.g. object segmentations and properties)
- Control the train/test split to evaluate compositionality and systematic generalization (for example on held-out combinations of features or objects)
## Challenges and datasets
Generally, we store datasets for the challenges in this [Google Cloud Bucket](https://console.cloud.google.com/storage/browser/kubric-public).
More specifically, these challenges are *dataset contributions* of the Kubric CVPR'22 paper:
* [MOVi: Multi-Object Video](challenges/movi)
* [Texture-Structure in NeRF](challenges/texture_structure_nerf)
* [Optical Flow](challenges/optical_flow)
* [Pre-training Visual Representations](challenges/pretraining_visual)
* [Robust NeRF](challenges/robust_nerf)
* [Multi-View Object Matting](challenges/multiview_matting)
* [Complex BRDFs](challenges/complex_brdf)
* [Single View Reconstruction](challenges/single_view_reconstruction)
* [Video Based Reconstruction](challenges/video_based_reconstruction)
* [Point Tracking](challenges/point_tracking)
Pointers to additional datasets/workers:
* [ToyBox (from Neural Semantic Fields)](https://nesf3d.github.io)
* [MultiShapeNet (from Scene Representation Transformer)](https://srt-paper.github.io)
* [SyntheticTrio (from Controllable Neural Radiance Fields)](https://github.com/kacperkan/conerf-kubric-dataset#readme)
## Disclaimer
This is not an official Google product.
|
simulate-explorer/Example
|
[
"license:mit",
"region:us"
] |
2022-08-24T08:45:17+00:00
|
{"license": "mit"}
|
2022-08-29T10:34:36+00:00
|
6420a7628eb1cf05f5e24dd36501e47edc999a0a
|
albertvillanova/tmp-10
|
[
"language:ase",
"language:en",
"region:us"
] |
2022-08-24T10:05:32+00:00
|
{"language": ["ase", "en"]}
|
2022-08-24T14:41:27+00:00
|
|
738036ce5d904fdf2509ce44cd1d5d63b25582fa
|
This dataset is converted from 12 high-quality datasets (duconv, durecdial, ecm, naturalconv, persona, tencent, kdconv, crosswoz, risawoz, diamante, restoration, and LCCC-base) and is used for the continued-pretraining task of the Mengzi version of T5-PEGASUS.
|
Jaren/T5-dialogue-pretrain-data
|
[
"region:us"
] |
2022-08-24T10:39:09+00:00
|
{}
|
2022-08-30T14:01:24+00:00
|
73715a71e2f1d5eb20949bcadc921e7e32d97072
|
kdwm/weather-sentences
|
[
"license:mit",
"region:us"
] |
2022-08-24T11:10:55+00:00
|
{"license": "mit"}
|
2022-08-24T11:10:55+00:00
|
|
969692674a1c5bbb1469682eda42d81fe5c8d64d
|
dyhsup/CPR
|
[
"license:unknown",
"region:us"
] |
2022-08-24T12:05:19+00:00
|
{"license": "unknown"}
|
2022-08-24T12:05:19+00:00
|
|
bd9f47f758affab100c81931d6afba84bab9ae06
|
Warning: This dataset does not follow the Hugging Face standard; download and process the files according to your own needs.
It contains only intra-sentence relationships. `Gold` is the positive set from the original corpus; `Positive` is the set of all intra-sentence relationships.
|
dyhsup/ChemProt_CPR
|
[
"license:other",
"region:us"
] |
2022-08-24T12:05:55+00:00
|
{"license": "other"}
|
2022-08-31T11:09:31+00:00
|
6e9893e2a78b8fa852f3268583592f8c4e37362a
|
BigBang/rosetta_new
|
[
"license:cc-by-sa-4.0",
"region:us"
] |
2022-08-24T12:38:27+00:00
|
{"license": "cc-by-sa-4.0"}
|
2022-08-24T15:24:00+00:00
|
|
6f7dc71b8fd4e8aed7b04752b563c5edf84694c7
|
# Dataset Card for the EUR-Lex dataset
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** http://nlp.cs.aueb.gr/software_and_datasets/EURLEX57K/
- **Repository:** http://nlp.cs.aueb.gr/software_and_datasets/EURLEX57K/
- **Paper:** https://www.aclweb.org/anthology/P19-1636/
- **Leaderboard:** N/A
### Dataset Summary
EURLEX57K can be viewed as an improved version of the dataset released by Mencía and Fürnkranz (2007), which has been widely used in Large-scale Multi-label Text Classification (LMTC) research, but is less than half the size of EURLEX57K (19.6k documents, 4k EUROVOC labels) and more than ten years old.
EURLEX57K contains 57k legislative documents in English from EUR-Lex (https://eur-lex.europa.eu) with an average length of 727 words. Each document contains three major zones:
- the header, which includes the title and name of the legal body enforcing the legal act;
- the recitals, which are legal background references; and
- the main body, usually organized in articles.
**Labeling / Annotation**
All the documents of the dataset have been annotated by the Publications Office of EU (https://publications.europa.eu/en) with multiple concepts from EUROVOC (http://eurovoc.europa.eu/).
While EUROVOC includes approx. 7k concepts (labels), only 4,271 (59.31%) are present in EURLEX57K, of which only 2,049 (47.97%) have been assigned to more than 10 documents. The 4,271 labels are also divided into frequent (746 labels), few-shot (3,362), and zero-shot (163), depending on whether they were assigned to more than 50, fewer than 50 but at least one, or no training documents, respectively.
### Supported Tasks and Leaderboards
The dataset supports:
**Multi-label Text Classification:** Given the text of a document, a model predicts the relevant EUROVOC concepts.
**Few-shot and Zero-shot learning:** As already noted, the labels can be divided into three groups: frequent (746 labels), few-shot (3,362), and zero-shot (163), depending on whether they were assigned to more than 50, fewer than 50 but at least one, or no training documents, respectively.
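As an illustration, a minimal sketch of preparing multi-label targets with scikit-learn (the repository id is taken from this page; the binarization step is an assumption for illustration, not part of the original pipeline):
```python
from datasets import load_dataset
from sklearn.preprocessing import MultiLabelBinarizer

ds = load_dataset("jonathanli/eurlex")
# eurovoc_concepts is a list of label strings per document (see Data Fields).
mlb = MultiLabelBinarizer()
y_train = mlb.fit_transform(ds["train"]["eurovoc_concepts"])
print(y_train.shape)  # (45000, number of distinct labels in train)
```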
### Languages
All documents are written in English.
## Dataset Structure
### Data Instances
```json
{
"celex_id": "31979D0509",
"title": "79/509/EEC: Council Decision of 24 May 1979 on financial aid from the Community for the eradication of African swine fever in Spain",
"text": "COUNCIL DECISION of 24 May 1979 on financial aid from the Community for the eradication of African swine fever in Spain (79/509/EEC)\nTHE COUNCIL OF THE EUROPEAN COMMUNITIES\nHaving regard to the Treaty establishing the European Economic Community, and in particular Article 43 thereof,\nHaving regard to the proposal from the Commission (1),\nHaving regard to the opinion of the European Parliament (2),\nWhereas the Community should take all appropriate measures to protect itself against the appearance of African swine fever on its territory;\nWhereas to this end the Community has undertaken, and continues to undertake, action designed to contain outbreaks of this type of disease far from its frontiers by helping countries affected to reinforce their preventive measures ; whereas for this purpose Community subsidies have already been granted to Spain;\nWhereas these measures have unquestionably made an effective contribution to the protection of Community livestock, especially through the creation and maintenance of a buffer zone north of the river Ebro;\nWhereas, however, in the opinion of the Spanish authorities themselves, the measures so far implemented must be reinforced if the fundamental objective of eradicating the disease from the entire country is to be achieved;\nWhereas the Spanish authorities have asked the Community to contribute to the expenses necessary for the efficient implementation of a total eradication programme;\nWhereas a favourable response should be given to this request by granting aid to Spain, having regard to the undertaking given by that country to protect the Community against African swine fever and to eliminate completely this disease by the end of a five-year eradication plan;\nWhereas this eradication plan must include certain measures which guarantee the effectiveness of the action taken, and it must be possible to adapt these measures to developments in the situation by means of a procedure establishing close cooperation between the Member States and the Commission;\nWhereas it is necessary to keep the Member States regularly informed as to the progress of the action undertaken,",
"eurovoc_concepts": ["192", "2356", "2560", "862", "863"]
}
```
### Data Fields
The following data fields are provided for documents (`train`, `dev`, `test`):
`celex_id`: (**str**) The official ID of the document. The CELEX number is the unique identifier for all publications in both Eur-Lex and CELLAR.\
`title`: (**str**) The title of the document.\
`text`: (**str**) The full content of each document, which is represented by its `header`, `recitals` and `main_body`.\
`eurovoc_concepts`: (**List[str]**) The relevant EUROVOC concepts (labels).
If you want to use the descriptors of EUROVOC concepts, similar to Chalkidis et al. (2020), please load: https://archive.org/download/EURLEX57K/eurovoc_concepts.jsonl
```python
import json
with open('./eurovoc_concepts.jsonl') as jsonl_file:
    eurovoc_concepts = [json.loads(line) for line in jsonl_file]  # one concept dict per line; dicts are unhashable, so collect them in a list, not a set
```
### Data Splits
| Split | No of Documents | Avg. words | Avg. labels |
| ------------------- | ------------------------------------ | --- | --- |
| Train | 45,000 | 729 | 5 |
| Development | 6,000 | 714 | 5 |
| Test | 6,000 | 725 | 5 |
## Dataset Creation
### Curation Rationale
The dataset was curated by Chalkidis et al. (2019).\
The documents have been annotated by the Publications Office of EU (https://publications.europa.eu/en).
### Source Data
#### Initial Data Collection and Normalization
The original data are available at EUR-Lex portal (https://eur-lex.europa.eu) in an unprocessed format.
The documents were downloaded from EUR-Lex portal in HTML format.
The relevant metadata and EUROVOC concepts were downloaded from the SPARQL endpoint of the Publications Office of EU (http://publications.europa.eu/webapi/rdf/sparql).
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
* The original documents are available at EUR-Lex portal (https://eur-lex.europa.eu) in an unprocessed HTML format. The HTML code was stripped and the documents were split into sections.
* The documents have been annotated by the Publications Office of EU (https://publications.europa.eu/en).
#### Who are the annotators?
Publications Office of EU (https://publications.europa.eu/en)
### Personal and Sensitive Information
The dataset does not include personal or sensitive information.
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
Chalkidis et al. (2019)
### Licensing Information
© European Union, 1998-2021
The Commission’s document reuse policy is based on Decision 2011/833/EU. Unless otherwise specified, you can re-use the legal documents published in EUR-Lex for commercial or non-commercial purposes.
The copyright for the editorial content of this website, the summaries of EU legislation and the consolidated texts, which is owned by the EU, is licensed under the Creative Commons Attribution 4.0 International licence. This means that you can re-use the content provided you acknowledge the source and indicate any changes you have made.
Source: https://eur-lex.europa.eu/content/legal-notice/legal-notice.html \
Read more: https://eur-lex.europa.eu/content/help/faq/reuse-contents-eurlex.html
### Citation Information
*Ilias Chalkidis, Manos Fergadiotis, Prodromos Malakasiotis and Ion Androutsopoulos.*
*Large-Scale Multi-Label Text Classification on EU Legislation.*
*Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics (ACL 2019). Florence, Italy. 2019*
```
@inproceedings{chalkidis-etal-2019-large,
title = "Large-Scale Multi-Label Text Classification on {EU} Legislation",
author = "Chalkidis, Ilias and Fergadiotis, Manos and Malakasiotis, Prodromos and Androutsopoulos, Ion",
booktitle = "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics",
year = "2019",
address = "Florence, Italy",
publisher = "Association for Computational Linguistics",
url = "https://www.aclweb.org/anthology/P19-1636",
doi = "10.18653/v1/P19-1636",
pages = "6314--6322"
}
```
### Contributions
Thanks to [@iliaschalkidis](https://github.com/iliaschalkidis) for adding this dataset.
|
jonathanli/eurlex
|
[
"task_categories:text-classification",
"task_ids:multi-label-classification",
"annotations_creators:found",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"source_datasets:original",
"language:en",
"license:cc-by-sa-4.0",
"legal-topic-classification",
"region:us"
] |
2022-08-24T14:28:36+00:00
|
{"annotations_creators": ["found"], "language_creators": ["found"], "language": ["en"], "license": ["cc-by-sa-4.0"], "multilinguality": ["monolingual"], "size_categories": ["10K<n<100K"], "source_datasets": ["original"], "task_categories": ["text-classification"], "task_ids": ["multi-label-classification"], "paperswithcode_id": "eurlex57k", "pretty_name": "the EUR-Lex dataset", "tags": ["legal-topic-classification"]}
|
2022-10-24T14:26:49+00:00
|
2cf7ca5314557b4a3203da2f987ff122e87aebbb
|
mbarnig/Tatoeba-en-lb
|
[
"license:cc-by-nc-sa-4.0",
"region:us"
] |
2022-08-24T14:37:35+00:00
|
{"license": "cc-by-nc-sa-4.0"}
|
2022-08-24T14:38:33+00:00
|
|
816621ee6b2c082e5e1062a5bad126feb81b9449
|
HF version of Edinburgh-NLP's [Code docstrings corpus](https://github.com/EdinburghNLP/code-docstring-corpus)
|
teven/code_docstring_corpus
|
[
"region:us"
] |
2022-08-24T15:04:17+00:00
|
{}
|
2022-08-24T19:01:58+00:00
|
1d750cb1af1c154e447d6baa330110933105a600
|
HF-datasets version of Deepmind's [code_contests](https://github.com/deepmind/code_contests) dataset, notably used for AlphaCode. One row per solution; no test data or incorrect solutions are included (only name/source/description/solution/language/difficulty).
|
teven/code_contests
|
[
"region:us"
] |
2022-08-24T16:28:47+00:00
|
{}
|
2022-08-24T19:01:04+00:00
|
eef7d6d11d0e1bfe8cfab8e3030cb1ad35b45b49
|
gondolas/test
|
[
"license:unknown",
"region:us"
] |
2022-08-24T17:00:02+00:00
|
{"license": "unknown"}
|
2022-08-24T17:00:02+00:00
|
|
059b500407cd10d3d0254d9c143d353f89ed7271
|
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Question Answering
* Model: FardinSaboori/bert-finetuned-squad
* Dataset: squad
* Config: plain_text
* Split: validation
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@ahmetgunduz](https://huggingface.co/ahmetgunduz) for evaluating this model.
|
autoevaluate/autoeval-eval-project-squad-54745b0c-1311450106
|
[
"autotrain",
"evaluation",
"region:us"
] |
2022-08-24T19:33:58+00:00
|
{"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["squad"], "eval_info": {"task": "extractive_question_answering", "model": "FardinSaboori/bert-finetuned-squad", "metrics": [], "dataset_name": "squad", "dataset_config": "plain_text", "dataset_split": "validation", "col_mapping": {"context": "context", "question": "question", "answers-text": "answers.text", "answers-answer_start": "answers.answer_start"}}}
|
2022-08-24T19:36:33+00:00
|
00f6010354dc41b964436402e91548d954663e01
|
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Question Answering
* Model: 21iridescent/distilbert-base-uncased-finetuned-squad
* Dataset: squad
* Config: plain_text
* Split: validation
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@ahmetgunduz](https://huggingface.co/ahmetgunduz) for evaluating this model.
|
autoevaluate/autoeval-eval-project-squad-54745b0c-1311450107
|
[
"autotrain",
"evaluation",
"region:us"
] |
2022-08-24T19:34:48+00:00
|
{"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["squad"], "eval_info": {"task": "extractive_question_answering", "model": "21iridescent/distilbert-base-uncased-finetuned-squad", "metrics": [], "dataset_name": "squad", "dataset_config": "plain_text", "dataset_split": "validation", "col_mapping": {"context": "context", "question": "question", "answers-text": "answers.text", "answers-answer_start": "answers.answer_start"}}}
|
2022-08-24T19:37:00+00:00
|
d2e7a920820db43013d54b67ef1fc315cb5f55cb
|
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Question Answering
* Model: Aiyshwariya/bert-finetuned-squad
* Dataset: squad
* Config: plain_text
* Split: validation
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@ahmetgunduz](https://huggingface.co/ahmetgunduz) for evaluating this model.
|
autoevaluate/autoeval-eval-project-squad-54745b0c-1311450108
|
[
"autotrain",
"evaluation",
"region:us"
] |
2022-08-24T19:35:01+00:00
|
{"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["squad"], "eval_info": {"task": "extractive_question_answering", "model": "Aiyshwariya/bert-finetuned-squad", "metrics": [], "dataset_name": "squad", "dataset_config": "plain_text", "dataset_split": "validation", "col_mapping": {"context": "context", "question": "question", "answers-text": "answers.text", "answers-answer_start": "answers.answer_start"}}}
|
2022-08-24T19:37:49+00:00
|
df82aa55f008dabd0c2c2d4d58bf8ebb38ce1928
|
britneymuller/cnbc_newsfeed
|
[
"license:other",
"region:us"
] |
2022-08-24T22:04:10+00:00
|
{"license": "other"}
|
2022-08-24T22:04:39+00:00
|
|
4611cf4a48fe1e181ffc5e64a6b25c8a1a6b4c83
|
ZhangYuanhan/OmniBenchmark
|
[
"license:cc-by-nc-nd-4.0",
"region:us"
] |
2022-08-25T01:10:18+00:00
|
{"license": "cc-by-nc-nd-4.0"}
|
2022-08-25T01:10:18+00:00
|
|
f7396bc0d39f208076d0d8af13b4644dc3bdd7f8
|
# Digital Peter
The Peter dataset can be used for reading texts from manuscripts written by Peter the Great. The dataset annotation contains end-to-end markup for training detection and OCR models, as well as end-to-end models for reading text from pages.
Paper is available at http://arxiv.org/abs/2103.09354
## Description
Digital Peter is an educational task with a historical slant created on the basis of several AI technologies (Computer Vision, NLP, and knowledge graphs). The task was prepared jointly with the Saint Petersburg Institute of History (N.P.Lihachov mansion) of Russian Academy of Sciences, Federal Archival Agency of Russia and Russian State Archive of Ancient Acts.
A detailed description of the task (with full background on the problem) can be found in [detailed_description_of_the_task_en.pdf](https://github.com/sberbank-ai/digital_peter_aij2020/blob/master/desc/detailed_description_of_the_task_en.pdf)
The dataset consists of 662 full page images and 9696 annotated text files. There are 265788 symbols and approximately 50998 words.
## Annotation format
The annotation is in COCO format. The `annotation.json` should have the following dictionaries (a minimal parsing sketch follows the list):
- `annotation["categories"]` - a list of dicts with category info (category names and indexes).
- `annotation["images"]` - a list of dictionaries with a description of the images; each dictionary must contain the fields:
  - `file_name` - the name of the image file.
  - `id` - the image id.
- `annotation["annotations"]` - a list of dictionaries with markup information. Each dictionary stores a description for one polygon from the dataset and must contain the following fields:
  - `image_id` - the index of the image on which the polygon is located.
  - `category_id` - the polygon’s category index.
  - `attributes` - a dict with some additional annotation information; the `translation` subdict holds the text translation for the line.
  - `segmentation` - the coordinates of the polygon: a flat list of numbers forming x, y coordinate pairs.
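A minimal parsing sketch (field names are taken from the list above; the exact key layout, e.g. the category `name` key, is an assumption based on the standard COCO format):
```python
import json

with open("annotation.json") as f:
    annotation = json.load(f)

categories = {c["id"]: c.get("name") for c in annotation["categories"]}
images = {img["id"]: img["file_name"] for img in annotation["images"]}

# Print the first few polygons with their image, category, and transcribed text.
for ann in annotation["annotations"][:5]:
    n_points = len(ann["segmentation"]) // 2  # flat list of x, y pairs
    print(images[ann["image_id"]],
          categories[ann["category_id"]],
          ann["attributes"].get("translation"),
          f"{n_points} points")
```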
## Competition
We held a competition based on Digital Peter dataset.
Here is the GitHub [link](https://github.com/sberbank-ai/digital_peter_aij2020). Here is the competition [page](https://ods.ai/tracks/aij2020) (registration required).
|
ai-forever/Peter
|
[
"task_categories:image-segmentation",
"task_categories:object-detection",
"source_datasets:original",
"language:ru",
"license:mit",
"optical-character-recognition",
"text-detection",
"ocr",
"arxiv:2103.09354",
"region:us"
] |
2022-08-25T09:03:42+00:00
|
{"language": ["ru"], "license": ["mit"], "source_datasets": ["original"], "task_categories": ["image-segmentation", "object-detection"], "task_ids": [], "tags": ["optical-character-recognition", "text-detection", "ocr"]}
|
2022-10-25T10:09:06+00:00
|
e5af44c540cda2e9007ad35b7f8e994225da7786
|
gishnum/worldpopulation_neo4j_graph_dump
|
[
"license:gpl",
"region:us"
] |
2022-08-25T10:22:14+00:00
|
{"license": "gpl"}
|
2022-08-25T10:22:14+00:00
|
|
872656a156f32e4058307e50e234a44a727a9503
|
# Dataset Card for Wiki Toxic
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:**
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
The Wiki Toxic dataset is a modified, cleaned version of the dataset used in the [Kaggle Toxic Comment Classification challenge](https://www.kaggle.com/competitions/jigsaw-toxic-comment-classification-challenge/overview) from 2017/18. The dataset contains comments collected from Wikipedia forums and classifies them into two categories, `toxic` and `non-toxic`.
The Kaggle dataset was cleaned using the included `clean.py` file.
### Supported Tasks and Leaderboards
- Text Classification: the dataset can be used for training a model to recognise toxicity in sentences and classify them accordingly. A minimal loading sketch is shown below.
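```python
from datasets import load_dataset

ds = load_dataset("OxAISH-AL-LLM/wiki_toxic")
print(ds)  # train / validation / test splits
print(ds["train"][0])  # {'id': ..., 'comment_text': ..., 'label': 0 or 1}
```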
### Languages
The sole language used in the dataset is English.
## Dataset Structure
### Data Instances
For each data point, there is an id, the comment_text itself, and a label (0 for non-toxic, 1 for toxic).
```
{'id': 'a123a58f610cffbc',
'comment_text': '"This article SUCKS. It may be poorly written, poorly formatted, or full of pointless crap that no one cares about, and probably all of the above. If it can be rewritten into something less horrible, please, for the love of God, do so, before the vacuum caused by its utter lack of quality drags the rest of Wikipedia down into a bottomless pit of mediocrity."',
'label': 1}
```
### Data Fields
- `id`: A unique identifier string for each comment
- `comment_text`: A string containing the text of the comment
- `label`: An integer, either 0 if the comment is non-toxic, or 1 if the comment is toxic
### Data Splits
The Wiki Toxic dataset has three splits: *train*, *validation*, and *test*. The statistics for each split are below:
| Dataset Split | Number of data points in split |
| ----------- | ----------- |
| Train | 127,656 |
| Validation | 31,915 |
| Test | 63,978 |
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
Thanks to [@github-username](https://github.com/<github-username>) for adding this dataset.
|
OxAISH-AL-LLM/wiki_toxic
|
[
"task_categories:text-classification",
"task_ids:hate-speech-detection",
"annotations_creators:crowdsourced",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:100K<n<1M",
"source_datasets:extended|other",
"language:en",
"license:cc0-1.0",
"wikipedia",
"toxicity",
"toxic comments",
"region:us"
] |
2022-08-25T11:59:12+00:00
|
{"annotations_creators": ["crowdsourced"], "language_creators": ["found"], "language": ["en"], "license": ["cc0-1.0"], "multilinguality": ["monolingual"], "size_categories": ["100K<n<1M"], "source_datasets": ["extended|other"], "task_categories": ["text-classification"], "task_ids": ["hate-speech-detection"], "pretty_name": "Toxic Wikipedia Comments", "tags": ["wikipedia", "toxicity", "toxic comments"]}
|
2022-09-19T14:53:19+00:00
|
41688aa331d9ff438cd9a940495de12d6dd0bc8e
|
wushan/vehicle_qa
|
[
"license:apache-2.0",
"region:us"
] |
2022-08-25T12:12:17+00:00
|
{"license": "apache-2.0"}
|
2022-08-25T12:14:33+00:00
|
|
2dcf46e0fe13816745e79fab84347e5d71fe74cc
|
jokerak/camvid
|
[
"license:apache-2.0",
"region:us"
] |
2022-08-25T12:20:22+00:00
|
{"license": "apache-2.0"}
|
2022-08-25T12:34:19+00:00
|
|
54ee2d8c64d3d80a5e10ef6952a4466551834fc1
|
# Dataset Card for COYO-700M
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [COYO homepage](https://kakaobrain.com/contents/?contentId=7eca73e3-3089-43cb-b701-332e8a1743fd)
- **Repository:** [COYO repository](https://github.com/kakaobrain/coyo-dataset)
- **Paper:**
- **Leaderboard:**
- **Point of Contact:** [COYO email](coyo@kakaobrain.com)
### Dataset Summary
**COYO-700M** is a large-scale dataset that contains **747M image-text pairs**, along with many other **meta-attributes** that increase its usability for training various models. Our dataset follows a similar strategy to previous vision-and-language datasets, collecting many informative pairs of alt-text and its associated image from HTML documents. We expect COYO to be used to train popular large-scale foundation models
complementary to other similar datasets. For more details on the data acquisition process, please refer to the technical paper to be released later.
### Supported Tasks and Leaderboards
We empirically validated the quality of COYO dataset by re-implementing popular models such as [ALIGN](https://arxiv.org/abs/2102.05918), [unCLIP](https://arxiv.org/abs/2204.06125), and [ViT](https://arxiv.org/abs/2010.11929).
We trained these models on COYO-700M or its subsets from scratch, achieving competitive performance to the reported numbers or generated samples in the original papers.
Our pre-trained models and training codes will be released soon along with the technical paper.
### Languages
The texts in the COYO-700M dataset are in English.
## Dataset Structure
### Data Instances
Each instance in COYO-700M represents a single image-text pair with its meta-attributes:
```
{
'id': 841814333321,
'url': 'https://blog.dogsof.com/wp-content/uploads/2021/03/Image-from-iOS-5-e1614711641382.jpg',
'text': 'A Pomsky dog sitting and smiling in field of orange flowers',
'width': 1000,
'height': 988,
'image_phash': 'c9b6a7d8469c1959',
'text_length': 59,
'word_count': 11,
'num_tokens_bert': 13,
'num_tokens_gpt': 12,
'num_faces': 0,
'clip_similarity_vitb32': 0.4296875,
'clip_similarity_vitl14': 0.35205078125,
'nsfw_score_opennsfw2': 0.00031447410583496094,
'nsfw_score_gantman': 0.03298913687467575,
'watermark_score': 0.1014641746878624,
'aesthetic_score_laion_v2': 5.435476303100586
}
```
### Data Fields
| name | type | description |
|--------------------------|---------|--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| id | long | Unique 64-bit integer ID generated by [monotonically_increasing_id()](https://spark.apache.org/docs/3.1.3/api/python/reference/api/pyspark.sql.functions.monotonically_increasing_id.html) |
| url | string | The image URL extracted from the `src` attribute of the `<img>` tag |
| text | string | The text extracted from the `alt` attribute of the `<img>` tag |
| width | integer | The width of the image |
| height | integer | The height of the image |
| image_phash | string | The [perceptual hash(pHash)](http://www.phash.org/) of the image |
| text_length | integer | The length of the text |
| word_count | integer | The number of words separated by spaces. |
| num_tokens_bert | integer | The number of tokens using [BertTokenizer](https://huggingface.co/docs/transformers/model_doc/bert#transformers.BertTokenizer) |
| num_tokens_gpt | integer | The number of tokens using [GPT2TokenizerFast](https://huggingface.co/docs/transformers/model_doc/gpt2#transformers.GPT2TokenizerFast) |
| num_faces | integer | The number of faces in the image detected by [SCRFD](https://insightface.ai/scrfd) |
| clip_similarity_vitb32   | float   | The cosine similarity between text and image (ViT-B/32) embeddings by [OpenAI CLIP](https://github.com/openai/CLIP)                                                                          |
| clip_similarity_vitl14   | float   | The cosine similarity between text and image (ViT-L/14) embeddings by [OpenAI CLIP](https://github.com/openai/CLIP)                                                                          |
| nsfw_score_opennsfw2 | float | The NSFW score of the image by [OpenNSFW2](https://github.com/bhky/opennsfw2) |
| nsfw_score_gantman | float | The NSFW score of the image by [GantMan/NSFW](https://github.com/GantMan/nsfw_model) |
| watermark_score | float | The watermark probability of the image by our internal model |
| aesthetic_score_laion_v2 | float | The aesthetic score of the image by [LAION-Aesthetics-Predictor-V2](https://github.com/christophschuhmann/improved-aesthetic-predictor) |
### Data Splits
Data was not split, since the evaluation was expected to be performed on more widely used downstream task(s).
## Dataset Creation
### Curation Rationale
Similar to most vision-and-language datasets, our primary goal in the data creation process is to collect many pairs of alt-text and image sources in HTML documents crawled from the web. Therefore, we attempted to eliminate uninformative images or texts with minimal cost and to improve our dataset's usability by adding various meta-attributes. Users can use these meta-attributes to sample a subset from COYO-700M and use it to train the desired model. For instance, the *num_faces* attribute could be used to make a subset like *COYO-Faces* and develop a privacy-preserving generative model.
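For example, a minimal subsetting sketch using these meta-attributes (the thresholds are illustrative assumptions, not official recommendations):
```python
from datasets import load_dataset

# Stream the dataset to avoid downloading all ~747M rows up front.
ds = load_dataset("kakaobrain/coyo-700m", split="train", streaming=True)

# Example subset: pairs with no detected faces and a reasonably
# well-aligned caption.
subset = ds.filter(
    lambda ex: ex["num_faces"] == 0 and ex["clip_similarity_vitb32"] > 0.3
)
for example in subset.take(3):
    print(example["url"], example["text"])
```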
### Source Data
#### Initial Data Collection and Normalization
We collected about 10 billion pairs of alt-text and image sources in HTML documents in [CommonCrawl](https://commoncrawl.org/) from Oct. 2020 to Aug. 2021, and eliminated uninformative pairs through image- and/or text-level filtering processes with minimal cost.
**Image Level**
* Included all image formats that [Pillow library](https://pillow.readthedocs.io/en/stable/handbook/image-file-formats.html) can decode. (JPEG, WEBP, PNG, BMP, ...)
* Removed images smaller than 5KB.
* Removed images with an aspect ratio greater than 3.0.
* Removed images with min(width, height) < 200.
* Removed images with a score of [OpenNSFW2](https://github.com/bhky/opennsfw2) or [GantMan/NSFW](https://github.com/GantMan/nsfw_model) higher than 0.5.
* Removed all duplicate images based on the image [pHash](http://www.phash.org/) value from external public datasets.
* ImageNet-1K/21K, Flickr-30K, MS-COCO, CC-3M, CC-12M
**Text Level**
* Collected only English text using [cld3](https://github.com/google/cld3).
* Replaced consecutive whitespace characters with a single whitespace and removed the whitespace before and after the sentence.
(e.g. `"\n \n Load image into Gallery viewer, valentine&#39;s day roses\n \n" → "Load image into Gallery viewer, valentine&#39;s day roses"`)
* Removed texts with a length of 5 or less.
* Removed texts that do not have a noun form.
* Removed texts with fewer than 3 words or more than 256 words, and texts over 1,000 characters in length.
* Removed texts appearing more than 10 times.
(e.g. `“thumbnail for”, “image for”, “picture of”`)
* Removed texts containing NSFW words collected from [profanity_filter](https://github.com/rominf/profanity-filter/blob/master/profanity_filter/data/en_profane_words.txt), [better_profanity](https://github.com/snguyenthanh/better_profanity/blob/master/better_profanity/profanity_wordlist.txt), and [google_twunter_lol](https://gist.github.com/ryanlewis/a37739d710ccdb4b406d).
**Image-Text Level**
* Removed duplicated samples based on (image_phash, text).
(Different text may exist for the same image URL.)
#### Who are the source language producers?
[Common Crawl](https://commoncrawl.org/) is the data source for COYO-700M.
### Annotations
#### Annotation process
The dataset was built in a fully automated process that did not require human annotation.
#### Who are the annotators?
No human annotation
### Personal and Sensitive Information
#### Disclaimer & Content Warning
The COYO dataset is recommended to be used for research purposes.
Kakao Brain tried to construct a "Safe" dataset when building the COYO dataset. (See [Data Filtering](#source-data) Section) Kakao Brain is constantly making efforts to create more "Safe" datasets.
However, despite these efforts, this large-scale dataset could not be hand-screened by humans due to its very large size (over 700M pairs).
Keep in mind that the unscreened nature of the dataset means that the collected images can lead to strongly discomforting and disturbing content for humans.
The COYO dataset may contain some inappropriate data, and any problems resulting from such data are the full responsibility of the user who used it.
Therefore, it is strongly recommended that this dataset be used only for research. Keep this in mind when using the dataset; Kakao Brain does not recommend using it as-is, without special processing to remove inappropriate data, to create commercial products.
## Considerations for Using the Data
### Social Impact of Dataset
It will be described in a paper to be released soon.
### Discussion of Biases
It will be described in a paper to be released soon.
### Other Known Limitations
It will be described in a paper to be released soon.
## Additional Information
### Dataset Curators
The COYO dataset was released as open source in the hope that it will be helpful to many research institutes and startups for research purposes. We welcome inquiries from anyone who wishes to cooperate with us.
[coyo@kakaobrain.com](mailto:coyo@kakaobrain.com)
### Licensing Information
#### License
The COYO dataset of Kakao Brain is licensed under [CC-BY-4.0 License](https://creativecommons.org/licenses/by/4.0/).
The full license can be found in the [LICENSE.cc-by-4.0 file](./coyo-700m/blob/main/LICENSE.cc-by-4.0).
The dataset includes “Image URL” and “Text” collected from various sites by analyzing Common Crawl data, an open data web crawling project.
The collected data (images and text) is subject to the license to which each content belongs.
#### Obligation to use
While Open Source may be free to use, that does not mean it is free of obligation.
To determine whether your intended use of the COYO dataset is suitable for the CC-BY-4.0 license, please consider the license guide.
If you violate the license, you may be subject to legal action such as the prohibition of use or claim for damages depending on the use.
### Citation Information
If you apply this dataset to any project and research, please cite our code:
```
@misc{kakaobrain2022coyo-700m,
title = {COYO-700M: Image-Text Pair Dataset},
  author = {Minwoo Byeon and Beomhee Park and Haecheon Kim and Sungjun Lee and Woonhyuk Baek and Saehoon Kim},
year = {2022},
howpublished = {\url{https://github.com/kakaobrain/coyo-dataset}},
}
```
### Contributions
- Minwoo Byeon ([@mwbyeon](https://github.com/mwbyeon))
- Beomhee Park ([@beomheepark](https://github.com/beomheepark))
- Haecheon Kim ([@HaecheonKim](https://github.com/HaecheonKim))
- Sungjun Lee ([@justhungryman](https://github.com/justHungryMan))
- Woonhyuk Baek ([@wbaek](https://github.com/wbaek))
- Saehoon Kim ([@saehoonkim](https://github.com/saehoonkim))
- and Kakao Brain Large-Scale AI Studio
|
kakaobrain/coyo-700m
|
[
"task_categories:text-to-image",
"task_categories:image-to-text",
"task_categories:zero-shot-classification",
"task_ids:image-captioning",
"annotations_creators:no-annotation",
"language_creators:other",
"multilinguality:monolingual",
"size_categories:100M<n<1B",
"source_datasets:original",
"language:en",
"license:cc-by-4.0",
"image-text pairs",
"arxiv:2102.05918",
"arxiv:2204.06125",
"arxiv:2010.11929",
"region:us"
] |
2022-08-25T14:54:43+00:00
|
{"annotations_creators": ["no-annotation"], "language_creators": ["other"], "language": ["en"], "license": ["cc-by-4.0"], "multilinguality": ["monolingual"], "size_categories": ["100M<n<1B"], "source_datasets": ["original"], "task_categories": ["text-to-image", "image-to-text", "zero-shot-classification"], "task_ids": ["image-captioning"], "pretty_name": "COYO-700M", "tags": ["image-text pairs"]}
|
2022-08-30T18:07:52+00:00
|
60eceef746f537c1efe46ffd2d5485d631a9c9d8
|
Over 20,000 256x256 mel spectrograms of 5-second samples of music from my Spotify liked playlist. The code to convert from audio to spectrogram and vice versa can be found in https://github.com/teticio/audio-diffusion along with scripts to train and run inference using De-noising Diffusion Probabilistic Models.
```
x_res = 256
y_res = 256
sample_rate = 22050
n_fft = 2048
hop_length = 512
```
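A sketch of computing a matching mel spectrogram with librosa is shown below; this is an assumption for illustration, and the exact conversion and normalization code lives in the repository linked above:
```python
import librosa
import numpy as np

y, sr = librosa.load("sample.wav", sr=22050)  # hypothetical input file
S = librosa.feature.melspectrogram(
    y=y[: sr * 5], sr=sr, n_fft=2048, hop_length=512, n_mels=256
)
S_db = librosa.power_to_db(S, ref=np.max)
# Map the dB range to 8-bit grayscale; the normalization actually used to
# build this dataset may differ (see the linked repository).
img = (255 * (S_db - S_db.min()) / (S_db.max() - S_db.min())).astype(np.uint8)
print(img.shape)  # (256, n_frames); frames are fit to 256 columns in the dataset
```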
|
teticio/audio-diffusion-256
|
[
"task_categories:image-to-image",
"size_categories:10K<n<100K",
"audio",
"spectrograms",
"region:us"
] |
2022-08-25T16:32:42+00:00
|
{"annotations_creators": [], "language_creators": [], "language": [], "license": [], "multilinguality": [], "size_categories": ["10K<n<100K"], "source_datasets": [], "task_categories": ["image-to-image"], "task_ids": [], "pretty_name": "Mel spectrograms of music", "tags": ["audio", "spectrograms"]}
|
2022-11-09T10:49:48+00:00
|
215f00fff5149f546b57ebdfd25104e0387f50b4
|
Maaly/bgc-gene
|
[
"license:apache-2.0",
"region:us"
] |
2022-08-25T20:28:12+00:00
|
{"license": "apache-2.0"}
|
2022-08-25T21:07:36+00:00
|
|
7e3aa1657134d5747ab9a1ab21afaf0666d811e9
|
This is a copy of the [Multi-XScience](https://huggingface.co/datasets/multi_x_science_sum) dataset, except the input source documents of its `test` split have been replaced by a __sparse__ retriever. The retrieval pipeline used:
- __query__: The `related_work` field of each example
- __corpus__: The union of all documents in the `train`, `validation` and `test` splits
- __retriever__: BM25 via [PyTerrier](https://pyterrier.readthedocs.io/en/latest/) with default settings
- __top-k strategy__: `"mean"`, i.e. the number of documents retrieved, `k`, is set as the mean number of documents seen across examples in this dataset, in this case `k==4` (a minimal sketch of a comparable pipeline is shown below)
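The sketch below shows a comparable BM25 pipeline in PyTerrier; it is illustrative only (an assumption about the setup, not the exact code used to build this dataset):
```python
import re

import pyterrier as pt
from datasets import load_dataset

if not pt.started():
    pt.init()

# Corpus: union of all splits of the original Multi-XScience dataset,
# using each paper's abstract as the document text.
mxs = load_dataset("multi_x_science_sum")
docs = (
    {"docno": f"{split}-{i}", "text": ex["abstract"]}
    for split in ("train", "validation", "test")
    for i, ex in enumerate(mxs[split])
)
index_ref = pt.IterDictIndexer("./mxs_index").index(docs)

bm25 = pt.BatchRetrieve(index_ref, wmodel="BM25", num_results=4)  # k == 4 ("mean")

# Query: the related_work field of an example (punctuation stripped, since
# Terrier's query parser rejects some special characters).
query = re.sub(r"[^\w\s]", " ", mxs["test"][0]["related_work"])
hits = bm25.search(query)
print(hits[["docno", "score"]])
```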
Retrieval results on the `train` set:
| Recall@100 | Rprec | Precision@k | Recall@k |
| ----------- | ----------- | ----------- | ----------- |
| 0.5482 | 0.2243 | 0.1578 | 0.2689 |
Retrieval results on the `validation` set:
| Recall@100 | Rprec | Precision@k | Recall@k |
| ----------- | ----------- | ----------- | ----------- |
| 0.5476 | 0.2209 | 0.1592 | 0.2650 |
Retrieval results on the `test` set:
| Recall@100 | Rprec | Precision@k | Recall@k |
| ----------- | ----------- | ----------- | ----------- |
| 0.548 | 0.2272 | 0.1611 | 0.2704 |
|
allenai/multixscience_sparse_mean
|
[
"task_categories:summarization",
"annotations_creators:found",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"source_datasets:original",
"language:en",
"license:unknown",
"region:us"
] |
2022-08-25T21:58:26+00:00
|
{"annotations_creators": ["found"], "language_creators": ["found"], "language": ["en"], "license": ["unknown"], "multilinguality": ["monolingual"], "size_categories": ["10K<n<100K"], "source_datasets": ["original"], "task_categories": ["summarization"], "paperswithcode_id": "multi-xscience", "pretty_name": "Multi-XScience"}
|
2022-11-24T16:48:30+00:00
|
59efc38ee73602367aa6f642820990b0175cb90f
|
This is a copy of the [Multi-XScience](https://huggingface.co/datasets/multi_x_science_sum) dataset, except the input source documents of its `test` split have been replaced by a __sparse__ retriever. The retrieval pipeline used:
- __query__: The `related_work` field of each example
- __corpus__: The union of all documents in the `train`, `validation` and `test` splits
- __retriever__: BM25 via [PyTerrier](https://pyterrier.readthedocs.io/en/latest/) with default settings
- __top-k strategy__: `"max"`, i.e. the number of documents retrieved, `k`, is set as the maximum number of documents seen across examples in this dataset, in this case `k==20`
Retrieval results on the `train` set:
| Recall@100 | Rprec | Precision@k | Recall@k |
| ----------- | ----------- | ----------- | ----------- |
| 0.5482 | 0.2243 | 0.0547 | 0.4063 |
Retrieval results on the `validation` set:
| Recall@100 | Rprec | Precision@k | Recall@k |
| ----------- | ----------- | ----------- | ----------- |
| 0.5476 | 0.2209 | 0.0553 | 0.4026 |
Retrieval results on the `test` set:
| Recall@100 | Rprec | Precision@k | Recall@k |
| ----------- | ----------- | ----------- | ----------- |
| 0.5480 | 0.2272 | 0.055 | 0.4039 |
|
allenai/multixscience_sparse_max
|
[
"task_categories:summarization",
"annotations_creators:found",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"source_datasets:original",
"language:en",
"license:unknown",
"region:us"
] |
2022-08-25T22:00:00+00:00
|
{"annotations_creators": ["found"], "language_creators": ["found"], "language": ["en"], "license": ["unknown"], "multilinguality": ["monolingual"], "size_categories": ["10K<n<100K"], "source_datasets": ["original"], "task_categories": ["summarization"], "paperswithcode_id": "multi-xscience", "pretty_name": "Multi-XScience"}
|
2022-11-24T16:36:31+00:00
|
6b16a554b543b30d49252e1b64b736716a107cd3
|
# Dataset Card for [Dataset Name]
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:**
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
[More Information Needed]
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
Thanks to [@angelolab](https://github.com/angelolab) for adding this dataset.
|
angelolab/ark_example
|
[
"task_categories:image-segmentation",
"task_ids:instance-segmentation",
"annotations_creators:no-annotation",
"size_categories:n<1K",
"source_datasets:original",
"license:apache-2.0",
"MIBI",
"Multiplexed-Imaging",
"region:us"
] |
2022-08-25T22:15:17+00:00
|
{"annotations_creators": ["no-annotation"], "language_creators": [], "language": [], "license": ["apache-2.0"], "multilinguality": [], "size_categories": ["n<1K"], "source_datasets": ["original"], "task_categories": ["image-segmentation"], "task_ids": ["instance-segmentation"], "pretty_name": "An example dataset for analyzing multiplexed imaging data.", "tags": ["MIBI", "Multiplexed-Imaging"]}
|
2023-11-28T20:05:52+00:00
|
11ef172f3c13e60eaf30fcf319e3919c760785fb
|
iejMac/CLIP-WebVid
|
[
"region:us"
] |
2022-08-25T22:31:56+00:00
|
{"license": "mit"}
|
2022-10-04T08:10:24+00:00
|
|
f9ad319b1eb78b0af0b1c8f5dc951c3092d6ee9c
|
# Dataset Card for [Dataset Name]
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:**
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
[More Information Needed]
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
Thanks to [@github-username](https://github.com/<github-username>) for adding this dataset.
|
merkalo-ziri/qa_shreded
|
[
"task_categories:question-answering",
"annotations_creators:found",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:1K<n<10K",
"source_datasets:original",
"language:rus",
"license:other",
"region:us"
] |
2022-08-26T00:25:51+00:00
|
{"annotations_creators": ["found"], "language_creators": ["found"], "language": ["rus"], "license": ["other"], "multilinguality": ["monolingual"], "size_categories": ["1K<n<10K"], "source_datasets": ["original"], "task_categories": ["question-answering"], "task_ids": [], "pretty_name": "qa_main", "tags": []}
|
2022-08-26T00:27:18+00:00
|
1ac837cf3234412532906d405756f6918233ca1e
|
justahandsomeboy/recipedia_1
|
[
"license:mit",
"region:us"
] |
2022-08-26T03:22:13+00:00
|
{"license": "mit"}
|
2022-08-26T03:22:13+00:00
|
|
488d2a94c56bd52eb4f69cecdd868204886e418e
|
Zaid/tatoeba_mt
|
[
"license:other",
"region:us"
] |
2022-08-26T03:37:02+00:00
|
{"license": "other"}
|
2022-08-26T03:55:12+00:00
|
|
227e4266899d746172ebd46f90e26af2d370f799
|
# Gameplay Images
## Dataset Description
- **Homepage:** [kaggle](https://www.kaggle.com/datasets/aditmagotra/gameplay-images)
- **Download Size** 2.50 GiB
- **Generated Size** 1.68 GiB
- **Total Size** 4.19 GiB
A dataset from [kaggle](https://www.kaggle.com/datasets/aditmagotra/gameplay-images).
This is a dataset of gameplay images from 10 of the world's most famous video games:
- Among Us
- Apex Legends
- Fortnite
- Forza Horizon
- Free Fire
- Genshin Impact
- God of War
- Minecraft
- Roblox
- Terraria
There are 1,000 images per class, all sized `640 x 360` and in `.png` format.
This dataset was made by saving frames every few seconds from famous gameplay videos on YouTube.
※ This dataset was uploaded in January 2022. Game content released after that date is not included.
### License
CC-BY-4.0
## Dataset Structure
### Data Instances
```python
>>> from datasets import load_dataset
>>> dataset = load_dataset("Bingsu/Gameplay_Images")
DatasetDict({
train: Dataset({
features: ['image', 'label'],
num_rows: 10000
})
})
```
```python
>>> dataset["train"].features
{'image': Image(decode=True, id=None),
'label': ClassLabel(num_classes=10, names=['Among Us', 'Apex Legends', 'Fortnite', 'Forza Horizon', 'Free Fire', 'Genshin Impact', 'God of War', 'Minecraft', 'Roblox', 'Terraria'], id=None)}
```
### Data Size
download: 2.50 GiB<br>
generated: 1.68 GiB<br>
total: 4.19 GiB
### Data Fields
- image: `Image`
  - A `PIL.Image.Image` object containing the image, size 640x360.
- Note that when accessing the image column: `dataset[0]["image"]` the image file is automatically decoded. Decoding of a large number of image files might take a significant amount of time. Thus it is important to first query the sample index before the "image" column, i.e. `dataset[0]["image"]` should always be preferred over `dataset["image"][0]`.
- label: an int classification label.
Class Label Mappings:
```json
{
"Among Us": 0,
"Apex Legends": 1,
"Fortnite": 2,
"Forza Horizon": 3,
"Free Fire": 4,
"Genshin Impact": 5,
"God of War": 6,
"Minecraft": 7,
"Roblox": 8,
"Terraria": 9
}
```
```python
>>> dataset["train"][0]
{'image': <PIL.PngImagePlugin.PngImageFile image mode=RGBA size=640x360>,
'label': 0}
```
### Data Splits
| | train |
| ---------- | -------- |
| # of data | 10000 |
### Note
#### train_test_split
```python
>>> ds_new = dataset["train"].train_test_split(0.2, seed=42, stratify_by_column="label")
>>> ds_new
DatasetDict({
train: Dataset({
features: ['image', 'label'],
num_rows: 8000
})
test: Dataset({
features: ['image', 'label'],
num_rows: 2000
})
})
```
|
Bingsu/Gameplay_Images
|
[
"task_categories:image-classification",
"multilinguality:monolingual",
"size_categories:1K<n<10K",
"language:en",
"license:cc-by-4.0",
"region:us"
] |
2022-08-26T03:42:10+00:00
|
{"language": ["en"], "license": ["cc-by-4.0"], "multilinguality": ["monolingual"], "size_categories": ["1K<n<10K"], "task_categories": ["image-classification"], "pretty_name": "Gameplay Images"}
|
2022-08-26T04:31:58+00:00
|
863991fde636390a0678f092906ca0bbabdd8566
|
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Summarization
* Model: facebook/bart-large-xsum
* Dataset: samsum
* Config: samsum
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@hgoyal194](https://huggingface.co/hgoyal194) for evaluating this model.
|
autoevaluate/autoeval-eval-project-samsum-61336320-1319050351
|
[
"autotrain",
"evaluation",
"region:us"
] |
2022-08-26T06:15:36+00:00
|
{"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["samsum"], "eval_info": {"task": "summarization", "model": "facebook/bart-large-xsum", "metrics": [], "dataset_name": "samsum", "dataset_config": "samsum", "dataset_split": "test", "col_mapping": {"text": "dialogue", "target": "summary"}}}
|
2022-08-26T06:18:03+00:00
|
804e9f8472494d582f9f6abd3c95ca92036513a5
|
## MEDIQA2021 MAS task
Source data is available [here](https://github.com/abachaa/MEDIQA2021/tree/main/Task2).
Description:
1. Data features
Multiple Answer Summarization with:
* key: key of each question
* question: the question text
* text: a merge of the text of all answers (for the train split, a merge of the article and section parts)
* sum\_abs: abstractive multiple-answer summary
* sum\_ext: extractive multiple-answer summary
2. train\_article / train\_sec
Same structure as train, but:
* train: text is a merge of the text of all answers (a merge of the article and section parts)
* train\_article: text is a merge of all sub-answers' articles
* train\_sec: text is a merge of all sub-answers' sections
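A minimal sketch of how the fields described above might be accessed, assuming the repository loads directly with `datasets.load_dataset` and exposes a `train` split with the listed columns (the split and column names are taken from this description, not verified against the loader):
```python
from datasets import load_dataset

# Hypothetical usage: split and field names follow the description above.
ds = load_dataset("nbtpj/bionlp2021MAS")

example = ds["train"][0]
print(example["key"])       # key of the question
print(example["question"])  # the question text
print(example["sum_abs"])   # abstractive multi-answer summary
print(example["sum_ext"])   # extractive multi-answer summary
```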
|
nbtpj/bionlp2021MAS
|
[
"license:afl-3.0",
"region:us"
] |
2022-08-26T07:52:54+00:00
|
{"license": "afl-3.0"}
|
2022-08-27T14:37:33+00:00
|
1aa5ac59eca5b4a5922cd999d83188ee40237277
|
# CLIP-BERT training data
This data was used to train the CLIP-BERT model first described in [this paper](https://arxiv.org/abs/2109.11321).
The dataset is based on text and images from MS COCO, SBU Captions, Visual Genome QA and Conceptual Captions.
The image features have been extracted using the CLIP model [openai/clip-vit-base-patch32](https://huggingface.co/openai/clip-vit-base-patch32) available on Huggingface.
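For reference, image features of this kind can be extracted with the Hugging Face `transformers` CLIP classes. This is a minimal sketch, not the authors' exact pipeline, and the example image URL is only illustrative:
```python
import requests
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

# Load the same CLIP checkpoint the card mentions.
model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

# Any RGB image works; this COCO image is just an example.
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)

inputs = processor(images=image, return_tensors="pt")
with torch.no_grad():
    features = model.get_image_features(**inputs)  # shape: (1, 512)
```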
|
Lo/clip-bert-data
|
[
"multilinguality:monolingual",
"language:en",
"license:cc-by-4.0",
"arxiv:2109.11321",
"region:us"
] |
2022-08-26T07:57:24+00:00
|
{"language": ["en"], "license": ["cc-by-4.0"], "multilinguality": ["monolingual"]}
|
2022-08-29T06:51:51+00:00
|
bc8abd0b59c26ab913464fb535e080c27dce15ff
|
The Wikipedia train data used to train BERT-base baselines and adapt vision-and-language models to text-only tasks in the paper "How to Adapt Pre-trained Vision-and-Language Models to a Text-only Input?".
The data has been created from the "20200501.en" revision of the [wikipedia dataset](https://huggingface.co/datasets/wikipedia) on Huggingface.
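A sketch of loading the same Wikipedia snapshot, assuming it is still exposed as a config of the `wikipedia` dataset; older versions of `datasets` shipped `"20200501.en"` as a preprocessed config, while newer versions may require a different revision name or an Apache Beam runner:
```python
from datasets import load_dataset

# Load the "20200501.en" snapshot the card refers to (availability depends
# on your `datasets` version).
wiki = load_dataset("wikipedia", "20200501.en", split="train")
print(wiki[0]["title"], wiki[0]["text"][:200])
```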
|
Lo/adapt-pre-trained-VL-models-to-text-data-Wikipedia
|
[
"multilinguality:monolingual",
"language:en",
"license:cc-by-sa-3.0",
"region:us"
] |
2022-08-26T08:06:59+00:00
|
{"language": ["en"], "license": ["cc-by-sa-3.0"], "multilinguality": ["monolingual"]}
|
2022-08-29T07:26:22+00:00
|
9006ce5811a9c44f8435dd489af9d18205f98a1d
|
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Multi-class Text Classification
* Model: autoevaluate/multi-class-classification
* Dataset: emotion
* Config: default
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@lewtun](https://huggingface.co/lewtun) for evaluating this model.
|
autoevaluate/autoeval-staging-eval-project-emotion-2d469b4f-13675887
|
[
"autotrain",
"evaluation",
"region:us"
] |
2022-08-26T08:18:16+00:00
|
{"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["emotion"], "eval_info": {"task": "multi_class_classification", "model": "autoevaluate/multi-class-classification", "metrics": [], "dataset_name": "emotion", "dataset_config": "default", "dataset_split": "test", "col_mapping": {"text": "text", "target": "label"}}}
|
2022-08-26T08:18:42+00:00
|
78443d7167a2047753c11a3c595f95eeb0503c0d
|
This repository contains archives (zip files) for ShapeNetSem, a subset of [ShapeNet](https://shapenet.org) richly annotated with physical attributes.
Please see [DATA.md](DATA.md) for details about the data.
If you use ShapeNet data, you agree to abide by the [ShapeNet terms of use](https://shapenet.org/terms). You are only allowed to redistribute the data to your research associates and colleagues provided that they first agree to be bound by these terms and conditions.
If you use this data, please cite the main ShapeNet technical report and the "Semantically-enriched 3D Models for Common-sense Knowledge" workshop paper.
```
@techreport{shapenet2015,
title = {{ShapeNet: An Information-Rich 3D Model Repository}},
author = {Chang, Angel X. and Funkhouser, Thomas and Guibas, Leonidas and Hanrahan, Pat and Huang, Qixing and Li, Zimo and Savarese, Silvio and Savva, Manolis and Song, Shuran and Su, Hao and Xiao, Jianxiong and Yi, Li and Yu, Fisher},
number = {arXiv:1512.03012 [cs.GR]},
institution = {Stanford University --- Princeton University --- Toyota Technological Institute at Chicago},
year = {2015}
}
@article{savva2015semgeo,
title={{Semantically-Enriched 3D Models for Common-sense Knowledge}},
author={Manolis Savva and Angel X. Chang and Pat Hanrahan},
journal = {CVPR 2015 Workshop on Functionality, Physics, Intentionality and Causality},
year = {2015}
}
```
For more information, please contact us at shapenetwebmaster@gmail.com and indicate ShapeNetSem in the title of your email.
|
ShapeNet/ShapeNetSem-archive
|
[
"language:en",
"license:other",
"3D shapes",
"region:us"
] |
2022-08-26T08:34:36+00:00
|
{"language": ["en"], "license": "other", "pretty_name": "ShapeNetSem", "tags": ["3D shapes"], "extra_gated_heading": "Acknowledge license to accept the repository", "extra_gated_prompt": "To request access to this ShapeNet repo, you will need to provide your **full name** (please provide both your first and last name), the name of your **advisor or the principal investigator (PI)** of your lab (in the PI/Advisor) fields, and the name of the **school or company** that you are affiliated with (the **Affiliation** field). After requesting access to this ShapeNet repo, you will be considered for access approval. \n\nAfter access approval, you (the \"Researcher\") receive permission to use the ShapeNet database (the \"Database\") at Princeton University and Stanford University. In exchange for being able to join the ShapeNet community and receive such permission, Researcher hereby agrees to the following terms and conditions: Researcher shall use the Database only for non-commercial research and educational purposes. Princeton University and Stanford University make no representations or warranties regarding the Database, including but not limited to warranties of non-infringement or fitness for a particular purpose. Researcher accepts full responsibility for his or her use of the Database and shall defend and indemnify Princeton University and Stanford University, including their employees, Trustees, officers and agents, against any and all claims arising from Researcher's use of the Database, including but not limited to Researcher's use of any copies of copyrighted 3D models that he or she may create from the Database. Researcher may provide research associates and colleagues with access to the Database provided that they first agree to be bound by these terms and conditions. Princeton University and Stanford University reserve the right to terminate Researcher's access to the Database at any time. If Researcher is employed by a for-profit, commercial entity, Researcher's employer shall also be bound by these terms and conditions, and Researcher hereby represents that he or she is fully authorized to enter into this agreement on behalf of such employer. The law of the State of New Jersey shall apply to all disputes under this agreement.\n\nFor access to the data, please fill in your **full name** (both first and last name), the name of your **advisor or principal investigator (PI)**, and the name of the **school or company** you are affliated with. Please actually fill out the fields (DO NOT put the word \"Advisor\" for PI/Advisor and the word \"School\" for \"Affiliation\", please specify the name of your advisor and the name of your school).", "extra_gated_fields": {"Name": "text", "PI/Advisor": "text", "Affiliation": "text", "Purpose": "text", "Country": "text", "I agree to use this dataset for non-commercial use ONLY": "checkbox"}}
|
2023-09-20T13:59:59+00:00
|
0efb24cbe6828a85771a28335c5f7b5626514d9b
|
This repository contains ShapeNetCore (v2), a subset of [ShapeNet](https://shapenet.org).
ShapeNetCore is a densely annotated subset of ShapeNet covering 55 common object categories with ~51,300 unique 3D models. Each model in ShapeNetCore is linked to an appropriate synset in [WordNet 3.0](https://wordnet.princeton.edu/).
Please see [DATA.md](DATA.md) for details about the data.
If you use ShapeNet data, you agree to abide by the [ShapeNet terms of use](https://shapenet.org/terms). You are only allowed to redistribute the data to your research associates and colleagues provided that they first agree to be bound by these terms and conditions.
If you use this data, please cite the main ShapeNet technical report.
```
@techreport{shapenet2015,
title = {{ShapeNet: An Information-Rich 3D Model Repository}},
author = {Chang, Angel X. and Funkhouser, Thomas and Guibas, Leonidas and Hanrahan, Pat and Huang, Qixing and Li, Zimo and Savarese, Silvio and Savva, Manolis and Song, Shuran and Su, Hao and Xiao, Jianxiong and Yi, Li and Yu, Fisher},
number = {arXiv:1512.03012 [cs.GR]},
institution = {Stanford University --- Princeton University --- Toyota Technological Institute at Chicago},
year = {2015}
}
```
For more information, please contact us at shapenetwebmaster@gmail.com and indicate ShapeNetCore v2 in the title of your email.
|
ShapeNet/ShapeNetCore
|
[
"language:en",
"license:other",
"3D shapes",
"region:us"
] |
2022-08-26T08:34:57+00:00
|
{"language": ["en"], "license": "other", "pretty_name": "ShapeNetCore", "tags": ["3D shapes"], "extra_gated_heading": "Acknowledge license to accept the repository", "extra_gated_prompt": "To request access to this ShapeNet repo, you will need to provide your **full name** (please provide both your first and last name), the name of your **advisor or the principal investigator (PI)** of your lab (in the PI/Advisor) fields, and the **school or company** that you are affiliated with (the **Affiliation** field). After requesting access to this ShapeNet repo, you will be considered for access approval. \n\nAfter access approval, you (the \"Researcher\") receive permission to use the ShapeNet database (the \"Database\") at Princeton University and Stanford University. In exchange for being able to join the ShapeNet community and receive such permission, Researcher hereby agrees to the following terms and conditions: Researcher shall use the Database only for non-commercial research and educational purposes. Princeton University and Stanford University make no representations or warranties regarding the Database, including but not limited to warranties of non-infringement or fitness for a particular purpose. Researcher accepts full responsibility for his or her use of the Database and shall defend and indemnify Princeton University and Stanford University, including their employees, Trustees, officers and agents, against any and all claims arising from Researcher's use of the Database, including but not limited to Researcher's use of any copies of copyrighted 3D models that he or she may create from the Database. Researcher may provide research associates and colleagues with access to the Database provided that they first agree to be bound by these terms and conditions. Princeton University and Stanford University reserve the right to terminate Researcher's access to the Database at any time. If Researcher is employed by a for-profit, commercial entity, Researcher's employer shall also be bound by these terms and conditions, and Researcher hereby represents that he or she is fully authorized to enter into this agreement on behalf of such employer. The law of the State of New Jersey shall apply to all disputes under this agreement.\n\nFor access to the data, please fill in your **full name** (both first and last name), the name of your **advisor or principal investigator (PI)**, and the name of the **school or company** you are affliated with. Please actually fill out the fields (DO NOT put the word \"Advisor\" for PI/Advisor and the word \"School\" for \"Affiliation\", please specify the name of your advisor and the name of your school).", "extra_gated_fields": {"Name": "text", "PI/Advisor": "text", "Affiliation": "text", "Purpose": "text", "Country": "text", "I agree to use this dataset for non-commercial use ONLY": "checkbox"}}
|
2023-09-20T14:05:48+00:00
|
161773d6bbc56e44575c2c3fe2eb367531843818
|
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Multi-class Text Classification
* Model: autoevaluate/multi-class-classification
* Dataset: emotion
* Config: default
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@lewtun](https://huggingface.co/lewtun) for evaluating this model.
|
autoevaluate/autoeval-staging-eval-project-emotion-ed9fef1a-13685888
|
[
"autotrain",
"evaluation",
"region:us"
] |
2022-08-26T08:37:48+00:00
|
{"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["emotion"], "eval_info": {"task": "multi_class_classification", "model": "autoevaluate/multi-class-classification", "metrics": [], "dataset_name": "emotion", "dataset_config": "default", "dataset_split": "test", "col_mapping": {"text": "text", "target": "label"}}}
|
2022-08-26T08:38:16+00:00
|
9d1adbcfd839d250e57ba00f5626c2a9bc2ba7b6
|
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Multi-class Text Classification
* Model: autoevaluate/multi-class-classification
* Dataset: emotion
* Config: default
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@lewtun](https://huggingface.co/lewtun) for evaluating this model.
|
autoevaluate/autoeval-staging-eval-project-emotion-a7ced70d-13715889
|
[
"autotrain",
"evaluation",
"region:us"
] |
2022-08-26T08:52:03+00:00
|
{"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["emotion"], "eval_info": {"task": "multi_class_classification", "model": "autoevaluate/multi-class-classification", "metrics": [], "dataset_name": "emotion", "dataset_config": "default", "dataset_split": "test", "col_mapping": {"text": "text", "target": "label"}}}
|
2022-08-26T08:52:29+00:00
|
a5841b873d4be24808b58c1273fde15f374aed41
|
NareshIT/javatraining
|
[
"license:other",
"region:us"
] |
2022-08-26T08:56:08+00:00
|
{"license": "other"}
|
2022-08-26T08:56:08+00:00
|
|
41b13853d318d8f2aac4db268055ab7c99d27d9f
|
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Multi-class Text Classification
* Model: autoevaluate/multi-class-classification
* Dataset: emotion
* Config: default
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@lewtun](https://huggingface.co/lewtun) for evaluating this model.
|
autoevaluate/autoeval-staging-eval-project-emotion-1d3a2bc7-13735890
|
[
"autotrain",
"evaluation",
"region:us"
] |
2022-08-26T09:08:22+00:00
|
{"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["emotion"], "eval_info": {"task": "multi_class_classification", "model": "autoevaluate/multi-class-classification", "metrics": [], "dataset_name": "emotion", "dataset_config": "default", "dataset_split": "test", "col_mapping": {"text": "text", "target": "label"}}}
|
2022-08-26T09:08:48+00:00
|
6ab186192e317f65fb9f28127827c3b6a5001f30
|
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Summarization
* Model: josmunpen/mt5-small-spanish-summarization
* Dataset: LeoCordoba/CC-NEWS-ES-titles
* Config: default
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@LeoCordoba](https://huggingface.co/LeoCordoba) for evaluating this model.
|
autoevaluate/autoeval-eval-project-LeoCordoba__CC-NEWS-ES-titles-0e1ed2c7-1320150403
|
[
"autotrain",
"evaluation",
"region:us"
] |
2022-08-26T10:35:30+00:00
|
{"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["LeoCordoba/CC-NEWS-ES-titles"], "eval_info": {"task": "summarization", "model": "josmunpen/mt5-small-spanish-summarization", "metrics": [], "dataset_name": "LeoCordoba/CC-NEWS-ES-titles", "dataset_config": "default", "dataset_split": "test", "col_mapping": {"text": "text", "target": "output_text"}}}
|
2022-08-26T10:42:03+00:00
|
f8135894035cb2881d24390353fbf528fe3dc906
|
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Summarization
* Model: LeoCordoba/mt5-small-cc-news-es-titles
* Dataset: LeoCordoba/CC-NEWS-ES-titles
* Config: default
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@LeoCordoba](https://huggingface.co/LeoCordoba) for evaluating this model.
|
autoevaluate/autoeval-eval-project-LeoCordoba__CC-NEWS-ES-titles-0e1ed2c7-1320150404
|
[
"autotrain",
"evaluation",
"region:us"
] |
2022-08-26T10:35:36+00:00
|
{"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["LeoCordoba/CC-NEWS-ES-titles"], "eval_info": {"task": "summarization", "model": "LeoCordoba/mt5-small-cc-news-es-titles", "metrics": [], "dataset_name": "LeoCordoba/CC-NEWS-ES-titles", "dataset_config": "default", "dataset_split": "test", "col_mapping": {"text": "text", "target": "output_text"}}}
|
2022-08-26T10:42:07+00:00
|
6d228ace568d2c1de21d663452f1c25938774286
|
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Question Answering
* Model: autoevaluate/extractive-question-answering
* Dataset: squad
* Config: plain_text
* Split: validation
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@lewtun](https://huggingface.co/lewtun) for evaluating this model.
|
autoevaluate/autoeval-staging-eval-project-squad-b541c518-13705892
|
[
"autotrain",
"evaluation",
"region:us"
] |
2022-08-26T12:01:23+00:00
|
{"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["squad"], "eval_info": {"task": "extractive_question_answering", "model": "autoevaluate/extractive-question-answering", "metrics": [], "dataset_name": "squad", "dataset_config": "plain_text", "dataset_split": "validation", "col_mapping": {"context": "context", "question": "question", "answers-text": "answers.text", "answers-answer_start": "answers.answer_start"}}}
|
2022-08-26T12:03:38+00:00
|
5261fdbd27f9caf2abd70fdb48963c829ef7c00e
|
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Question Answering
* Model: autoevaluate/extractive-question-answering
* Dataset: squad
* Config: plain_text
* Split: validation
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@lewtun](https://huggingface.co/lewtun) for evaluating this model.
|
autoevaluate/autoeval-staging-eval-project-squad-30a8951e-13725893
|
[
"autotrain",
"evaluation",
"region:us"
] |
2022-08-26T12:01:26+00:00
|
{"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["squad"], "eval_info": {"task": "extractive_question_answering", "model": "autoevaluate/extractive-question-answering", "metrics": [], "dataset_name": "squad", "dataset_config": "plain_text", "dataset_split": "validation", "col_mapping": {"context": "context", "question": "question", "answers-text": "answers.text", "answers-answer_start": "answers.answer_start"}}}
|
2022-08-26T12:03:44+00:00
|
9cc1c7b8d9200c633fb1fdb3870ee18a43bcbc26
|
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Question Answering
* Model: autoevaluate/extractive-question-answering
* Dataset: squad
* Config: plain_text
* Split: validation
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@lewtun](https://huggingface.co/lewtun) for evaluating this model.
|
autoevaluate/autoeval-staging-eval-project-squad-08ca88d1-13695891
|
[
"autotrain",
"evaluation",
"region:us"
] |
2022-08-26T12:01:44+00:00
|
{"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["squad"], "eval_info": {"task": "extractive_question_answering", "model": "autoevaluate/extractive-question-answering", "metrics": [], "dataset_name": "squad", "dataset_config": "plain_text", "dataset_split": "validation", "col_mapping": {"context": "context", "question": "question", "answers-text": "answers.text", "answers-answer_start": "answers.answer_start"}}}
|
2022-08-26T12:04:02+00:00
|
300aa70d0b8680b78f26487f34738c3ad25d20de
|
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Token Classification
* Model: autoevaluate/entity-extraction
* Dataset: conll2003
* Config: conll2003
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@lewtun](https://huggingface.co/lewtun) for evaluating this model.
|
autoevaluate/autoeval-staging-eval-project-conll2003-90a08c43-13745894
|
[
"autotrain",
"evaluation",
"region:us"
] |
2022-08-26T12:01:58+00:00
|
{"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["conll2003"], "eval_info": {"task": "entity_extraction", "model": "autoevaluate/entity-extraction", "metrics": [], "dataset_name": "conll2003", "dataset_config": "conll2003", "dataset_split": "test", "col_mapping": {"tokens": "tokens", "tags": "ner_tags"}}}
|
2022-08-26T12:03:03+00:00
|
e897197576f659a384e06cdf1586482fa76efc87
|
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Question Answering
* Model: autoevaluate/extractive-question-answering
* Dataset: squad
* Config: plain_text
* Split: validation
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@lewtun](https://huggingface.co/lewtun) for evaluating this model.
|
autoevaluate/autoeval-staging-eval-project-squad-884b60f3-13755895
|
[
"autotrain",
"evaluation",
"region:us"
] |
2022-08-26T12:13:37+00:00
|
{"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["squad"], "eval_info": {"task": "extractive_question_answering", "model": "autoevaluate/extractive-question-answering", "metrics": [], "dataset_name": "squad", "dataset_config": "plain_text", "dataset_split": "validation", "col_mapping": {"context": "context", "question": "question", "answers-text": "answers.text", "answers-answer_start": "answers.answer_start"}}}
|
2022-08-26T12:15:55+00:00
|
36a121215a184bceb6e183ddeb169beef7e8eab3
|
Shagun5/sandhi
|
[
"license:cc-by-nc-sa-4.0",
"region:us"
] |
2022-08-26T13:13:49+00:00
|
{"license": "cc-by-nc-sa-4.0"}
|
2022-08-26T13:13:49+00:00
|
|
2e4b287dda99722789449ed901e31a6b153d7739
|
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Binary Image Classification
* Model: abhishek/autotrain-dog-vs-food
* Dataset: sasha/dog-food
* Config: sasha--dog-food
* Split: train
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@ahmetgunduz](https://huggingface.co/ahmetgunduz) for evaluating this model.
|
autoevaluate/autoeval-staging-eval-project-sasha__dog-food-8a6c4abe-13775897
|
[
"autotrain",
"evaluation",
"region:us"
] |
2022-08-26T13:54:51+00:00
|
{"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["sasha/dog-food"], "eval_info": {"task": "image_binary_classification", "model": "abhishek/autotrain-dog-vs-food", "metrics": ["matthews_correlation"], "dataset_name": "sasha/dog-food", "dataset_config": "sasha--dog-food", "dataset_split": "train", "col_mapping": {"image": "image", "target": "label"}}}
|
2022-08-26T13:55:53+00:00
|
5cdc512c0c73bde43a077497e24fc006f149b377
|
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Binary Image Classification
* Model: sasha/dog-food-swin-tiny-patch4-window7-224
* Dataset: sasha/dog-food
* Config: sasha--dog-food
* Split: train
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@ahmetgunduz](https://huggingface.co/ahmetgunduz) for evaluating this model.
|
autoevaluate/autoeval-staging-eval-project-sasha__dog-food-8a6c4abe-13775898
|
[
"autotrain",
"evaluation",
"region:us"
] |
2022-08-26T13:54:55+00:00
|
{"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["sasha/dog-food"], "eval_info": {"task": "image_binary_classification", "model": "sasha/dog-food-swin-tiny-patch4-window7-224", "metrics": ["matthews_correlation"], "dataset_name": "sasha/dog-food", "dataset_config": "sasha--dog-food", "dataset_split": "train", "col_mapping": {"image": "image", "target": "label"}}}
|
2022-08-26T13:55:52+00:00
|
f3ce6b224624d2dbb8fc7ba79ddddc4eb102c89e
|
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Binary Image Classification
* Model: sasha/dog-food-convnext-tiny-224
* Dataset: sasha/dog-food
* Config: sasha--dog-food
* Split: train
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@ahmetgunduz](https://huggingface.co/ahmetgunduz) for evaluating this model.
|
autoevaluate/autoeval-staging-eval-project-sasha__dog-food-8a6c4abe-13775899
|
[
"autotrain",
"evaluation",
"region:us"
] |
2022-08-26T13:55:02+00:00
|
{"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["sasha/dog-food"], "eval_info": {"task": "image_binary_classification", "model": "sasha/dog-food-convnext-tiny-224", "metrics": ["matthews_correlation"], "dataset_name": "sasha/dog-food", "dataset_config": "sasha--dog-food", "dataset_split": "train", "col_mapping": {"image": "image", "target": "label"}}}
|
2022-08-26T13:55:56+00:00
|
5348159e41b3268f6acbd0fb8f548e2fcaa81dca
|
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Binary Image Classification
* Model: sasha/dog-food-vit-base-patch16-224-in21k
* Dataset: sasha/dog-food
* Config: sasha--dog-food
* Split: train
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@ahmetgunduz](https://huggingface.co/ahmetgunduz) for evaluating this model.
|
autoevaluate/autoeval-staging-eval-project-sasha__dog-food-8a6c4abe-13775900
|
[
"autotrain",
"evaluation",
"region:us"
] |
2022-08-26T13:55:08+00:00
|
{"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["sasha/dog-food"], "eval_info": {"task": "image_binary_classification", "model": "sasha/dog-food-vit-base-patch16-224-in21k", "metrics": ["matthews_correlation"], "dataset_name": "sasha/dog-food", "dataset_config": "sasha--dog-food", "dataset_split": "train", "col_mapping": {"image": "image", "target": "label"}}}
|
2022-08-26T13:56:07+00:00
|
113d1a02c1000ed7d2fc83ea05b793aedf45ed04
|
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Multi-class Text Classification
* Model: Abdelrahman-Rezk/distilbert-base-uncased-finetuned-emotion
* Dataset: emotion
* Config: default
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@ahmetgunduz](https://huggingface.co/ahmetgunduz) for evaluating this model.
|
autoevaluate/autoeval-staging-eval-project-emotion-8f618256-13785901
|
[
"autotrain",
"evaluation",
"region:us"
] |
2022-08-26T13:55:13+00:00
|
{"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["emotion"], "eval_info": {"task": "multi_class_classification", "model": "Abdelrahman-Rezk/distilbert-base-uncased-finetuned-emotion", "metrics": ["matthews_correlation"], "dataset_name": "emotion", "dataset_config": "default", "dataset_split": "test", "col_mapping": {"text": "text", "target": "label"}}}
|
2022-08-26T13:55:39+00:00
|
7b656d3d66a90c5f20d5c39934ffdc4a7fca1b66
|
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Multi-class Text Classification
* Model: Ahmed007/distilbert-base-uncased-finetuned-emotion
* Dataset: emotion
* Config: default
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@ahmetgunduz](https://huggingface.co/ahmetgunduz) for evaluating this model.
|
autoevaluate/autoeval-staging-eval-project-emotion-8f618256-13785902
|
[
"autotrain",
"evaluation",
"region:us"
] |
2022-08-26T13:55:19+00:00
|
{"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["emotion"], "eval_info": {"task": "multi_class_classification", "model": "Ahmed007/distilbert-base-uncased-finetuned-emotion", "metrics": ["matthews_correlation"], "dataset_name": "emotion", "dataset_config": "default", "dataset_split": "test", "col_mapping": {"text": "text", "target": "label"}}}
|
2022-08-26T13:55:44+00:00
|
f806a9562420f08f3ac7be388014a057449722f5
|
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Multi-class Text Classification
* Model: tbasic5/distilbert-base-uncased-finetuned-emotion
* Dataset: emotion
* Config: default
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@lewtun](https://huggingface.co/lewtun) for evaluating this model.
|
autoevaluate/autoeval-staging-eval-project-emotion-04ae905d-13795904
|
[
"autotrain",
"evaluation",
"region:us"
] |
2022-08-26T14:05:11+00:00
|
{"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["emotion"], "eval_info": {"task": "multi_class_classification", "model": "tbasic5/distilbert-base-uncased-finetuned-emotion", "metrics": [], "dataset_name": "emotion", "dataset_config": "default", "dataset_split": "test", "col_mapping": {"text": "text", "target": "label"}}}
|
2022-08-26T14:05:37+00:00
|
d9e7c98518e605a1caf45c3391939d2416aa0616
|
asaxena1990/dummyset2
|
[
"license:cc-by-nc-sa-4.0",
"region:us"
] |
2022-08-26T14:10:41+00:00
|
{"license": "cc-by-nc-sa-4.0"}
|
2022-08-26T14:12:01+00:00
|
|
aacf079fc5d248f979e4a1c7dedf1fcdc07a2b69
|
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Question Answering
* Model: 123tarunanand/roberta-base-finetuned
* Dataset: squad_v2
* Config: squad_v2
* Split: validation
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@lewtun](https://huggingface.co/lewtun) for evaluating this model.
|
autoevaluate/autoeval-staging-eval-project-squad_v2-bddd30a5-13805905
|
[
"autotrain",
"evaluation",
"region:us"
] |
2022-08-26T14:24:24+00:00
|
{"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["squad_v2"], "eval_info": {"task": "extractive_question_answering", "model": "123tarunanand/roberta-base-finetuned", "metrics": [], "dataset_name": "squad_v2", "dataset_config": "squad_v2", "dataset_split": "validation", "col_mapping": {"context": "context", "question": "question", "answers-text": "answers.text", "answers-answer_start": "answers.answer_start"}}}
|
2022-08-26T14:27:25+00:00
|
86cb54e837d8bd67b8432be7b4a7a4e73f64535f
|
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Natural Language Inference
* Model: autoevaluate/glue-mrpc
* Dataset: glue
* Config: mrpc
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@lewtun](https://huggingface.co/lewtun) for evaluating this model.
|
autoevaluate/autoeval-staging-eval-project-glue-fa8727be-13825907
|
[
"autotrain",
"evaluation",
"region:us"
] |
2022-08-26T15:43:01+00:00
|
{"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["glue"], "eval_info": {"task": "natural_language_inference", "model": "autoevaluate/glue-mrpc", "metrics": [], "dataset_name": "glue", "dataset_config": "mrpc", "dataset_split": "test", "col_mapping": {"text1": "sentence1", "text2": "sentence2", "target": "label"}}}
|
2022-08-26T15:43:30+00:00
|
5eb65ec3e766cf83f00e4bd20d7f214dfee652da
|
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: autoevaluate/zero-shot-classification
* Dataset: autoevaluate/zero-shot-classification-sample
* Config: autoevaluate--zero-shot-classification-sample
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@mathemakitten](https://huggingface.co/mathemakitten) for evaluating this model.
|
autoevaluate/autoeval-staging-eval-project-autoevaluate__zero-shot-classification-sample-c8bb9099-11
|
[
"autotrain",
"evaluation",
"region:us"
] |
2022-08-26T18:53:30+00:00
|
{"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["autoevaluate/zero-shot-classification-sample"], "eval_info": {"task": "zero_shot_classification", "model": "autoevaluate/zero-shot-classification", "metrics": [], "dataset_name": "autoevaluate/zero-shot-classification-sample", "dataset_config": "autoevaluate--zero-shot-classification-sample", "dataset_split": "test", "col_mapping": {"text": "text", "classes": "classes", "target": "target"}}}
|
2022-08-26T18:54:42+00:00
|
322604b436887a56f8cbcdd4ed3ecf2e60a2a488
|
# Dataset Card for "ArabicNLPDataset"
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Dataset Preprocessing](#dataset-preprocessing)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [https://github.com/BihterDass/ArabicTextClassificationDataset](https://github.com/BihterDass/ArabicTextClassificationDataset)
- **Repository:** [https://github.com/BihterDass/ArabicTextClassificationDataset](https://github.com/BihterDass/ArabicTextClassificationDataset)
- **Size of downloaded dataset files:** 23.5 MB
- **Size of the generated dataset:** 23.5 MB
### Dataset Summary
The dataset was compiled from user comments on e-commerce sites. It consists of 80,000 training, 10,000 validation, and 10,000 test examples. The data were classified into 3 classes: positive (pos), negative (neg), and natural (nor). The data is available on GitHub.
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
#### arabic-dataset-v1
- **Size of downloaded dataset files:** 23.5 MB
- **Size of the generated dataset:** 23.5 MB
### Data Fields
The data fields are the same among all splits.
#### arabic-dataset-v1
- `text`: a `string` feature.
- `label`: a classification label, with possible values including `positive` (2), `natural` (1), `negative` (0).
### Data Splits
| |train |validation|test |
|----|--------:|---------:|---------:|
|Data| 80000 | 10000 | 10000 |
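A minimal sketch of loading the dataset and inspecting the splits, assuming the repo loads directly with `datasets.load_dataset` and the label ids follow the mapping given under Data Fields:
```python
from datasets import load_dataset

# Hypothetical usage: split names follow the table above.
ds = load_dataset("BDas/ArabicNLPDataset")
for split in ("train", "validation", "test"):
    print(split, ds[split].num_rows)

sample = ds["train"][0]
print(sample["text"], sample["label"])  # label: 0=negative, 1=natural, 2=positive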
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
Thanks to [@PnrSvc](https://github.com/PnrSvc) for adding this dataset.
|
BDas/ArabicNLPDataset
|
[
"task_categories:text-classification",
"task_ids:multi-class-classification",
"task_ids:multi-label-classification",
"annotations_creators:expert-generated",
"language_creators:expert-generated",
"multilinguality:monolingual",
"size_categories:100K<n<1M",
"source_datasets:original",
"language:ar",
"license:other",
"region:us"
] |
2022-08-26T20:33:24+00:00
|
{"annotations_creators": ["expert-generated"], "language_creators": ["expert-generated"], "language": ["ar"], "license": ["other"], "multilinguality": ["monolingual"], "size_categories": ["100K<n<1M"], "source_datasets": ["original"], "task_categories": ["text-classification"], "task_ids": ["multi-class-classification", "multi-label-classification"], "pretty_name": "ArabicNLPDataset"}
|
2022-09-26T17:52:01+00:00
|
5cd0772a7dcaeb16cf7ddf6fc845cc35cf5428a9
|
This is a copy of the [MS^2](https://huggingface.co/datasets/allenai/mslr2022) dataset, except the input source documents of its `validation` split have been replaced by a __sparse__ retriever. The retrieval pipeline used:
- __query__: The `background` field of each example
- __corpus__: The union of all documents in the `train`, `validation` and `test` splits. A document is the concatenation of the `title` and `abstract`.
- __retriever__: BM25 via [PyTerrier](https://pyterrier.readthedocs.io/en/latest/) with default settings
- __top-k strategy__: `"max"`, i.e. the number of documents retrieved, `k`, is set as the maximum number of documents seen across examples in this dataset, in this case `k==25`
Retrieval results on the `train` set:
| Recall@100 | Rprec | Precision@k | Recall@k |
| ----------- | ----------- | ----------- | ----------- |
| 0.4333 | 0.2163 | 0.1746 | 0.2636 |
Retrieval results on the `validation` set:
| Recall@100 | Rprec | Precision@k | Recall@k |
| ----------- | ----------- | ----------- | ----------- |
| 0.378 | 0.1827 | 0.1559 | 0.2188 |
Retrieval results on the `test` set:
| Recall@100 | Rprec | Precision@k | Recall@k |
| ----------- | ----------- | ----------- | ----------- |
| 0.3928 | 0.1898 | 0.1672 | 0.2208 |
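A rough sketch of the kind of BM25 pipeline described above, using PyTerrier with default settings; the corpus documents and query string are placeholders, not data from this dataset:
```python
import pyterrier as pt

if not pt.started():
    pt.init()

# Corpus: each document is the concatenation of title and abstract.
docs = [
    {"docno": "d1", "text": "Title one. Abstract one ..."},
    {"docno": "d2", "text": "Title two. Abstract two ..."},
]

# Build an index and a BM25 retriever with default settings.
indexref = pt.IterDictIndexer("./ms2_index").index(iter(docs))
bm25 = pt.BatchRetrieve(indexref, wmodel="BM25")

# Query with an example's `background` field; keep the top k=25 hits.
hits = bm25.search("background text of one review").head(25)
print(hits[["docno", "score"]])
```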
|
allenai/ms2_sparse_max
|
[
"task_categories:summarization",
"task_categories:text2text-generation",
"annotations_creators:expert-generated",
"language_creators:expert-generated",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"source_datasets:extended|other-MS^2",
"source_datasets:extended|other-Cochrane",
"language:en",
"license:apache-2.0",
"region:us"
] |
2022-08-26T20:40:42+00:00
|
{"annotations_creators": ["expert-generated"], "language_creators": ["expert-generated"], "language": ["en"], "license": ["apache-2.0"], "multilinguality": ["monolingual"], "size_categories": ["10K<n<100K"], "source_datasets": ["extended|other-MS^2", "extended|other-Cochrane"], "task_categories": ["summarization", "text2text-generation"], "paperswithcode_id": "multi-document-summarization", "pretty_name": "MSLR Shared Task"}
|
2022-11-24T16:27:49+00:00
|
23755f1da3b2378649c7259cdb111bf6985dcbf4
|
This is a copy of the [Multi-News](https://huggingface.co/datasets/multi_news) dataset, except the input source documents of its `test` split have been replaced by a __sparse__ retriever. The retrieval pipeline used:
- __query__: The `summary` field of each example
- __corpus__: The union of all documents in the `train`, `validation` and `test` splits
- __retriever__: BM25 via [PyTerrier](https://pyterrier.readthedocs.io/en/latest/) with default settings
- __top-k strategy__: `"max"`, i.e. the number of documents retrieved, `k`, is set as the maximum number of documents seen across examples in this dataset, in this case `k==10`
Retrieval results on the `train` set:
| Recall@100 | Rprec | Precision@k | Recall@k |
| ----------- | ----------- | ----------- | ----------- |
| 0.8793 | 0.7460 | 0.2213 | 0.8264 |
Retrieval results on the `validation` set:
| Recall@100 | Rprec | Precision@k | Recall@k |
| ----------- | ----------- | ----------- | ----------- |
| 0.8748 | 0.7453 | 0.2173 | 0.8232 |
Retrieval results on the `test` set:
| Recall@100 | Rprec | Precision@k | Recall@k |
| ----------- | ----------- | ----------- | ----------- |
| 0.8775 | 0.7480 | 0.2187 | 0.8250 |
|
allenai/multinews_sparse_max
|
[
"task_categories:summarization",
"task_ids:news-articles-summarization",
"annotations_creators:expert-generated",
"language_creators:expert-generated",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"source_datasets:original",
"language:en",
"license:other",
"region:us"
] |
2022-08-26T20:41:47+00:00
|
{"annotations_creators": ["expert-generated"], "language_creators": ["expert-generated"], "language": ["en"], "license": ["other"], "multilinguality": ["monolingual"], "size_categories": ["10K<n<100K"], "source_datasets": ["original"], "task_categories": ["summarization"], "task_ids": ["news-articles-summarization"], "paperswithcode_id": "multi-news", "pretty_name": "Multi-News", "train-eval-index": [{"config": "default", "task": "summarization", "task_id": "summarization", "splits": {"train_split": "train", "eval_split": "test"}, "col_mapping": {"document": "text", "summary": "target"}, "metrics": [{"type": "rouge", "name": "Rouge"}]}]}
|
2022-11-24T21:34:53+00:00
|
ff35b25f752f55aa21076b843b81eceaf7720700
|
This is a copy of the [MS^2](https://huggingface.co/datasets/allenai/mslr2022) dataset, except the input source documents of its `validation` split have been replaced by a __sparse__ retriever. The retrieval pipeline used:
- __query__: The `background` field of each example
- __corpus__: The union of all documents in the `train`, `validation` and `test` splits. A document is the concatenation of the `title` and `abstract`.
- __retriever__: BM25 via [PyTerrier](https://pyterrier.readthedocs.io/en/latest/) with default settings
- __top-k strategy__: `"mean"`, i.e. the number of documents retrieved, `k`, is set as the mean number of documents seen across examples in this dataset, in this case `k==17`
Retrieval results on the `train` set:
| Recall@100 | Rprec | Precision@k | Recall@k |
| ----------- | ----------- | ----------- | ----------- |
| 0.4333 | 0.2163 | 0.2051 | 0.2197 |
Retrieval results on the `validation` set:
| Recall@100 | Rprec | Precision@k | Recall@k |
| ----------- | ----------- | ----------- | ----------- |
| 0.3780 | 0.1827 | 0.1815 | 0.1792 |
Retrieval results on the `test` set:
| Recall@100 | Rprec | Precision@k | Recall@k |
| ----------- | ----------- | ----------- | ----------- |
| 0.3928 | 0.1898 | 0.1951 | 0.1820 |
|
allenai/ms2_sparse_mean
|
[
"task_categories:summarization",
"task_categories:text2text-generation",
"annotations_creators:expert-generated",
"language_creators:expert-generated",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"source_datasets:extended|other-MS^2",
"source_datasets:extended|other-Cochrane",
"language:en",
"license:apache-2.0",
"region:us"
] |
2022-08-26T20:41:58+00:00
|
{"annotations_creators": ["expert-generated"], "language_creators": ["expert-generated"], "language": ["en"], "license": ["apache-2.0"], "multilinguality": ["monolingual"], "size_categories": ["10K<n<100K"], "source_datasets": ["extended|other-MS^2", "extended|other-Cochrane"], "task_categories": ["summarization", "text2text-generation"], "paperswithcode_id": "multi-document-summarization", "pretty_name": "MSLR Shared Task"}
|
2022-11-24T16:29:28+00:00
|