| id | number | title | body | state | html_url | created_at | updated_at | closed_at | user | labels | is_pull_request | comments |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
609,064,987 | 24 |
Add checksums
|
### Checksums files
They are stored next to the dataset script in `urls_checksums/checksums.txt`.
They are used to check the integrity of the datasets' downloaded files.
I kept the same format as tensorflow-datasets.
There is one checksums file for all configs.
### Load a dataset
When you do `load("squad")`, it will also download the checksums file and put it next to the script in `nlp/datasets/hash/urls_checksums/checksums.txt`.
It also verifies that the downloaded files' checksums match the expected ones.
You can ignore checksum tests with `load("squad", ignore_checksums=True)` (under the hood it just adds `ignore_checksums=True` in the `DownloadConfig`).
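A minimal usage sketch of the two calls described above (assuming the `load` signature shown here):
```python
import nlp

# Verifies the downloaded files against urls_checksums/checksums.txt (default behaviour)
dataset = nlp.load("squad")

# Skips the verification; under the hood this just sets ignore_checksums=True in the DownloadConfig
dataset = nlp.load("squad", ignore_checksums=True)
```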
### Test a dataset
There is a new command `nlp-cli test squad` that runs `download_and_prepare` to see if it runs ok, and that verifies that all the checksums match. Allowed arguments are `--name`, `--all_configs`, `--ignore_checksums` and `--register_checksums`.
### Register checksums
1. If the dataset has external dataset files
The command `nlp-cli test squad --register_checksums --all_configs` runs `download_and_prepare` on all configs to see if it runs ok, and it creates the checksums file.
You can also register one config at a time using `--name` instead; the checksums file will be completed and not overwritten.
If the script is a local script, the checksums file is moved to `urls_checksums/checksums.txt` next to the local script, to enable the user to upload both the script and the checksums file afterwards with `nlp-cli upload squad`.
2. If the dataset files are all inside the directory of the dataset script
The user can directly do `nlp-cli upload squad --register_checksums`, as there is no need to download anything.
In this case, however, the whole dataset must be uploaded at once.
--
PS: it doesn't allow registering checksums for canonical datasets; the file has to be added manually on S3 for now (I guess?).
Also, I feel like we should make sure that this process doesn't constrain users too much when uploading their own datasets.
Let me know what you think :)
|
closed
|
https://github.com/huggingface/datasets/pull/24
| 2020-04-29T13:37:29 | 2020-04-30T19:52:50 | 2020-04-30T19:52:49 |
{
"login": "lhoestq",
"id": 42851186,
"type": "User"
}
|
[] | true |
[] |
608,508,706 | 23 |
Add metrics
|
This PR is a draft for adding metrics (sacrebleu and seqeval are added).
Use case examples:
`import nlp`
**sacrebleu:**
```python
refs = [['The dog bit the man.', 'It was not unexpected.', 'The man bit him first.'],
        ['The dog had bit the man.', 'No one was surprised.', 'The man had bitten the dog.']]
sys = ['The dog bit the man.', "It wasn't surprising.", 'The man had just bitten him.']
sacrebleu = nlp.load_metrics('sacrebleu')
print(sacrebleu.score)
```
**seqeval:**
```python
y_true = [['O', 'O', 'O', 'B-MISC', 'I-MISC', 'I-MISC', 'O'], ['B-PER', 'I-PER', 'O']]
y_pred = [['O', 'O', 'B-MISC', 'I-MISC', 'I-MISC', 'I-MISC', 'O'], ['B-PER', 'I-PER', 'O']]
seqeval = nlp.load_metrics('seqeval')
print(seqeval.accuracy_score(y_true, y_pred))
print(seqeval.f1_score(y_true, y_pred))
```
_examples are taken from the corresponding web page_
Your comments and suggestions are more than welcome.
|
closed
|
https://github.com/huggingface/datasets/pull/23
| 2020-04-28T18:02:05 | 2022-10-04T09:31:56 | 2020-05-11T08:19:38 |
{
"login": "mariamabarham",
"id": 38249783,
"type": "User"
}
|
[] | true |
[] |
608,298,586 | 22 |
adding bleu score code
|
This PR adds the BLEU score metric to the lib. It can be tested by running the following code:
```python
from nlp.metrics import bleu

hyp1 = "It is a guide to action which ensures that the military always obeys the commands of the party"
ref1a = "It is a guide to action that ensures that the military forces always being under the commands of the party "
ref1b = "It is the guiding principle which guarantees the military force always being under the command of the Party"
ref1c = "It is the practical guide for the army always to heed the directions of the party"

list_of_references = [[ref1a, ref1b, ref1c]]
hypotheses = [hyp1]
score = bleu.bleu_score(list_of_references, hypotheses, 4, smooth=True)
print(score)
```
|
closed
|
https://github.com/huggingface/datasets/pull/22
| 2020-04-28T13:00:50 | 2020-04-28T17:48:20 | 2020-04-28T17:48:08 |
{
"login": "mariamabarham",
"id": 38249783,
"type": "User"
}
|
[] | true |
[] |
607,914,185 | 21 |
Cleanup Features - Updating convert command - Fix Download manager
|
This PR makes a number of changes:
# Updating `Features`
Features are a complex mechanism provided in `tfds` to be able to modify a dataset on-the-fly when serializing to disk and when loading from disk.
We don't really need this because (1) it hides too much from the user and (2) our datatype can be directly mapped to Arrow tables on drive so we usually don't need to change the format before/after serialization.
This PR extracts and refactors these features in a single `features.py` file. It still keeps a number of feature classes for easy compatibility with tfds, namely the `Sequence`, `Tensor`, `ClassLabel` and `Translation` features.
Some more complex features involving on-the-fly pre-processing during serialization are kept:
- `ClassLabel`, which is able to convert from label strings to integers (see the quick sketch below),
- `Translation`, which does some checks on the languages.
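As a quick illustration of the `ClassLabel` behaviour mentioned above, here is a hedged sketch (the label names are made up, and the `nlp.features.ClassLabel` path with the `str2int`/`int2str` helpers is assumed from the tfds-compatible API rather than taken from this PR):
```python
import nlp

# Hypothetical label names; ClassLabel maps label strings to integer ids and back.
label_feature = nlp.features.ClassLabel(names=["negative", "positive"])
print(label_feature.str2int("positive"))  # expected: 1
print(label_feature.int2str(0))           # expected: "negative"
```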
# Updating the `convert` command
We do a few updates here
- following the simplification of the `features` (cf. above), conversions are updated
- we also make it simpler to convert a single file
- some code needs to be fixed manually after conversion (e.g. to remove some encoding processing in former tfds `Text` features). We highlight this code with a "git merge conflict" style syntax for easy manual fixing.
# Fix download manager iterator
You kept me up quite late on Tuesday night with this `os.scandir` change @lhoestq ;-)
|
closed
|
https://github.com/huggingface/datasets/pull/21
| 2020-04-27T23:16:55 | 2020-05-01T09:29:47 | 2020-05-01T09:29:46 |
{
"login": "thomwolf",
"id": 7353373,
"type": "User"
}
|
[] | true |
[] |
607,313,557 | 20 |
remove boto3 and promise dependencies
|
With the new download manager, we don't need `promise` anymore.
I also removed `boto3` as in [this pr](https://github.com/huggingface/transformers/pull/3968)
|
closed
|
https://github.com/huggingface/datasets/pull/20
| 2020-04-27T07:39:45 | 2020-04-27T16:04:17 | 2020-04-27T14:15:45 |
{
"login": "lhoestq",
"id": 42851186,
"type": "User"
}
|
[] | true |
[] |
606,400,645 | 19 |
Replace tf.constant for TF
|
Replace the simple `tf.constant` type of Tensor with `tf.ragged.constant`, which allows having examples of different sizes in a `tf.data.Dataset`.
Now the training works with TF. Here is the same example as the PT one in Colab:
```python
import tensorflow as tf
import nlp
from transformers import BertTokenizerFast, TFBertForQuestionAnswering

# Load our training dataset and tokenizer
train_dataset = nlp.load('squad', split="train[:1%]")
tokenizer = BertTokenizerFast.from_pretrained('bert-base-cased')

def get_correct_alignement(context, answer):
    start_idx = answer['answer_start'][0]
    text = answer['text'][0]
    end_idx = start_idx + len(text)
    if context[start_idx:end_idx] == text:
        return start_idx, end_idx       # When the gold label position is good
    elif context[start_idx-1:end_idx-1] == text:
        return start_idx-1, end_idx-1   # When the gold label is off by one character
    elif context[start_idx-2:end_idx-2] == text:
        return start_idx-2, end_idx-2   # When the gold label is off by two character
    else:
        raise ValueError()

# Tokenize our training dataset
def convert_to_features(example_batch):
    # Tokenize contexts and questions (as pairs of inputs)
    input_pairs = list(zip(example_batch['context'], example_batch['question']))
    encodings = tokenizer.batch_encode_plus(input_pairs, pad_to_max_length=True)

    # Compute start and end tokens for labels using Transformers's fast tokenizers alignement methods.
    start_positions, end_positions = [], []
    for i, (context, answer) in enumerate(zip(example_batch['context'], example_batch['answers'])):
        start_idx, end_idx = get_correct_alignement(context, answer)
        start_positions.append([encodings.char_to_token(i, start_idx)])
        end_positions.append([encodings.char_to_token(i, end_idx-1)])

    if start_positions and end_positions:
        encodings.update({'start_positions': start_positions,
                          'end_positions': end_positions})
    return encodings

train_dataset = train_dataset.map(convert_to_features, batched=True)

columns = ['input_ids', 'token_type_ids', 'attention_mask', 'start_positions', 'end_positions']
train_dataset.set_format(type='tensorflow', columns=columns)

features = {x: train_dataset[x] for x in columns[:3]}
labels = {"output_1": train_dataset["start_positions"]}
labels["output_2"] = train_dataset["end_positions"]
tfdataset = tf.data.Dataset.from_tensor_slices((features, labels)).batch(8)

model = TFBertForQuestionAnswering.from_pretrained("bert-base-cased")
loss_fn = tf.keras.losses.SparseCategoricalCrossentropy(reduction=tf.keras.losses.Reduction.NONE, from_logits=True)
opt = tf.keras.optimizers.Adam(learning_rate=3e-5)
model.compile(optimizer=opt,
              loss={'output_1': loss_fn, 'output_2': loss_fn},
              loss_weights={'output_1': 1., 'output_2': 1.},
              metrics=['accuracy'])
model.fit(tfdataset, epochs=1, steps_per_epoch=3)
```
|
closed
|
https://github.com/huggingface/datasets/pull/19
| 2020-04-24T15:32:06 | 2020-04-29T09:27:08 | 2020-04-25T21:18:45 |
{
"login": "jplu",
"id": 959590,
"type": "User"
}
|
[] | true |
[] |
606,109,196 | 18 |
Updating caching mechanism - Allow dependency in dataset processing scripts - Fix style and quality in the repo
|
This PR has a lot of content (might be hard to review, sorry, in particular because I fixed the style in the repo at the same time).
# Style & quality:
You can now install the style and quality tools with `pip install -e .[quality]`. This will install black, the compatible version of isort, and flake8.
You can then clean the style and check the quality before merging your PR with:
```bash
make style
make quality
```
# Allow dependencies in dataset processing scripts
We can now allow some level of imports in dataset processing scripts (in addition to PyPI imports).
Namely, you can do the two following things:
Import from a relative path to a file in the same folder as the dataset processing script:
```python
import .c4_utils
```
Or import from a relative path to a file in a folder/archive/GitHub repo for which you provide a URL after the import statement with `# From: [URL]`:
```python
import .clicr.dataset_code.build_json_dataset # From: https://github.com/clips/clicr
```
In both these cases, after downloading the main dataset processing script, we will identify the location of these dependencies, download them and copy them in the dataset processing script folder.
Note that only direct import in the dataset processing script will be handled.
We don't recursively explore the additional import to download further files.
Also, when we download from an additional directory (in the second case above), we recursively add `__init__.py` to all the sub-folders so you can import from them.
This part is still untested for now. If you've seen datasets which require external utilities, tell me and I can test it.
# Update the cache to have a better local structure
The local structure in the `src/datasets` folder is now: `src/datasets/DATASET_NAME/DATASET_HASH/*`
The hash is computed from the full code of the dataset processing script as well as all the local and downloaded dependencies as mentioned above. This way if you change some code in a utility related to your dataset, a new hash should be computed.
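As a rough sketch of that hashing idea (the function name, hash algorithm and exact inputs below are illustrative assumptions, not the PR's implementation):
```python
import hashlib
from pathlib import Path

def dataset_cache_hash(script_path, dependency_paths):
    # Hash the dataset processing script together with all of its local and
    # downloaded dependencies, so that any code change produces a new hash
    # (and therefore a new cache folder).
    h = hashlib.sha256()
    for path in [script_path, *sorted(dependency_paths)]:
        h.update(Path(path).read_bytes())
    return h.hexdigest()
```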
|
closed
|
https://github.com/huggingface/datasets/pull/18
| 2020-04-24T07:39:48 | 2020-04-29T15:27:28 | 2020-04-28T16:06:28 |
{
"login": "thomwolf",
"id": 7353373,
"type": "User"
}
|
[] | true |
[] |
605,753,027 | 17 |
Add Pandas as format type
|
As detailed in the title ^^
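A hedged usage sketch, reusing the `set_format(type=...)` call shown in the TF example above; the exact behaviour of the `pandas` output type is an assumption here, not taken from the PR:
```python
import nlp

# Expose selected columns as pandas objects (the 'pandas' type is what this PR adds).
dataset = nlp.load('squad', split="train[:1%]")
dataset.set_format(type='pandas', columns=['context', 'question'])
df = dataset[:10]  # assumed to come back as a pandas DataFrame under this format
print(df.head())
```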
|
closed
|
https://github.com/huggingface/datasets/pull/17
| 2020-04-23T18:20:14 | 2020-04-27T18:07:50 | 2020-04-27T18:07:48 |
{
"login": "jplu",
"id": 959590,
"type": "User"
}
|
[] | true |
[] |
605,661,462 | 16 |
create our own DownloadManager
|
I tried to create our own - and way simpler - download manager, by replacing all the complicated stuff with our own `cached_path` solution.
With this implementation, I tried `dataset = nlp.load('squad')` and it seems to work fine.
For the implementation, what I did exactly:
- I copied the old download manager
- I removed all the dependencies on the old `download` files
- I replaced all the download + extract calls by calls to `cached_path`
- I removed unused parameters (extract_dir, compute_stats) (maybe compute_stats could be re-added later if we want to compute stats...)
- I left some functions unimplemented for now. We will probably have to implement them because they are used by some dataset scripts (download_kaggle_data, iter_archive) or because we may need them at some point (download_checksums, _record_sizes_checksums)
Let me know if you think that this is going the right direction or if you have remarks.
Note: I didn't write any test yet as I wanted to read your remarks first
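For reference, a rough sketch of the kind of `cached_path`-based replacement described above (the class name, method name and the `extract_compressed_file` keyword are assumptions, not the exact PR code; the module path is taken from the tracebacks elsewhere in this list):
```python
from nlp.utils.file_utils import cached_path

class SimpleDownloadManager:
    """Illustrative only: download (and extract) URLs through cached_path."""

    def download_and_extract(self, url_or_urls):
        if isinstance(url_or_urls, dict):
            return {k: cached_path(u, extract_compressed_file=True) for k, u in url_or_urls.items()}
        if isinstance(url_or_urls, (list, tuple)):
            return [cached_path(u, extract_compressed_file=True) for u in url_or_urls]
        return cached_path(url_or_urls, extract_compressed_file=True)
```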
|
closed
|
https://github.com/huggingface/datasets/pull/16
| 2020-04-23T16:08:07 | 2021-05-05T18:25:24 | 2020-04-25T21:25:10 |
{
"login": "lhoestq",
"id": 42851186,
"type": "User"
}
|
[] | true |
[] |
604,906,708 | 15 |
[Tests] General Test Design for all dataset scripts
|
The general idea is similar to how testing is done in `transformers`. There is one general `test_dataset_common.py` file which has a `DatasetTesterMixin` class. This class implements all of the logic that can be used in a generic way for all dataset classes. The idea is to keep each individual dataset test file as minimal as possible.
In order to test whether a specific dataset class can download the data and generate the examples **without** downloading the actual data all the time, a MockDataLoaderManager class is used. It receives a `mock_folder_structure_fn` function from each individual dataset test file that creates "fake" data and returns the same folder structure that would have been created when using the real data downloader.
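A generic, hedged illustration of the mixin design described above (the class and test names other than `DatasetTesterMixin` are made up, not the actual test code from this PR):
```python
import unittest

class DatasetTesterMixin:
    dataset_builder_class = None  # each concrete test case sets this

    def test_builder_can_be_instantiated(self):
        # Generic logic shared by all dataset test cases lives in the mixin.
        builder = self.dataset_builder_class()
        self.assertIsNotNone(builder)


class DummySquadBuilder:
    """Stand-in for a real dataset builder class."""


class SquadDatasetTest(DatasetTesterMixin, unittest.TestCase):
    # The individual dataset test file stays minimal: it only picks the builder.
    dataset_builder_class = DummySquadBuilder


if __name__ == "__main__":
    unittest.main()
```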
|
closed
|
https://github.com/huggingface/datasets/pull/15
| 2020-04-22T16:46:01 | 2022-10-04T09:31:54 | 2020-04-27T14:48:02 |
{
"login": "patrickvonplaten",
"id": 23423619,
"type": "User"
}
|
[] | true |
[] |
604,761,315 | 14 |
[Download] Only create dir if not already exist
|
This was quite annoying to find out :D.
Some datasets save files in the same directory, so we should only create a new directory if it doesn't already exist.
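In Python this typically amounts to something like the following generic sketch (not the exact diff of this PR):
```python
import os

def create_download_dir(path):
    # Create the directory only if it does not already exist;
    # exist_ok avoids the error when several datasets share the same directory.
    os.makedirs(path, exist_ok=True)
    return path
```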
|
closed
|
https://github.com/huggingface/datasets/pull/14
| 2020-04-22T13:32:51 | 2022-10-04T09:31:50 | 2020-04-23T08:27:33 |
{
"login": "patrickvonplaten",
"id": 23423619,
"type": "User"
}
|
[] | true |
[] |
604,547,951 | 13 |
[Make style]
|
Added Makefile and applied make style to all.
`make style` runs the following code:
```
style:
	black --line-length 119 --target-version py35 src
	isort --recursive src
```
It's the same code that is run in `transformers`.
|
closed
|
https://github.com/huggingface/datasets/pull/13
| 2020-04-22T08:10:06 | 2024-11-20T13:42:58 | 2020-04-23T13:02:22 |
{
"login": "patrickvonplaten",
"id": 23423619,
"type": "User"
}
|
[] | true |
[] |
604,518,583 | 12 |
[Map Function] add assert statement if map function does not return dict or None
|
IMO, if the provided function is neither a side-effect-only function like a print statement (-> returns a value of type `None`) nor a function that updates the dataset (-> returns a value of type `dict`), then a `TypeError` should be raised.
Not sure whether you had cases in mind where the user should do something else @thomwolf , but I think a lot of silent errors can be avoided with this assert statement.
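A hedged sketch of the check described above (the helper name is made up; this is not the exact code added in the PR):
```python
def check_map_output(processed_batch):
    # Accept None (side-effect-only functions) or dict (dataset updates); reject anything else.
    if processed_batch is not None and not isinstance(processed_batch, dict):
        raise TypeError(
            "The mapped function should return a dict to update the dataset "
            "or None if it only has side effects, but it returned an object "
            "of type {}.".format(type(processed_batch))
        )

check_map_output(None)                 # ok: print-like function
check_map_output({"text": ["hello"]})  # ok: updates the dataset
```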
|
closed
|
https://github.com/huggingface/datasets/pull/12
| 2020-04-22T07:21:24 | 2022-10-04T09:31:53 | 2020-04-24T06:29:03 |
{
"login": "patrickvonplaten",
"id": 23423619,
"type": "User"
}
|
[] | true |
[] |
603,921,624 | 11 |
[Convert TFDS to HFDS] Extend script to also allow just converting a single file
|
Adds another argument to be able to convert only a single file
|
closed
|
https://github.com/huggingface/datasets/pull/11
| 2020-04-21T11:25:33 | 2022-10-04T09:31:46 | 2020-04-21T20:47:00 |
{
"login": "patrickvonplaten",
"id": 23423619,
"type": "User"
}
|
[] | true |
[] |
603,909,327 | 10 |
Name json file "squad.json" instead of "squad.py.json"
|
closed
|
https://github.com/huggingface/datasets/pull/10
| 2020-04-21T11:04:28 | 2022-10-04T09:31:44 | 2020-04-21T20:48:06 |
{
"login": "patrickvonplaten",
"id": 23423619,
"type": "User"
}
|
[] | true |
[] |
|
603,894,874 | 9 |
[Clean up] Datasets
|
Clean up `nlp/datasets` folder.
As I understand it, eventually the `nlp/datasets` folder shall not exist anymore at all.
The folder `nlp/datasets/nlp` is kept for the moment, but won't be needed in the future, since it will live on S3 (actually it already does) at `https://s3.console.aws.amazon.com/s3/buckets/datasets.huggingface.co/nlp/?region=us-east-1`, and the different dataset downloader scripts will be added to `nlp/src/nlp` when downloaded by the user.
The folder `nlp/datasets/checksums` is kept for now, but won't be needed anymore in the future.
The remaining folders/files are leftovers from tensorflow-datasets and are not needed. They can be looked up in the private tensorflow-datasets repo.
|
closed
|
https://github.com/huggingface/datasets/pull/9
| 2020-04-21T10:39:56 | 2022-10-04T09:31:42 | 2020-04-21T20:49:58 |
{
"login": "patrickvonplaten",
"id": 23423619,
"type": "User"
}
|
[] | true |
[] |
601,783,243 | 8 |
Fix issue 6: error when the citation is missing in the DatasetInfo
|
closed
|
https://github.com/huggingface/datasets/pull/8
| 2020-04-17T08:04:26 | 2020-04-29T09:27:11 | 2020-04-20T13:24:12 |
{
"login": "jplu",
"id": 959590,
"type": "User"
}
|
[] | true |
[] |
|
601,780,534 | 7 |
Fix issue 5: allow empty datasets
|
closed
|
https://github.com/huggingface/datasets/pull/7
| 2020-04-17T07:59:56 | 2020-04-29T09:27:13 | 2020-04-20T13:23:48 |
{
"login": "jplu",
"id": 959590,
"type": "User"
}
|
[] | true |
[] |
|
600,330,836 | 6 |
Error when citation is not given in the DatasetInfo
|
The following error is raised when the `citation` parameter is missing when we instantiate a `DatasetInfo`:
```
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/jplu/dev/jplu/datasets/src/nlp/info.py", line 338, in __repr__
citation_pprint = _indent('"""{}"""'.format(self.citation.strip()))
AttributeError: 'NoneType' object has no attribute 'strip'
```
I propose to do the following change in the `info.py` file. The method:
```python
def __repr__(self):
    splits_pprint = _indent("\n".join(["{"] + [
        " '{}': {},".format(k, split.num_examples)
        for k, split in sorted(self.splits.items())
    ] + ["}"]))
    features_pprint = _indent(repr(self.features))
    citation_pprint = _indent('"""{}"""'.format(self.citation.strip()))
    return INFO_STR.format(
        name=self.name,
        version=self.version,
        description=self.description,
        total_num_examples=self.splits.total_num_examples,
        features=features_pprint,
        splits=splits_pprint,
        citation=citation_pprint,
        homepage=self.homepage,
        supervised_keys=self.supervised_keys,
        # Proto add a \n that we strip.
        license=str(self.license).strip())
```
Becomes:
```python
def __repr__(self):
    splits_pprint = _indent("\n".join(["{"] + [
        " '{}': {},".format(k, split.num_examples)
        for k, split in sorted(self.splits.items())
    ] + ["}"]))
    features_pprint = _indent(repr(self.features))
    ## the strip is done only if the citation is given
    citation_pprint = self.citation
    if self.citation:
        citation_pprint = _indent('"""{}"""'.format(self.citation.strip()))
    return INFO_STR.format(
        name=self.name,
        version=self.version,
        description=self.description,
        total_num_examples=self.splits.total_num_examples,
        features=features_pprint,
        splits=splits_pprint,
        citation=citation_pprint,
        homepage=self.homepage,
        supervised_keys=self.supervised_keys,
        # Proto add a \n that we strip.
        license=str(self.license).strip())
```
And now it is ok. @thomwolf are you ok with this fix?
|
closed
|
https://github.com/huggingface/datasets/issues/6
| 2020-04-15T14:14:54 | 2020-04-29T09:23:22 | 2020-04-29T09:23:22 |
{
"login": "jplu",
"id": 959590,
"type": "User"
}
|
[] | false |
[] |
600,295,889 | 5 |
ValueError when a split is empty
|
When a split (either TEST, VALIDATION or TRAIN) is empty, I get the following error:
```
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/jplu/dev/jplu/datasets/src/nlp/load.py", line 295, in load
ds = dbuilder.as_dataset(**as_dataset_kwargs)
File "/home/jplu/dev/jplu/datasets/src/nlp/builder.py", line 587, in as_dataset
datasets = utils.map_nested(build_single_dataset, split, map_tuple=True)
File "/home/jplu/dev/jplu/datasets/src/nlp/utils/py_utils.py", line 158, in map_nested
for k, v in data_struct.items()
File "/home/jplu/dev/jplu/datasets/src/nlp/utils/py_utils.py", line 158, in <dictcomp>
for k, v in data_struct.items()
File "/home/jplu/dev/jplu/datasets/src/nlp/utils/py_utils.py", line 172, in map_nested
return function(data_struct)
File "/home/jplu/dev/jplu/datasets/src/nlp/builder.py", line 601, in _build_single_dataset
split=split,
File "/home/jplu/dev/jplu/datasets/src/nlp/builder.py", line 625, in _as_dataset
split_infos=self.info.splits.values(),
File "/home/jplu/dev/jplu/datasets/src/nlp/arrow_reader.py", line 200, in read
return py_utils.map_nested(_read_instruction_to_ds, instructions)
File "/home/jplu/dev/jplu/datasets/src/nlp/utils/py_utils.py", line 172, in map_nested
return function(data_struct)
File "/home/jplu/dev/jplu/datasets/src/nlp/arrow_reader.py", line 191, in _read_instruction_to_ds
file_instructions = make_file_instructions(name, split_infos, instruction)
File "/home/jplu/dev/jplu/datasets/src/nlp/arrow_reader.py", line 104, in make_file_instructions
absolute_instructions=absolute_instructions,
File "/home/jplu/dev/jplu/datasets/src/nlp/arrow_reader.py", line 122, in _make_file_instructions_from_absolutes
'Split empty. This might means that dataset hasn\'t been generated '
ValueError: Split empty. This might means that dataset hasn't been generated yet and info not restored from GCS, or that legacy dataset is used.
```
How to reproduce:
```python
import csv
import nlp


class Bbc(nlp.GeneratorBasedBuilder):
    VERSION = nlp.Version("1.0.0")

    def __init__(self, **config):
        self.train = config.pop("train", None)
        self.validation = config.pop("validation", None)
        super(Bbc, self).__init__(**config)

    def _info(self):
        return nlp.DatasetInfo(builder=self, description="bla", features=nlp.features.FeaturesDict({"id": nlp.int32, "text": nlp.string, "label": nlp.string}))

    def _split_generators(self, dl_manager):
        return [nlp.SplitGenerator(name=nlp.Split.TRAIN, gen_kwargs={"filepath": self.train}),
                nlp.SplitGenerator(name=nlp.Split.VALIDATION, gen_kwargs={"filepath": self.validation}),
                nlp.SplitGenerator(name=nlp.Split.TEST, gen_kwargs={"filepath": None})]

    def _generate_examples(self, filepath):
        if not filepath:
            return None, {}

        with open(filepath) as f:
            reader = csv.reader(f, delimiter=',', quotechar="\"")
            lines = list(reader)[1:]

            for idx, line in enumerate(lines):
                yield idx, {"id": idx, "text": line[1], "label": line[0]}
```
```python
import nlp
dataset = nlp.load("bbc", builder_kwargs={"train": "bbc/data/train.csv", "validation": "bbc/data/test.csv"})
```
|
closed
|
https://github.com/huggingface/datasets/issues/5
| 2020-04-15T13:25:13 | 2020-04-29T09:23:05 | 2020-04-29T09:23:05 |
{
"login": "jplu",
"id": 959590,
"type": "User"
}
|
[] | false |
[] |
600,185,417 | 4 |
[Feature] Keep the list of labels of a dataset as metadata
|
It would be useful to keep the list of the labels of a dataset as metadata. Either directly in the `DatasetInfo` or in the Arrow metadata.
|
closed
|
https://github.com/huggingface/datasets/issues/4
| 2020-04-15T10:17:10 | 2020-07-08T16:59:46 | 2020-05-04T06:11:57 |
{
"login": "jplu",
"id": 959590,
"type": "User"
}
|
[] | false |
[] |
600,180,050 | 3 |
[Feature] More dataset outputs
|
Add the following dataset outputs:
- Spark
- Pandas
|
closed
|
https://github.com/huggingface/datasets/issues/3
| 2020-04-15T10:08:14 | 2020-05-04T06:12:27 | 2020-05-04T06:12:27 |
{
"login": "jplu",
"id": 959590,
"type": "User"
}
|
[] | false |
[] |
599,767,671 | 2 |
Issue to read a local dataset
|
Hello,
As proposed by @thomwolf, I open an issue to explain what I'm trying to do without success. What I want to do is to create and load a local dataset; the script I have written is the following:
```python
import os
import csv
import nlp


class BbcConfig(nlp.BuilderConfig):
    def __init__(self, **kwargs):
        super(BbcConfig, self).__init__(**kwargs)


class Bbc(nlp.GeneratorBasedBuilder):
    _DIR = "./data"
    _DEV_FILE = "test.csv"
    _TRAINING_FILE = "train.csv"

    BUILDER_CONFIGS = [BbcConfig(name="bbc", version=nlp.Version("1.0.0"))]

    def _info(self):
        return nlp.DatasetInfo(builder=self, features=nlp.features.FeaturesDict({"id": nlp.string, "text": nlp.string, "label": nlp.string}))

    def _split_generators(self, dl_manager):
        files = {"train": os.path.join(self._DIR, self._TRAINING_FILE), "dev": os.path.join(self._DIR, self._DEV_FILE)}

        return [nlp.SplitGenerator(name=nlp.Split.TRAIN, gen_kwargs={"filepath": files["train"]}),
                nlp.SplitGenerator(name=nlp.Split.VALIDATION, gen_kwargs={"filepath": files["dev"]})]

    def _generate_examples(self, filepath):
        with open(filepath) as f:
            reader = csv.reader(f, delimiter=',', quotechar="\"")
            lines = list(reader)[1:]

            for idx, line in enumerate(lines):
                yield idx, {"idx": idx, "text": line[1], "label": line[0]}
```
The dataset is attached to this issue as well:
[data.zip](https://github.com/huggingface/datasets/files/4476928/data.zip)
Now the steps to reproduce what I would like to do:
1. unzip data locally (I know the nlp lib can detect and extract archives but I want to reduce and facilitate the reproduction as much as possible)
2. create the `bbc.py` script as above at the same location as the unzipped `data` folder.
Now I try to load the dataset in several different ways and none of them works. The first one uses the name of the dataset, like I would do with TFDS:
```python
import nlp
from bbc import Bbc
dataset = nlp.load("bbc")
```
I get:
```
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/opt/anaconda3/envs/transformers/lib/python3.7/site-packages/nlp/load.py", line 280, in load
dbuilder: DatasetBuilder = builder(path, name, data_dir=data_dir, **builder_kwargs)
File "/opt/anaconda3/envs/transformers/lib/python3.7/site-packages/nlp/load.py", line 166, in builder
builder_cls = load_dataset(path, name=name, **builder_kwargs)
File "/opt/anaconda3/envs/transformers/lib/python3.7/site-packages/nlp/load.py", line 88, in load_dataset
local_files_only=local_files_only,
File "/opt/anaconda3/envs/transformers/lib/python3.7/site-packages/nlp/utils/file_utils.py", line 214, in cached_path
if not is_zipfile(output_path) and not tarfile.is_tarfile(output_path):
File "/opt/anaconda3/envs/transformers/lib/python3.7/zipfile.py", line 203, in is_zipfile
with open(filename, "rb") as fp:
TypeError: expected str, bytes or os.PathLike object, not NoneType
```
But @thomwolf told me that there is no need to import the script, just to pass its path, so I tried three different ways:
```python
import nlp
dataset = nlp.load("bbc.py")
```
And
```python
import nlp
dataset = nlp.load("./bbc.py")
```
And
```python
import nlp
dataset = nlp.load("/absolute/path/to/bbc.py")
```
These three ways give me:
```
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/opt/anaconda3/envs/transformers/lib/python3.7/site-packages/nlp/load.py", line 280, in load
dbuilder: DatasetBuilder = builder(path, name, data_dir=data_dir, **builder_kwargs)
File "/opt/anaconda3/envs/transformers/lib/python3.7/site-packages/nlp/load.py", line 166, in builder
builder_cls = load_dataset(path, name=name, **builder_kwargs)
File "/opt/anaconda3/envs/transformers/lib/python3.7/site-packages/nlp/load.py", line 124, in load_dataset
dataset_module = importlib.import_module(module_path)
File "/opt/anaconda3/envs/transformers/lib/python3.7/importlib/__init__.py", line 127, in import_module
return _bootstrap._gcd_import(name[level:], package, level)
File "<frozen importlib._bootstrap>", line 1006, in _gcd_import
File "<frozen importlib._bootstrap>", line 983, in _find_and_load
File "<frozen importlib._bootstrap>", line 965, in _find_and_load_unlocked
ModuleNotFoundError: No module named 'nlp.datasets.2fd72627d92c328b3e9c4a3bf7ec932c48083caca09230cebe4c618da6e93688.bbc'
```
Any idea of what I'm missing? Or I might have spotted a bug :)
|
closed
|
https://github.com/huggingface/datasets/issues/2
| 2020-04-14T18:18:51 | 2020-05-11T18:55:23 | 2020-05-11T18:55:22 |
{
"login": "jplu",
"id": 959590,
"type": "User"
}
|
[] | false |
[] |
599,457,467 | 1 |
changing nlp.bool to nlp.bool_
|
closed
|
https://github.com/huggingface/datasets/pull/1
| 2020-04-14T10:18:02 | 2022-10-04T09:31:40 | 2020-04-14T12:01:40 |
{
"login": "mariamabarham",
"id": 38249783,
"type": "User"
}
|
[] | true |
[] |