
Dataset preview columns: turns (list), dialogue_id (string), original_id (string), dataset (string), domains (sequence), data_split (string).

Dataset Card for Taskmaster-1

To use this dataset, you need to install the ConvLab-3 platform first. Then you can load the dataset via:

from convlab.util import load_dataset, load_ontology, load_database

dataset = load_dataset('tm1')
ontology = load_ontology('tm1')
database = load_database('tm1')

For more usage details, please refer to here.
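Loaded this way, each dialogue is a dict in ConvLab-3's unified format. The sketch below is illustrative only: the field names match the columns described in this card, but the utterance, IDs, and character offsets are invented for the example.

```python
# Illustrative dialogue in the unified format (values are invented;
# field names follow the columns described in this card).
dialogue = {
    "dataset": "tm1",
    "data_split": "train",
    "dialogue_id": "tm1-train-0",
    "original_id": "dlg-00000000",  # hypothetical ID for illustration
    "domains": ["movie_ticket"],
    "turns": [
        {
            "speaker": "user",
            "utt_idx": 0,
            "utterance": "I want two tickets for Us tonight.",
            "dialogue_acts": {
                "binary": [],
                "categorical": [],
                "non-categorical": [
                    {
                        "intent": "inform",
                        "domain": "movie_ticket",
                        "slot": "name.movie",
                        "value": "Us",
                        "start": 23,  # character offsets into the utterance
                        "end": 25,
                    }
                ],
            },
        }
    ],
}

# The span annotation points back into the utterance text:
act = dialogue["turns"][0]["dialogue_acts"]["non-categorical"][0]
assert dialogue["turns"][0]["utterance"][act["start"]:act["end"]] == act["value"]
```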

Dataset Summary

The original dataset consists of 13,215 task-based dialogs, including 5,507 spoken and 7,708 written dialogs created with two distinct procedures. Each conversation falls into one of six domains: ordering pizza, creating auto repair appointments, setting up ride service, ordering movie tickets, ordering coffee drinks and making restaurant reservations.

  • How to get the transformed data from original data:
    • Download master.zip.
    • Run python preprocess.py in the current directory.
  • Main changes of the transformation:
    • Remove dialogs that are empty or only contain one speaker.
    • Split woz-dialogs into train/validation/test randomly (8:1:1). The split of self-dialogs follows the original dataset.
    • Merge consecutive turns by the same speaker (ignoring repeated turns).
    • Annotate dialogue acts according to the original segment annotations, adding an intent annotation (inform/accept/reject). A dialogue act is typed as non-categorical if the original segment annotation specifies a slot; otherwise it is typed as binary (with empty slot and value), since it then refers generally to a transaction, e.g. "OK, your pizza has been ordered". When multiple spans overlap, we keep only the shortest one, since we found that this simple strategy reduces annotation noise.
    • Add domain, intent, and slot descriptions.
    • Add state by accumulating non-categorical dialogue acts in the order they appear, excluding those whose intent is reject.
    • Keep only the first annotation, since each conversation was annotated by two workers.
  • Annotations:
    • dialogue acts, state.
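The act-typing and state-accumulation rules above can be sketched in a few lines of Python. This is a simplified illustration of the logic the list describes, not the actual preprocess.py; the dict layout it assumes follows the unified format.

```python
def act_type(segment):
    """Type a segment annotation: non-categorical if it specifies a slot,
    otherwise binary (a general reference to a transaction)."""
    return "non-categorical" if segment.get("slot") else "binary"


def accumulate_state(turns, domains):
    """Build the state by accumulating non-categorical dialogue acts in
    the order they appear, skipping acts whose intent is 'reject'."""
    state = {domain: {} for domain in domains}
    for turn in turns:
        for act in turn["dialogue_acts"]["non-categorical"]:
            if act["intent"] == "reject":
                continue  # rejected values never enter the state
            state[act["domain"]][act["slot"]] = act["value"]
    return state


turns = [
    {"dialogue_acts": {"non-categorical": [
        {"intent": "inform", "domain": "pizza_ordering",
         "slot": "name.store", "value": "Domino's"},
        {"intent": "reject", "domain": "pizza_ordering",
         "slot": "size.pizza", "value": "small"},
    ]}},
]
print(accumulate_state(turns, ["pizza_ordering"]))
# {'pizza_ordering': {'name.store': "Domino's"}}
```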

Supported Tasks and Leaderboards

NLU, DST, Policy, NLG

Languages

English

Data Splits

split       dialogues  utterances  avg_utt  avg_tokens  avg_domains  cat slot match (state)  cat slot match (goal)  cat slot match (dialogue act)  non-cat slot span (dialogue act)
train           10535      223322     21.2        8.75            1                       -                      -                              -                               100
validation       1318       27903    21.17        8.75            1                       -                      -                              -                               100
test             1322       27660    20.92        8.87            1                       -                      -                              -                               100
all             13175      278885    21.17        8.76            1                       -                      -                              -                               100

6 domains: ['uber_lyft', 'movie_ticket', 'restaurant_reservation', 'coffee_ordering', 'pizza_ordering', 'auto_repair']

  • cat slot match: the percentage of categorical slot values that appear among the ontology's possible values.
  • non-cat slot span: the percentage of non-categorical slot values that have span annotations.
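As a rough sketch, a span statistic like the one in the table could be computed as follows. This is a hypothetical helper over the unified format, not the script actually used to produce the table.

```python
def noncat_span_pct(dialogues):
    """Percentage of non-categorical dialogue-act values that carry
    character-level span (start/end) annotations."""
    total = with_span = 0
    for dialogue in dialogues:
        for turn in dialogue["turns"]:
            for act in turn["dialogue_acts"]["non-categorical"]:
                total += 1
                if "start" in act and "end" in act:
                    with_span += 1
    return 100.0 * with_span / total if total else 0.0


# Tiny invented example: one act with a span, one without.
sample = [{"turns": [{"dialogue_acts": {"non-categorical": [
    {"intent": "inform", "domain": "uber_lyft", "slot": "type.ride",
     "value": "Uber ride", "start": 38, "end": 47},
    {"intent": "inform", "domain": "uber_lyft", "slot": "num.people",
     "value": "2"},
]}}]}]
print(noncat_span_pct(sample))  # 50.0
```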

Citation

@inproceedings{byrne-etal-2019-taskmaster,
  title = {Taskmaster-1: Toward a Realistic and Diverse Dialog Dataset},
  author = {Bill Byrne and Karthik Krishnamoorthi and Chinnadhurai Sankar and Arvind Neelakantan and Daniel Duckworth and Semih Yavuz and Ben Goodrich and Amit Dubey and Kyu-Young Kim and Andy Cedilnik},
  booktitle = {Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)},
  address = {Hong Kong},
  year = {2019}
}

Licensing Information

CC BY 4.0
