# jingxiang11111/mmoe-multimodal-rec
This is a dataset in WebDataset format, designed for an end-to-end multimodal recommendation system. It is derived from Amazon reviews and contains processed text (reviews) and image data, with the goal of supporting efficient distributed training.

The dataset is part of a GitHub project built on Apache Beam and PyTorch DDP, covering the full workflow from distributed feature engineering and data loading to distributed training of a complex model (MMoE). We have open-sourced all of the training data, validation data, and models used.
| Subset | File format | Samples | Size |
|---|---|---|---|
| train | `.tar.gz` | 1,848,930 | 128 GB |
| valid | `.tar.gz` | 22,281 | 2 GB |
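The table above does not list the individual shard filenames. If you need an explicit list of shard URLs (webdataset cannot glob remote files, as noted below), one option is to enumerate the repository with the `huggingface_hub` library. This is a minimal sketch: `list_repo_files` is the real huggingface_hub API, while the `train/`/`valid/` layout and `.tar.gz` filtering are assumptions based on the description above.

```python
from huggingface_hub import list_repo_files

BASE = "https://huggingface.co/datasets/jingxiang11111/amazon_reviews_for_rec/resolve/main"

# List every file in the dataset repo, then keep only the tar shards.
# Assumption: shards live under train/ and valid/ as described above.
files = list_repo_files("jingxiang11111/amazon_reviews_for_rec", repo_type="dataset")
train_shards = [f"{BASE}/{f}" for f in files if f.startswith("train/") and f.endswith(".tar.gz")]
valid_shards = [f"{BASE}/{f}" for f in files if f.startswith("valid/") and f.endswith(".tar.gz")]
print(f"{len(train_shards)} train shards, {len(valid_shards)} valid shards")
```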
Since the dataset is stored as WebDataset shards, we recommend streaming it with the webdataset library, which is very efficient for distributed training.
## Loading directly with webdataset
You can load the data directly from the Hugging Face Hub through the webdataset library. Note that webdataset must be installed locally (`pip install webdataset`).
```python
import webdataset as wds

base = "https://huggingface.co/datasets/jingxiang11111/amazon_reviews_for_rec/resolve/main"

# webdataset streams the files from Hugging Face over HTTP, but it does not
# expand "*" globs for remote URLs: pass either a brace pattern or an explicit
# list of shard URLs (see the listing sketch above). Adjust the placeholder
# brace ranges below to match the actual shard filenames in the repo.

# Load all shards of the training set.
train_dataset = wds.WebDataset(base + "/train/data-{000000..000000}.tar.gz")

# Load all shards of the validation set.
valid_dataset = wds.WebDataset(base + "/valid/data-{000000..000000}.tar.gz")

# You can also load a single shard via a resolve/main/{folder}/{filename} URL:
# single_shard = wds.WebDataset(base + "/train/data-000000-000000.tar.gz")

# Example: iterate over the dataset.
for image, text in train_dataset.decode("pil").to_tuple("jpg", "txt"):
    print(f"Sample Text: {text}")
    print(f"Sample Image format: {image.format}")
    break
```
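For distributed (DDP) training, each rank should read a disjoint subset of shards. Below is a minimal sketch of this pattern using webdataset's built-in `split_by_node` shard splitter together with a standard PyTorch DataLoader; the shuffle buffer size and the single placeholder shard URL are illustrative assumptions, not values from the project.

```python
import webdataset as wds
from torch.utils.data import DataLoader

# Placeholder shard URL -- substitute the full train shard list.
urls = "https://huggingface.co/datasets/jingxiang11111/amazon_reviews_for_rec/resolve/main/train/data-000000-000000.tar.gz"

dataset = (
    wds.WebDataset(urls, nodesplitter=wds.split_by_node)  # disjoint shards per DDP rank
    .shuffle(1000)           # shuffle samples within an in-memory buffer
    .decode("pil")           # decode image entries to PIL images
    .to_tuple("jpg", "txt")  # pick the image and text fields from each sample
)

# batch_size=None streams individual samples; to batch, first map the images
# to fixed-size tensors, then batch in the dataset or the loader. webdataset
# also splits shards across DataLoader workers by default.
loader = DataLoader(dataset, batch_size=None, num_workers=4)

for image, text in loader:
    break  # one (image, text) sample per iteration
```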