---
configs:
  - config_name: default
    data_files:
      - split: train
        path: data/train-*
dataset_info:
  features:
    - name: 'No'
      dtype: int64
    - name: from
      dtype: string
    - name: value
      dtype: string
    - name: emotion
      dtype: string
    - name: length
      dtype: float64
    - name: score_arousal
      dtype: float64
    - name: score_prosody
      dtype: float64
    - name: score_nature
      dtype: float64
    - name: score_expressive
      dtype: float64
    - name: audio-path
      dtype: audio
  splits:
    - name: train
      num_bytes: 4728746481
      num_examples: 28190
  download_size: 12331997848
  dataset_size: 4728746481
---

# ExpressiveSpeech Dataset

Project Webpage · 中文版 (Chinese Version)

## About The Dataset

ExpressiveSpeech is a high-quality, expressive, and bilingual (Chinese-English) speech dataset created to address the common lack of consistent vocal expressiveness in existing dialogue datasets.

This dataset is meticulously curated from five renowned open-source emotional dialogue datasets: Expresso, NCSSD, M3ED, MultiDialog, and IEMOCAP. Through a rigorous processing and selection pipeline, ExpressiveSpeech ensures that every utterance meets high standards for both acoustic quality and expressive richness. It is designed for tasks in expressive Speech-to-Speech (S2S), Text-to-Speech (TTS), voice conversion, speech emotion recognition, and other fields requiring high-fidelity, emotionally resonant audio.
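For a quick start, the train split can be loaded with the 🤗 `datasets` library. A minimal sketch, assuming the Hub repo id is `Linz13/ExpressiveSpeech` (adjust if you obtained the dataset elsewhere):

```python
# Minimal loading sketch; the repo id below is an assumption, not confirmed by this card.
from datasets import load_dataset

ds = load_dataset("Linz13/ExpressiveSpeech", split="train")

example = ds[0]
print(example["value"])             # ASR-generated transcription
print(example["emotion"])           # emotion label from the source dataset
print(example["score_expressive"])  # DeEAR expressiveness score

# The `audio-path` column is typed as `audio`, so it decodes to a waveform.
audio = example["audio-path"]
print(audio["sampling_rate"], audio["array"].shape)
```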

## Key Features

- **High Expressiveness:** Achieves an average expressiveness score of 80.2 as rated by DeEAR, far surpassing the original source datasets.
- **Bilingual Content:** Contains a balanced mix of Chinese and English speech, with a language ratio close to 1:1.
- **Substantial Scale:** Comprises approximately 14,000 utterances, totaling about 51 hours of audio.
- **Rich Metadata:** Includes ASR-generated text transcriptions, expressiveness scores, and source information for each utterance.

## Dataset Statistics

| Metric | Value |
| --- | --- |
| Total Utterances | ~14,000 |
| Total Duration | ~51 hours |
| Languages | Chinese, English |
| Language Ratio (CN:EN) | Approx. 1:1 |
| Sampling Rate | 16 kHz |
| Avg. Expressiveness Score (DeEAR) | 80.2 |
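These figures are easy to reproduce from the metadata alone, since every utterance record carries a `length` field in seconds. A sketch, assuming a local copy laid out as shown under Data Format below; note that consecutive conversation pairs can share an utterance (visible in the JSONL example), so the count is deduplicated by audio path:

```python
import json

seen = set()
total_secs = 0.0
with open("ExpressiveSpeech/metadata.jsonl", encoding="utf-8") as f:
    for line in f:
        for utt in json.loads(line)["conversations"]:
            # Overlapping conversation pairs repeat utterances; dedupe on path.
            if utt["audio-path"] in seen:
                continue
            seen.add(utt["audio-path"])
            total_secs += utt["length"]

print(f"{len(seen):,} unique utterances, {total_secs / 3600:.1f} hours")
```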

## Our Expressiveness Scoring Tool: DeEAR

The high expressiveness of this dataset was achieved with our screening tool, DeEAR. If you want to build larger batches of high-expressiveness data yourself, you are welcome to use it; the tool is available on our GitHub.
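As an illustration of how such screening works, the sketch below keeps only conversations whose utterances all clear an expressiveness cutoff, assuming DeEAR produces per-utterance records shaped like those in `metadata.jsonl`. The 0.8 threshold and the file names are hypothetical, not the values used to build this dataset:

```python
import json

THRESHOLD = 0.8  # hypothetical cutoff; tune against your own data


def screen(in_path: str, out_path: str) -> None:
    """Copy over only conversations whose utterances all clear the cutoff."""
    with open(in_path, encoding="utf-8") as fin, \
            open(out_path, "w", encoding="utf-8") as fout:
        for line in fin:
            conv = json.loads(line)
            if all(u["score_expressive"] >= THRESHOLD
                   for u in conv["conversations"]):
                fout.write(line)


screen("scored.jsonl", "screened.jsonl")  # hypothetical file names
```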

## Data Format

The dataset is organized as follows:

```
ExpressiveSpeech/
├── audio/
│   ├── M3ED
│   │    ├── audio_00001.wav
│   │    └── ...
│   ├── NCSSD
│   ├── IEMOCAP
│   ├── MultiDialog
│   └── Expresso
└── metadata.jsonl
```
- `metadata.jsonl`: a JSON Lines file with one conversation per line. Each utterance record includes:
  - `No`: the utterance index within the conversation.
  - `from`: the speaker role (`user` or `assistant`).
  - `value`: the ASR-generated text transcription.
  - `emotion`: the emotion label from the original dataset.
  - `length`: the utterance duration in seconds.
  - `score_arousal`, `score_prosody`, `score_nature`, `score_expressive`: per-dimension expressiveness scores from the DeEAR model.
  - `audio-path`: the relative path to the audio file.

## JSONL Example

Each line of `metadata.jsonl` contains a `conversations` field holding an array of utterance records. Example:

```json
{"conversations": [{"No": 9, "from": "user", "value": "Yeah.", "emotion": "happy", "length": 2.027, "score_arousal": 0.9931480884552002, "score_prosody": 0.6800634264945984, "score_nature": 0.9687601923942566, "score_expressive": 0.9892677664756775, "audio-path": "audios/Expresso/splitted_conversation/ex04-ex01/laughing/ex04-ex01_laughing_001/009_speaker1_53s_55s.wav"}, {"No": 10, "from": "assistant", "value": "What was the reason, what was the, why couldn't I get there, ah I forget.", "emotion": "happy", "length": 3.753, "score_arousal": 0.9555678963661194, "score_prosody": 0.6498672962188721, "score_nature": 1.030701756477356, "score_expressive": 0.9965837001800537, "audio-path": "audios/Expresso/splitted_conversation/ex04-ex01/laughing/ex04-ex01_laughing_001/010_speaker2_55s_59s.wav"}]}
{"conversations": [{"No": 10, "from": "user", "value": "What was the reason, what was the, why couldn't I get there, ah I forget.", "emotion": "happy", "length": 3.753, "score_arousal": 0.9555678963661194, "score_prosody": 0.6498672962188721, "score_nature": 1.030701756477356, "score_expressive": 0.9965837001800537, "audio-path": "audios/Expresso/splitted_conversation/ex04-ex01/laughing/ex04-ex01_laughing_001/010_speaker2_55s_59s.wav"}, {"No": 11, "from": "assistant", "value": "Because genie really had to go and and to the bathroom and she couldn't find a place to do it and so she when they put the tent on it it was it was a bad mess and they shouldn't have done that.", "emotion": "happy", "length": 10.649, "score_arousal": 0.976757287979126, "score_prosody": 0.7951533794403076, "score_nature": 0.9789049625396729, "score_expressive": 0.919080913066864, "audio-path": "audios/Expresso/splitted_conversation/ex04-ex01/laughing/ex04-ex01_laughing_001/011_speaker1_58s_69s.wav"}]}
```

> **Note:** Some source datasets applied VAD (voice activity detection), which can split a single utterance into multiple segments. To preserve conversational integrity, we applied rules to merge such segments back into complete utterances.
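Putting the pieces together, one way to walk the metadata and load each waveform (a sketch using the `soundfile` package; note that the paths embedded in the example above start with `audios/` while the tree shows `audio/`, so adjust the root to match your extracted copy):

```python
import json
from pathlib import Path

import soundfile as sf  # pip install soundfile

root = Path("ExpressiveSpeech")  # directory containing metadata.jsonl

with open(root / "metadata.jsonl", encoding="utf-8") as f:
    for line in f:
        for utt in json.loads(line)["conversations"]:
            wav, sr = sf.read(root / utt["audio-path"])
            print(f'{utt["from"]:>9} [{utt["emotion"]}] '
                  f'{utt["value"]!r} ({len(wav) / sr:.2f}s @ {sr} Hz)')
        break  # remove to process the entire file
```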

## License

In line with the non-commercial restrictions of its source datasets, the ExpressiveSpeech dataset is released under the CC BY-NC-SA 4.0 license.

You can view the full license here.

## Citation

If you use this dataset in your research, please cite our paper:

```bibtex
@article{lin2025decoding,
  title={Decoding the Ear: A Framework for Objectifying Expressiveness from Human Preference Through Efficient Alignment},
  author={Lin, Zhiyu and Yang, Jingwen and Zhao, Jiale and Liu, Meng and Li, Sunzhu and Wang, Benyou},
  journal={arXiv preprint arXiv:2510.20513},
  year={2025}
}
```