---
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
dataset_info:
  features:
  - name: 'No'
    dtype: int64
  - name: from
    dtype: string
  - name: value
    dtype: string
  - name: emotion
    dtype: string
  - name: length
    dtype: float64
  - name: score_arousal
    dtype: float64
  - name: score_prosody
    dtype: float64
  - name: score_nature
    dtype: float64
  - name: score_expressive
    dtype: float64
  - name: audio-path
    dtype: audio
  splits:
  - name: train
    num_bytes: 4728746481
    num_examples: 28190
  download_size: 12331997848
  dataset_size: 4728746481
---
# ExpressiveSpeech Dataset
[**Project Webpage**](https://freedomintelligence.github.io/ExpressiveSpeech/) | [**中文版 (Chinese Version)**](./README_zh.md)
## About The Dataset
**ExpressiveSpeech** is a high-quality, **expressive**, and **bilingual** (Chinese-English) speech dataset created to address the common lack of consistent vocal expressiveness in existing dialogue datasets.
This dataset is meticulously curated from five renowned open-source emotional dialogue datasets: Expresso, NCSSD, M3ED, MultiDialog, and IEMOCAP. Through a rigorous processing and selection pipeline, ExpressiveSpeech ensures that every utterance meets high standards for both acoustic quality and expressive richness. It is designed for tasks in expressive Speech-to-Speech (S2S), Text-to-Speech (TTS), voice conversion, speech emotion recognition, and other fields requiring high-fidelity, emotionally resonant audio.
## Key Features
- **High Expressiveness**: Achieves an average expressiveness score of **80.2** as measured by **DeEAR**, far surpassing the original source datasets.
- **Bilingual Content**: Contains a balanced mix of Chinese and English speech, with a language ratio close to **1:1**.
- **Substantial Scale**: Comprises approximately **14,000 utterances**, totaling **51 hours** of audio.
- **Rich Metadata**: Includes ASR-generated text transcriptions, expressiveness scores, and source information for each utterance.
## Dataset Statistics
| Metric | Value |
| :--- | :--- |
| Total Utterances | ~14,000 |
| Total Duration | ~51 hours |
| Languages | Chinese, English |
| Language Ratio (CN:EN) | Approx. 1:1 |
| Sampling Rate | 16 kHz |
| Avg. Expressiveness Score (DeEAR) | 80.2 |
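Because the card's `configs` block points at parquet shards under `data/train-*`, the dataset can be loaded directly with the Hugging Face `datasets` library. The sketch below is a minimal example; the repo id `FreedomIntelligence/ExpressiveSpeech` is an assumption based on the project's GitHub organization, so adjust it to wherever the dataset is actually hosted.
```python
from datasets import load_dataset

# Assumed hub repo id -- adjust to the actual hosting location.
ds = load_dataset("FreedomIntelligence/ExpressiveSpeech", split="train")

print(ds)           # column names and row count
print(ds.features)  # 'audio-path' is an Audio feature, decoded on access

row = ds[0]
print(row["value"], "|", row["emotion"], "|", row["score_expressive"])

# Audio columns decode to a dict with the waveform and sampling rate.
audio = row["audio-path"]
print(audio["sampling_rate"], audio["array"].shape)
```
Note that audio decoding happens lazily per row, so filtering on the score columns first avoids decoding audio you do not need.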
## Our Expressiveness Scoring Tool: DeEAR
The high expressiveness of this dataset was achieved with our screening tool, **DeEAR**. If you want to build larger batches of highly expressive data yourself, you are welcome to use it; the tool is available on our [GitHub](https://github.com/FreedomIntelligence/ExpressiveSpeech).
## Data Format
The dataset is organized as follows:
```
ExpressiveSpeech/
├── audios/
│ ├── M3ED
│ │ ├── audio_00001.wav
│ │ └── ...
│ ├── NCSSD
│ ├── IEMOCAP
│ ├── MultiDialog
│ └── Expresso
└── metadata.jsonl
```
- **`metadata.jsonl`**: A JSONL file containing detailed information for each utterance. Each utterance record includes:
  - `No`: the index of the utterance within its conversation.
  - `from`: the speaker role (`user` or `assistant`).
  - `value`: the ASR-generated text transcription.
  - `emotion`: the emotion label from the original dataset.
  - `length`: the utterance duration in seconds.
  - `score_arousal`, `score_prosody`, `score_nature`, `score_expressive`: scores from the **DeEAR** model.
  - `audio-path`: the relative path to the audio file.
### JSONL Files Example
Each JSONL line contains a `conversations` field with an array of utterances.
Example:
```json
{"conversations": [{"No": 9, "from": "user", "value": "Yeah.", "emotion": "happy", "length": 2.027, "score_arousal": 0.9931480884552002, "score_prosody": 0.6800634264945984, "score_nature": 0.9687601923942566, "score_expressive": 0.9892677664756775, "audio-path": "audios/Expresso/splitted_conversation/ex04-ex01/laughing/ex04-ex01_laughing_001/009_speaker1_53s_55s.wav"}, {"No": 10, "from": "assistant", "value": "What was the reason, what was the, why couldn't I get there, ah I forget.", "emotion": "happy", "length": 3.753, "score_arousal": 0.9555678963661194, "score_prosody": 0.6498672962188721, "score_nature": 1.030701756477356, "score_expressive": 0.9965837001800537, "audio-path": "audios/Expresso/splitted_conversation/ex04-ex01/laughing/ex04-ex01_laughing_001/010_speaker2_55s_59s.wav"}]}
{"conversations": [{"No": 10, "from": "user", "value": "What was the reason, what was the, why couldn't I get there, ah I forget.", "emotion": "happy", "length": 3.753, "score_arousal": 0.9555678963661194, "score_prosody": 0.6498672962188721, "score_nature": 1.030701756477356, "score_expressive": 0.9965837001800537, "audio-path": "audios/Expresso/splitted_conversation/ex04-ex01/laughing/ex04-ex01_laughing_001/010_speaker2_55s_59s.wav"}, {"No": 11, "from": "assistant", "value": "Because genie really had to go and and to the bathroom and she couldn't find a place to do it and so she when they put the tent on it it was it was a bad mess and they shouldn't have done that.", "emotion": "happy", "length": 10.649, "score_arousal": 0.976757287979126, "score_prosody": 0.7951533794403076, "score_nature": 0.9789049625396729, "score_expressive": 0.919080913066864, "audio-path": "audios/Expresso/splitted_conversation/ex04-ex01/laughing/ex04-ex01_laughing_001/011_speaker1_58s_69s.wav"}]}
```
*Note*: Some source datasets applied voice activity detection (VAD), which could split a single utterance into multiple segments. To maintain conversational integrity, we applied rules to merge such segments back into complete utterances.
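For workflows that use the raw release rather than the parquet shards, `metadata.jsonl` can be parsed line by line with the standard library. The sketch below collects utterances above an expressiveness threshold; the `0.9` cutoff and the `ExpressiveSpeech/` root directory are illustrative assumptions, not values from the release.
```python
import json
from pathlib import Path

root = Path("ExpressiveSpeech")   # assumed local root of the release
threshold = 0.9                   # illustrative DeEAR cutoff, not official

selected = []
with open(root / "metadata.jsonl", encoding="utf-8") as f:
    for line in f:
        record = json.loads(line)
        # Each line holds a 'conversations' array of utterance dicts.
        for utt in record["conversations"]:
            if utt["score_expressive"] >= threshold:
                selected.append(utt)

print(f"kept {len(selected)} utterances")
for utt in selected[:3]:
    print(utt["audio-path"], utt["emotion"], f"{utt['length']:.2f}s")
```
Note that consecutive lines can share an utterance (in the example above, `No: 10` appears in both), so deduplicating on `audio-path` may be needed when counting unique utterances.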
## License
In line with the non-commercial restrictions of its source datasets, the ExpressiveSpeech dataset is released under the CC BY-NC-SA 4.0 license.
You can view the full license [here](https://creativecommons.org/licenses/by-nc-sa/4.0/).
## Citation
If you use this dataset in your research, please cite our paper:
```bibtex
@article{lin2025decoding,
  title={Decoding the Ear: A Framework for Objectifying Expressiveness from Human Preference Through Efficient Alignment},
  author={Lin, Zhiyu and Yang, Jingwen and Zhao, Jiale and Liu, Meng and Li, Sunzhu and Wang, Benyou},
  journal={arXiv preprint arXiv:2510.20513},
  year={2025}
}
```