## About The Dataset

**ExpressiveSpeech** is a high-quality, **expressive**, and **bilingual** (Chinese-English) speech dataset created to address the common lack of consistent vocal expressiveness in existing dialogue datasets.

This dataset is meticulously curated from five renowned open-source emotional dialogue datasets: Expresso, NCSSD, M3ED, MultiDialog, and IEMOCAP. Through a rigorous processing and selection pipeline, ExpressiveSpeech ensures that every utterance meets high standards for both acoustic quality and expressive richness. It is designed for tasks in expressive Speech-to-Speech (S2S), Text-to-Speech (TTS), voice conversion, speech emotion recognition, and other fields requiring high-fidelity, emotionally resonant audio.
- `emotion`: Emotion labels from the original datasets.
- `expressiveness_scores`: The expressiveness score from the **DeEAR** model.

### JSONL Files Example

Each JSONL line contains a `conversations` field with an array of utterances. Example:

```json
{"conversations": [{"No": 9, "from": "user", "value": "Yeah.", "emotion": "happy", "length": 2.027, "score_arousal": 0.9931480884552002, "score_prosody": 0.6800634264945984, "score_nature": 0.9687601923942566, "score_expressive": 0.9892677664756775, "audio-path": "audios/Expresso/splitted_conversation/ex04-ex01/laughing/ex04-ex01_laughing_001/009_speaker1_53s_55s.wav"}, {"No": 10, "from": "assistant", "value": "What was the reason, what was the, why couldn't I get there, ah I forget.", "emotion": "happy", "length": 3.753, "score_arousal": 0.9555678963661194, "score_prosody": 0.6498672962188721, "score_nature": 1.030701756477356, "score_expressive": 0.9965837001800537, "audio-path": "audios/Expresso/splitted_conversation/ex04-ex01/laughing/ex04-ex01_laughing_001/010_speaker2_55s_59s.wav"}]}
{"conversations": [{"No": 10, "from": "user", "value": "What was the reason, what was the, why couldn't I get there, ah I forget.", "emotion": "happy", "length": 3.753, "score_arousal": 0.9555678963661194, "score_prosody": 0.6498672962188721, "score_nature": 1.030701756477356, "score_expressive": 0.9965837001800537, "audio-path": "audios/Expresso/splitted_conversation/ex04-ex01/laughing/ex04-ex01_laughing_001/010_speaker2_55s_59s.wav"}, {"No": 11, "from": "assistant", "value": "Because genie really had to go and and to the bathroom and she couldn't find a place to do it and so she when they put the tent on it it was it was a bad mess and they shouldn't have done that.", "emotion": "happy", "length": 10.649, "score_arousal": 0.976757287979126, "score_prosody": 0.7951533794403076, "score_nature": 0.9789049625396729, "score_expressive": 0.919080913066864, "audio-path": "audios/Expresso/splitted_conversation/ex04-ex01/laughing/ex04-ex01_laughing_001/011_speaker1_58s_69s.wav"}]}
```

*Note*: Some source datasets applied VAD, which could split a single utterance into multiple segments. To maintain conversational integrity, we applied rules to merge such segments back into complete utterances.
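As a minimal sketch, each line can be parsed with Python's standard `json` module. The line below is an abbreviated, inlined version of the example above (in practice you would read line by line from the dataset's `.jsonl` files); the field names follow the schema shown.

```python
import json

# One JSONL line, inlined for illustration (abbreviated from the example above).
line = (
    '{"conversations": [{"No": 9, "from": "user", "value": "Yeah.", '
    '"emotion": "happy", "length": 2.027, "score_expressive": 0.989}]}'
)

record = json.loads(line)
for utt in record["conversations"]:
    # Each utterance carries the speaker role, text, emotion label,
    # duration in seconds, and DeEAR expressiveness scores.
    print(utt["from"], utt["emotion"], utt["score_expressive"])
```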

## License