# Urdu-ONYX-WAV
Urdu-ONYX-WAV is a high-quality Urdu Text-to-Speech (TTS) dataset consisting of audio recordings and corresponding transcripts. This dataset has been specifically prepared for training TTS models and conducting research in Urdu speech synthesis.
## Dataset Structure
This dataset is distributed across multiple parts due to size constraints:
- Main repository: Base dataset with initial samples
- part2: Additional 2.56 GB of audio data (6 Arrow files)
- part3 and later parts: released incrementally as more data becomes available
### Combined Dataset Statistics
- Total Size: ~3+ GB (across all parts)
- Format: Apache Arrow (.arrow files)
- Audio Format: WAV, 22.05 kHz, 16-bit PCM
- Number of Samples: 100,000+ audio-transcript pairs
### Data Fields

- `id`: Unique identifier for each sample
- `transcript`: Textual transcription of the audio (Urdu script)
- `voice`: Speaker identity or voice label
- `text`: Same as `transcript` (kept for TTS training convenience)
- `timestamp`: Recording timestamp (optional)
- `audio`: Audio data stored as bytes with metadata
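A quick way to sanity-check a record against this schema is a small field check. The `validate_sample` helper and the placeholder record below are hypothetical, not part of the dataset tooling; only the field names come from the schema above:

```python
# Hypothetical helper: check that a record exposes every documented field.
EXPECTED_FIELDS = {"id", "transcript", "voice", "text", "timestamp", "audio"}

def validate_sample(sample: dict) -> bool:
    """Return True if the record carries all documented fields."""
    return EXPECTED_FIELDS.issubset(sample.keys())

# Placeholder record shaped like the documented schema (values are dummies).
record = {
    "id": "sample_000001",
    "transcript": "...",   # Urdu-script text
    "voice": "speaker_01",
    "text": "...",         # duplicate of transcript
    "timestamp": None,     # optional
    "audio": {"bytes": b"", "path": None},
}
print(validate_sample(record))  # True
```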
## Usage

### Loading the Complete Dataset

To load all parts of the dataset:
```python
from datasets import load_dataset, concatenate_datasets

# Load the main dataset
main_dataset = load_dataset("humair025/Urdu-ONYX-WAV")

# Load additional parts
part2 = load_dataset("humair025/Urdu-ONYX-WAV", data_dir="part2")
# part3 = load_dataset("humair025/Urdu-ONYX-WAV", data_dir="part3")
# Add more parts as they become available

# Combine all parts
full_dataset = concatenate_datasets([
    main_dataset["train"],
    part2["train"],
    # part3["train"],
])

print(f"Total samples: {len(full_dataset)}")
```
### Loading a Single Part
```python
from datasets import load_dataset

# Load only part2
dataset = load_dataset("humair025/Urdu-ONYX-WAV", data_dir="part2")
print(dataset)
```
### Accessing Audio Data
```python
from datasets import load_dataset
import soundfile as sf

dataset = load_dataset("humair025/Urdu-ONYX-WAV")

# Access the first sample
sample = dataset["train"][0]
print(f"Transcript: {sample['transcript']}")
print(f"Audio shape: {sample['audio']['array'].shape}")
print(f"Sample rate: {sample['audio']['sampling_rate']}")

# Save the audio to a WAV file
sf.write("output.wav", sample["audio"]["array"], sample["audio"]["sampling_rate"])
```
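Many TTS recipes expect 16 kHz or 24 kHz input rather than this dataset's native 22.05 kHz. A minimal resampling sketch follows; the `resample_audio` helper is hypothetical and uses SciPy's polyphase resampler rather than any dataset-specific API:

```python
from math import gcd

import numpy as np
from scipy.signal import resample_poly

def resample_audio(audio: np.ndarray, orig_sr: int = 22050,
                   target_sr: int = 16000) -> np.ndarray:
    """Resample a mono waveform with polyphase filtering."""
    g = gcd(orig_sr, target_sr)
    return resample_poly(audio, target_sr // g, orig_sr // g)

# One second of silence at the dataset's native rate, as a stand-in
# for sample["audio"]["array"].
one_second = np.zeros(22050, dtype=np.float32)
resampled = resample_audio(one_second)
print(len(resampled))  # 16000
```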
## Citation Notice

**Mandatory citation required.** You MUST cite this dataset in any publication, project, presentation, or derivative work, regardless of scope or scale. Proper attribution is a legal and ethical requirement under the modified Apache 2.0 license.
### BibTeX Format

```bibtex
@misc{munir2025urduonyxwav,
  author       = {Humair Munir},
  title        = {Urdu-ONYX-WAV: A High-Quality Urdu Text-to-Speech Dataset},
  year         = {2025},
  publisher    = {Hugging Face},
  howpublished = {\url{https://huggingface.co/datasets/humair025/Urdu-ONYX-WAV}},
  note         = {Multi-part dataset for Urdu speech synthesis}
}
```
### APA Format
Munir, H. (2025). Urdu-ONYX-WAV: A High-Quality Urdu Text-to-Speech Dataset
[Dataset]. Hugging Face. https://huggingface.co/datasets/humair025/Urdu-ONYX-WAV
## License
This dataset is released under a Modified Apache License 2.0 with mandatory attribution requirements.
### Key Terms

**Permitted Uses:**
- Academic research and publications
- Commercial applications and products
- Personal projects and experimentation
- Model training and benchmarking
- Modification and redistribution
**Requirements:**
- Citation is mandatory for any use
- Redistributions must include this README and license notice
- Modified versions must be clearly marked as such
- Attribution must be visible in papers, products, and documentation
**Restrictions:**
- No warranty or guarantee is provided
- Author not liable for misuse or consequences
- Users responsible for legal compliance in their jurisdiction
**Full License:**
- Base license: Apache 2.0
- Additional terms: Mandatory citation requirement
## Legal Notice

### Content Disclaimer
- This dataset may contain synthetic audio, modified recordings, or human-recorded content
- The dataset is provided "AS-IS" without warranties of any kind
- Users assume all responsibility for:
  - Compliance with local laws and regulations
  - Proper use in accordance with ethical guidelines
  - Verification of content accuracy and quality
### Redistribution Requirements

All redistributions must include:
- This complete README file
- The license notice and citation requirements
- Acknowledgment of the original source
### Liability Waiver
The dataset creator (Humair Munir) shall not be held liable for:
- Any damages arising from dataset use or misuse
- Errors, inaccuracies, or defects in the data
- Consequences of derivative works
- Legal issues arising from improper use
## Recommended Applications

### Primary Use Cases
- TTS Model Training: Tacotron2, VITS, FastSpeech2, Glow-TTS, StyleTTS2
- Speech Recognition: ASR model development and testing
- Voice Cloning: Speaker adaptation and voice conversion
- Linguistic Research: Urdu phonetics and prosody studies
### Research Areas
- Low-resource language speech synthesis
- Multilingual TTS systems
- Speech quality assessment
- Urdu language processing
### Benchmarking
- TTS model evaluation
- Speech synthesis quality comparison
- Cross-lingual transfer learning studies
## Technical Specifications

### Audio Properties
- Sample Rate: 22,050 Hz
- Bit Depth: 16-bit PCM
- Channels: Mono
- Format: WAV (uncompressed)
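To verify that exported clips actually match these specifications, the standard-library `wave` module can read the header back. The `wav_properties` helper below is illustrative (not part of the dataset tooling); it writes a short silent clip with the documented properties and then inspects it:

```python
import wave

def wav_properties(path: str) -> dict:
    """Read sample rate, bit depth, and channel count from a WAV header."""
    with wave.open(path, "rb") as wf:
        return {
            "sample_rate": wf.getframerate(),
            "bit_depth": wf.getsampwidth() * 8,
            "channels": wf.getnchannels(),
        }

# Write a short silent clip matching the dataset's specs, then inspect it.
with wave.open("check.wav", "wb") as wf:
    wf.setnchannels(1)       # mono
    wf.setsampwidth(2)       # 16-bit PCM
    wf.setframerate(22050)   # 22.05 kHz
    wf.writeframes(b"\x00\x00" * 2205)  # 0.1 s of silence

print(wav_properties("check.wav"))  # {'sample_rate': 22050, 'bit_depth': 16, 'channels': 1}
```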
### Data Format
- Storage: Apache Arrow format (.arrow files)
- Compression: Xet-optimized for efficient transfer
- Splitting: Multiple parts for manageable upload/download
### Part Breakdown
| Part | Size | Files | Status |
|---|---|---|---|
| Main | 550 MB | 2 Arrow files | Available |
| part2 | 2.56 GB | 6 Arrow files | Available |

Further parts will be added to this table as they are released.
## Contact & Contributions

### Creator
Humair Munir
- Hugging Face: @humair025
- Dataset: Urdu-ONYX-WAV
### Contributions
We welcome:
- Bug reports and corrections
- Quality improvement suggestions
- Additional Urdu speech data contributions
- Collaboration on Urdu TTS research
Please open an issue or discussion on the Hugging Face repository.
## Acknowledgments
This dataset was created to support the development of high-quality Urdu speech synthesis systems and to contribute to low-resource language research.
**Last Updated:** November 2025
**Version:** 1.0 (Multi-part release)
**Status:** Active development
If you use this dataset, please cite it using the BibTeX entry provided above. Thank you for contributing to responsible AI research!