---
license: cc-by-4.0
language:
- ur
- en
tags:
- TTS
- ASR
- Urdu
- TextToSpeech
- AutomaticSpeechRecognition
- English
- Transcribe
- Translate
- speech-recognition
- urdu-speech
- multilingual
task_categories:
- text-to-speech
- text-to-audio
- translation
- automatic-speech-recognition
- audio-classification
size_categories:
- 100K<n<1M
---

# UrduMegaSpeech

### Example: Filtering by Quality Score

```python
from datasets import load_dataset

# Load the dataset
dataset = load_dataset("humair025/UrduMegaSpeech", split="train")

# Keep samples with a high semantic-similarity score
high_quality = dataset.filter(lambda x: x['sonar_score'] > 0.5)

print(f"Original samples: {len(dataset)}")
print(f"High-quality samples: {len(high_quality)}")
```

### Example: Fine-tuning Whisper for Urdu ASR

```python
from transformers import WhisperProcessor, WhisperForConditionalGeneration
from datasets import load_dataset

# Load model and processor
processor = WhisperProcessor.from_pretrained("openai/whisper-small")
model = WhisperForConditionalGeneration.from_pretrained("openai/whisper-small")

# Load dataset
dataset = load_dataset("humair025/UrduMegaSpeech", split="train")

# Filter by duration (e.g., 2-15 seconds)
dataset = dataset.filter(lambda x: 2.0 <= x['duration'] <= 15.0)

# Preprocess function
def prepare_dataset(batch):
    audio = batch["audio"]
    batch["input_features"] = processor(
        audio["array"],
        sampling_rate=audio["sampling_rate"],
        return_tensors="pt"
    ).input_features[0]
    batch["labels"] = processor.tokenizer(batch["transcription"]).input_ids
    return batch

# Process dataset
dataset = dataset.map(prepare_dataset, remove_columns=["audio"])
```

### Example: Speech Translation with Quality Filtering

```python
from datasets import load_dataset

# Load dataset
dataset = load_dataset("humair025/UrduMegaSpeech", split="train")

# Filter high-quality samples
filtered_dataset = dataset.filter(lambda x: x['sonar_score'] > 0.6)

# Use the aligned audio/text pairs for speech translation training
for sample in filtered_dataset:
    urdu_audio = sample['audio']
    urdu_text = sample['transcription']
    english_text = sample['text']
    # Train your speech translation model here
```

## Dataset Statistics

- **Total Audio Hours:** Extensive coverage for robust model training
- **Average Duration:** ~8 seconds per sample
- **Vocabulary Size:**
  Comprehensive Urdu lexicon
- **Quality Scores:** Pre-computed quality metrics for easy filtering
- **Speaker Diversity:** Multiple speakers with varied accents

## Quality Metrics Explained

- **text_lid_score**: Language identification confidence
- **laser_score**: Alignment quality between source and target
- **sonar_score**: Semantic similarity score (0-1+ range, higher is better)

These scores allow researchers to filter and select high-quality samples based on their specific requirements.

## Licensing & Attribution

This dataset is released under the **CC-BY-4.0** license.

**Source:** This dataset is derived from publicly available multilingual speech data (AI4Bharat).

**Citation:** When using this dataset, please cite:

```bibtex
@dataset{urdumegaspeech2025,
  title     = {UrduMegaSpeech-1M: A Large-Scale Urdu Speech Corpus},
  author    = {Humair, Muhammad},
  year      = {2025},
  publisher = {Hugging Face},
  url       = {https://huggingface.co/datasets/humair025/UrduMegaSpeech},
  note      = {Processed from multilingual speech collections}
}
```

## Ethical Considerations

- This dataset is intended for research and development purposes
- Users should ensure compliance with privacy regulations when deploying models trained on this data
- The dataset reflects natural speech patterns and may contain colloquialisms
- Care should be taken to avoid bias when using this data in production systems
- Quality scores should be used to filter samples for production applications

## Limitations

- Audio quality may vary across samples
- Speaker diversity may not represent all Urdu dialects equally
- Some samples may have lower alignment scores
- Domain-specific terminology may be underrepresented
- **Dataset Viewer:** The Hugging Face dataset viewer may not be available due to the large size and format of this dataset; please download and process it locally.
## Technical Specifications

- **Audio Encoding:** Various formats (converted to a standard format upon loading)
- **Sampling Rates:** Multiple rates (resampling to 16 kHz recommended)
- **Text Encoding:** UTF-8
- **File Format:** Parquet
- **Recommended Filtering:** Filter by `duration` (2-15 seconds) and `sonar_score` (>0.5) for optimal results

## Recommended Preprocessing

```python
# Recommended filtering for high-quality training data
filtered = dataset.filter(
    lambda x: 2.0 <= x['duration'] <= 15.0 and x['sonar_score'] > 0.5
)
```

## Acknowledgments

This dataset was compiled and processed to support Urdu language technology research and development. Data sourced from AI4Bharat multilingual collections.

---

**Dataset Curated By:** Humair Munir

**Last Updated:** December 2025

**Version:** 1.0