richardyoung committed · verified · Commit 523e7a8 · Parent(s): 859e35f

Upload README.md with huggingface_hub

Files changed (1): README.md (+64, -11)
README.md CHANGED
@@ -14,6 +14,7 @@ tags:
 - ehr
 - electronic-health-records
 - synthea
 size_categories:
 - 100K<n<1M
 ---
@@ -22,6 +23,8 @@ size_categories:

 A comprehensive synthetic healthcare dataset containing **575,415 patients** with complete medical histories, generated using [Synthea](https://github.com/synthetichealth/synthea) - an open-source synthetic patient generator.

 ## Dataset Description

 This dataset provides realistic but entirely synthetic patient records suitable for:
@@ -58,6 +61,37 @@ This dataset provides realistic but entirely synthetic patient records suitable

 **Total Size:** 134GB (Parquet), 977GB (CSV source)

 ## Data Schema

 ### patients.parquet
@@ -160,11 +194,11 @@ This dataset provides realistic but entirely synthetic patient records suitable
 import polars as pl

 # Load a single table
-patients = pl.read_parquet("hf://datasets/YOUR_USERNAME/synthea-575k/patients.parquet")

 # Load and join tables
-encounters = pl.read_parquet("hf://datasets/YOUR_USERNAME/synthea-575k/encounters.parquet")
-conditions = pl.read_parquet("hf://datasets/YOUR_USERNAME/synthea-575k/conditions.parquet")

 # Example: Get all conditions for a patient
 patient_conditions = conditions.filter(pl.col("PATIENT") == "some-uuid")
@@ -174,7 +208,14 @@ patient_conditions = conditions.filter(pl.col("PATIENT") == "some-uuid")
 # Using pandas
 import pandas as pd

-patients = pd.read_parquet("hf://datasets/YOUR_USERNAME/synthea-575k/patients.parquet")
 ```

 ## Demographics Summary
@@ -194,6 +235,7 @@ Based on 575,415 synthetic patients:
 - **Configuration:** Full clinical documentation enabled
 - **Modules:** 89 disease modules, 157 submodules
 - **Seed Range:** 3000-22000 (20 batches of 25K patients)

 ## Limitations
@@ -204,14 +246,16 @@ Based on 575,415 synthetic patients:

 ## Citation

-If you use this dataset, please cite:

 ```bibtex
-@misc{synthea575k,
 title={Synthea Synthetic Patient Records (575K Patients)},
-author={Generated with Synthea},
 year={2025},
-howpublished={Hugging Face Datasets},
 }

 @article{walonoski2018synthea,
@@ -221,7 +265,9 @@ If you use this dataset, please cite:
 volume={25},
 number={3},
 pages={230--238},
-year={2018}
 }
 ```
@@ -231,5 +277,12 @@ This dataset is released under the MIT License, consistent with Synthea's licens

 ## Acknowledgments

-- [Synthea](https://github.com/synthetichealth/synthea) - The MITRE Corporation
-- Generated using Synthea's open-source synthetic patient generator
 - ehr
 - electronic-health-records
 - synthea
+- deepneuro
 size_categories:
 - 100K<n<1M
 ---
 
 A comprehensive synthetic healthcare dataset containing **575,415 patients** with complete medical histories, generated using [Synthea](https://github.com/synthetichealth/synthea) - an open-source synthetic patient generator.

+**Dataset Curator:** [Richard Young](https://deepneuro.ai/richard) | [DeepNeuro.AI](https://deepneuro.ai)
+
 ## Dataset Description

 This dataset provides realistic but entirely synthetic patient records suitable for:
 
 **Total Size:** 134GB (Parquet), 977GB (CSV source)

+## Data Processing Pipeline
+
+### Generation Process
+The dataset was generated using a batched approach to handle the scale:
+- **20 batches** of 25,000 patients each
+- **40GB Java heap** allocation per batch
+- **32 CPU cores** for parallel generation
+- Custom batch merging to preserve CSV headers (avoiding Synthea's append_mode bug)
+
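The batch loop described above can be sketched as a dry-run shell script. The `-p` (population) and `-s` (seed) flags are real `run_synthea` options; the 1000-seed stride, heap setting, and output layout are assumptions inferred from the README, so the script only prints the commands it would run:

```shell
#!/usr/bin/env bash
# Dry-run sketch of the batched generation loop (assumed layout; prints only).
set -euo pipefail

BATCH_SIZE=25000
for i in $(seq 0 19); do
  # Seeds 3000, 4000, ..., 22000 -- one per 25K-patient batch.
  seed=$((3000 + i * 1000))
  echo "JAVA_OPTS=-Xmx40g ./run_synthea -p ${BATCH_SIZE} -s ${seed}" \
       "--exporter.baseDirectory=output/batch_${i}"
done
```

Merging the resulting `output/batch_*` CSVs then needs the header-preserving step the README mentions, since naive concatenation repeats each file's header row.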
+### Compression & Optimization
+The raw CSV output (977GB) was converted to Parquet format achieving **86% compression**:
+
+| Format | Size | Compression |
+|--------|------|-------------|
+| CSV (raw) | 977 GB | - |
+| Parquet | 134 GB | 86% reduction |
+
+**Conversion method:** Memory-efficient streaming using Polars `scan_csv()` + `sink_parquet()`:
+- Processes files in chunks without loading entire files into memory
+- Handles 30GB+ CSV files without OOM errors
+- Schema overrides for mixed-type columns (claims, procedures, observations)
+- 8 parallel workers for optimal throughput
+
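A minimal sketch of this streaming conversion, assuming illustrative file names and override columns (the real pipeline covered 19 files with its own schema map):

```python
# Sketch of the scan_csv -> sink_parquet streaming conversion.
# File names and the override columns below are illustrative assumptions.
from pathlib import Path


def parquet_path(csv_path: str, out_dir: str = "parquet") -> str:
    """Map a source CSV name to its Parquet output path."""
    return str(Path(out_dir) / (Path(csv_path).stem + ".parquet"))


def stream_convert(csv_path: str, out_dir: str = "parquet") -> str:
    """Convert one CSV to Parquet without materializing it in memory."""
    import polars as pl  # imported lazily so parquet_path stays dependency-free

    # Force known mixed-type columns to strings so batches cannot disagree on dtype.
    overrides = {"CODE": pl.Utf8, "VALUE": pl.Utf8}
    dst = parquet_path(csv_path, out_dir)
    # scan_csv builds a lazy plan; sink_parquet streams the result to disk in chunks.
    pl.scan_csv(csv_path, schema_overrides=overrides).sink_parquet(dst)
    return dst
```

Running `stream_convert` over the 19 source files with a small process pool would give the "8 parallel workers" behavior described above.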
+### Data Verification
+Data integrity was verified through:
+1. **Header Validation:** All 19 CSV files confirmed to have correct headers
+2. **Row Count Verification:** Patient counts validated at each batch merge
+3. **Parquet Integrity:** All 18 Parquet files successfully written and readable
+4. **Foreign Key Validation:** Patient IDs verified across related tables
+5. **Schema Consistency:** Column types verified during Parquet conversion
+
 ## Data Schema

 ### patients.parquet
 
 import polars as pl

 # Load a single table
+patients = pl.read_parquet("hf://datasets/richardyoung/synthea-575k-patients/data/patients.parquet")

 # Load and join tables
+encounters = pl.read_parquet("hf://datasets/richardyoung/synthea-575k-patients/data/encounters.parquet")
+conditions = pl.read_parquet("hf://datasets/richardyoung/synthea-575k-patients/data/conditions.parquet")

 # Example: Get all conditions for a patient
 patient_conditions = conditions.filter(pl.col("PATIENT") == "some-uuid")
 
 # Using pandas
 import pandas as pd

+patients = pd.read_parquet("hf://datasets/richardyoung/synthea-575k-patients/data/patients.parquet")
+```
+
+```python
+# Using datasets library
+from datasets import load_dataset
+
+dataset = load_dataset("richardyoung/synthea-575k-patients", data_files="data/patients.parquet")
 ```
 ## Demographics Summary

 - **Configuration:** Full clinical documentation enabled
 - **Modules:** 89 disease modules, 157 submodules
 - **Seed Range:** 3000-22000 (20 batches of 25K patients)
+- **Hardware:** 62GB RAM, 32 CPU cores, 40GB Java heap

 ## Limitations
 
 
 ## Citation

+If you use this dataset, please cite both the dataset and Synthea:

 ```bibtex
+@misc{young2025synthea575k,
 title={Synthea Synthetic Patient Records (575K Patients)},
+author={Young, Richard},
 year={2025},
+publisher={Hugging Face},
+howpublished={\url{https://huggingface.co/datasets/richardyoung/synthea-575k-patients}},
+note={Generated using Synthea. Curated by DeepNeuro.AI}
 }

 @article{walonoski2018synthea,
 
 volume={25},
 number={3},
 pages={230--238},
+year={2018},
+publisher={Oxford University Press},
+doi={10.1093/jamia/ocx079}
 }
 ```
 ## Acknowledgments

+- **Dataset Curation:** [Richard Young](https://deepneuro.ai/richard) - [DeepNeuro.AI](https://deepneuro.ai)
+- **Data Generation:** [Synthea](https://github.com/synthetichealth/synthea) - The MITRE Corporation
+- **Processing Tools:** [Polars](https://pola.rs/) for memory-efficient data processing
+
+## Contact
+
+For questions or issues with this dataset:
+- Contact Richard Young: [deepneuro.ai/richard](https://deepneuro.ai/richard)
+- Open an issue on this dataset's community tab