---
license: cc-by-nc-4.0
language:
- en
- ar
- hi
tags:
- Document_Understanding
- Document_Packet_Splitting
- Document_Comprehension
- Document_Classification
- Document_Recognition
- Document_Segmentation
pretty_name: DocSplit Benchmark
size_categories:
- 1M<n<10M
---
**In addition to the dataset, we release this repository containing the complete toolkit for generating the benchmark datasets, along with Jupyter notebooks for data analysis.**
# DocSplit: Document Packet Splitting Benchmark Generator
A toolkit for creating benchmark datasets to test document packet splitting systems. Document packet splitting is the task of separating concatenated multi-page documents into individual documents with correct page ordering.
## Overview
This toolkit generates five benchmark datasets of varying complexity to test how well models can:
1. **Detect document boundaries** within concatenated packets
2. **Classify document types** accurately
3. **Reconstruct correct page ordering** within each document
## Document Source
We use the documents from **RVL-CDIP-N-MP**:
[https://huggingface.co/datasets/jordyvl/rvl_cdip_n_mp](https://huggingface.co/datasets/jordyvl/rvl_cdip_n_mp)
## Quick Start
### Clone from Hugging Face
This repository is hosted on Hugging Face at: [https://huggingface.co/datasets/amazon/doc_split](https://huggingface.co/datasets/amazon/doc_split)
Choose one of the following methods to download the repository:
#### Option 1: Using Git with Git LFS (Recommended)
Git LFS (Large File Storage) is required for Hugging Face datasets as they often contain large files.
**Install Git LFS:**
```bash
# Linux (Ubuntu/Debian):
sudo apt-get install git-lfs
git lfs install
# macOS (Homebrew):
brew install git-lfs
git lfs install
# Windows: Download from https://git-lfs.github.com, then run:
# git lfs install
```
**Clone the repository:**
```bash
git clone https://huggingface.co/datasets/amazon/doc_split
cd doc_split
pip install -r requirements.txt
```
#### Option 2: Using Hugging Face CLI
```bash
# 1. Install the Hugging Face Hub CLI
pip install -U "huggingface_hub[cli]"
# 2. (Optional) Login if authentication is required
huggingface-cli login
# 3. Download the dataset
huggingface-cli download amazon/doc_split --repo-type dataset --local-dir doc_split
# 4. Navigate and install dependencies
cd doc_split
pip install -r requirements.txt
```
#### Option 3: Using Python SDK (huggingface_hub)
```python
from huggingface_hub import snapshot_download
# Download the entire dataset repository
local_dir = snapshot_download(
    repo_id="amazon/doc_split",
    repo_type="dataset",
    local_dir="doc_split"
)
print(f"Dataset downloaded to: {local_dir}")
```
Then install dependencies:
```bash
cd doc_split
pip install -r requirements.txt
```
#### Tips
- **Check Disk Space**: Hugging Face datasets can be large. Check the "Files and versions" tab on the Hugging Face page to see the total size before downloading.
- **Partial Clone**: If you only need specific files (e.g., code without large data files), use:
```bash
GIT_LFS_SKIP_SMUDGE=1 git clone https://huggingface.co/datasets/amazon/doc_split
cd doc_split
# Then selectively pull specific files:
git lfs pull --include="*.py"
```
---
## Usage
### Step 1: Create Assets
Convert raw PDFs into structured assets with page images (300 DPI PNG) and OCR text (Markdown).
#### Option A: AWS Textract OCR (Default)
Best for English documents. Processes all document categories with Textract.
```bash
python src/assets/run.py \
    --raw-data-path data/raw_data \
    --output-path data/assets \
    --s3-bucket your-bucket-name \
    --s3-prefix textract-temp \
    --workers 10 \
    --save-mapping
```
**Requirements:**
- AWS credentials configured (`aws configure`)
- S3 bucket for temporary file uploads
- No GPU required
#### Option B: Hybrid OCR (Textract + DeepSeek)
Uses Textract for most categories, DeepSeek OCR only for the "language" category (multilingual documents).
**Note:** For this project, DeepSeek OCR was used only for the "language" category and executed in AWS SageMaker AI with GPU instances (e.g., `ml.g6.xlarge`).
**1. Install flash-attention (Required for DeepSeek):**
```bash
# For CUDA 12.x with Python 3.12:
cd /mnt/sagemaker-nvme # Use larger disk for downloads
wget https://github.com/Dao-AILab/flash-attention/releases/download/v2.8.3/flash_attn-2.8.3+cu12torch2.9cxx11abiTRUE-cp312-cp312-linux_x86_64.whl
pip install flash_attn-2.8.3+cu12torch2.9cxx11abiTRUE-cp312-cp312-linux_x86_64.whl
# For other CUDA/Python versions: https://github.com/Dao-AILab/flash-attention/releases
```
**2. Set cache directory (Important for SageMaker):**
```bash
# SageMaker: Use larger NVMe disk instead of small home directory
export HF_HOME=/mnt/sagemaker-nvme/cache
export TRANSFORMERS_CACHE=/mnt/sagemaker-nvme/cache
```
**3. Run asset creation:**
```bash
python src/assets/run.py \
    --raw-data-path data/raw_data \
    --output-path data/assets \
    --s3-bucket your-bucket-name \
    --use-deepseek-for-language \
    --workers 10 \
    --save-mapping
```
**Requirements:**
- NVIDIA GPU with CUDA support (tested on ml.g6.xlarge)
- ~10GB+ disk space for model downloads
- flash-attention library installed
- AWS credentials (for Textract on non-language categories)
- S3 bucket (for Textract on non-language categories)
**How it works** (see the sketch below):
- Documents in `raw_data/language/` → DeepSeek OCR (GPU)
- All other categories → AWS Textract (cloud)
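A minimal sketch of that routing rule (illustrative only; `run_textract` and `run_deepseek` are hypothetical placeholders for the real backends under `src/assets/services/`):
```python
# Illustrative sketch of the hybrid OCR routing; run_textract/run_deepseek
# are hypothetical placeholders, not the toolkit's actual API.
def run_textract(image_path: str) -> str:
    return f"<Textract OCR of {image_path}>"  # placeholder

def run_deepseek(image_path: str) -> str:
    return f"<DeepSeek OCR of {image_path}>"  # placeholder

def ocr_page(doc_type: str, image_path: str, use_deepseek_for_language: bool) -> str:
    """Route the 'language' category to DeepSeek (GPU); everything else to Textract."""
    if use_deepseek_for_language and doc_type == "language":
        return run_deepseek(image_path)
    return run_textract(image_path)
```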
#### Parameters
- `--raw-data-path`: Directory containing source PDFs organized by document type
- `--output-path`: Where to save extracted assets (images + OCR text)
- `--s3-bucket`: S3 bucket name (required for Textract)
- `--s3-prefix`: S3 prefix for temporary files (default: textract-temp)
- `--workers`: Number of parallel processes (default: 10)
- `--save-mapping`: Save CSV mapping document IDs to file paths
- `--use-deepseek-for-language`: Use DeepSeek OCR for "language" category only
- `--limit`: Process only N documents (useful for testing)
#### What Happens
1. Scans `raw_data/` directory for PDFs organized by document type
2. Extracts each page as a 300 DPI PNG image (see the sketch after this list)
3. Runs OCR (Textract or DeepSeek) to extract text
4. Saves structured assets in `output-path/{doc_type}/{doc_name}/`
5. Optionally creates `document_mapping.csv` listing all processed documents
6. These assets become the input for Step 2 (benchmark generation)
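For the rasterization in step 2, the sketch below shows the core idea using the `pymupdf` dependency listed under Requirements (paths are illustrative; the toolkit's actual implementation, including OCR, lives in `src/assets/`):
```python
# Minimal sketch of per-page rasterization at 300 DPI with PyMuPDF.
# Not the toolkit's actual code; see src/assets/services/pdf_loader.py.
import pathlib

import fitz  # PyMuPDF (the pymupdf package)

def extract_pages(pdf_path: str, out_dir: str) -> None:
    """Render every page of a PDF as a 300 DPI PNG, one folder per page."""
    out = pathlib.Path(out_dir)
    with fitz.open(pdf_path) as doc:
        for i, page in enumerate(doc, start=1):
            page_dir = out / "pages" / str(i)
            page_dir.mkdir(parents=True, exist_ok=True)
            pix = page.get_pixmap(dpi=300)  # rasterize at 300 DPI
            pix.save(str(page_dir / f"page-{i}.png"))

extract_pages("data/raw_data/invoice/doc1.pdf", "data/assets/invoice/doc1")
```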
#### Output Structure
```
data/assets/
└── {doc_type}/{filename}/
    ├── original/{filename}.pdf
    └── pages/{page_num}/
        ├── page-{num}.png          # 300 DPI image
        └── page-{num}-textract.md  # OCR text
```
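Downstream code can consume this layout with a plain directory walk. A minimal sketch (illustrative only; the toolkit's own loader is `src/benchmarks/services/asset_loader.py`):
```python
# Walk the asset layout above and yield one record per page.
# Illustrative only; assumes the directory structure produced by Step 1.
from pathlib import Path

def iter_pages(assets_root: str):
    """Yield (doc_type, doc_name, page_num, png_path, ocr_text) tuples."""
    for png in sorted(Path(assets_root).glob("*/*/pages/*/page-*.png")):
        page_dir = png.parent             # .../pages/{page_num}
        doc_dir = page_dir.parent.parent  # .../{doc_type}/{filename}
        md = next(page_dir.glob("*-textract.md"), None)
        text = md.read_text(encoding="utf-8") if md else ""
        yield doc_dir.parent.name, doc_dir.name, int(page_dir.name), png, text

for doc_type, doc_name, page_num, png, text in iter_pages("data/assets"):
    print(doc_type, doc_name, page_num, png.name, len(text))
```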
## Interactive Notebooks
Explore the toolkit with Jupyter notebooks:
1. **`notebooks/01_create_assets.ipynb`** - Create assets from PDFs
2. **`notebooks/02_create_benchmarks.ipynb`** - Generate benchmarks with different strategies
3. **`notebooks/03_analyze_benchmarks.ipynb`** - Analyze and visualize benchmark statistics
## Benchmark Output Format
Each benchmark JSON contains:
```json
{
  "benchmark_name": "poly_seq",
  "strategy": "PolySeq",
  "split": "train",
  "created_at": "2026-01-30T12:00:00",
  "documents": [
    {
      "spliced_doc_id": "splice_0001",
      "source_documents": [
        {"doc_type": "invoice", "doc_name": "doc1", "pages": [1, 2, 3]},
        {"doc_type": "letter", "doc_name": "doc2", "pages": [1, 2]}
      ],
      "ground_truth": [
        {"page_num": 1, "doc_type": "invoice", "source_doc": "doc1", "source_page": 1},
        {"page_num": 2, "doc_type": "invoice", "source_doc": "doc1", "source_page": 2},
        ...
      ],
      "total_pages": 5
    }
  ],
  "statistics": {
    "total_spliced_documents": 1000,
    "total_pages": 7500,
    "unique_doc_types": 16
  }
}
```
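Document boundaries fall wherever the source document changes between consecutive `ground_truth` entries, so the format above is enough to recover them. A minimal sketch using only the fields shown (the file path is illustrative):
```python
# Load a benchmark JSON and derive boundary positions from ground_truth.
# Uses only fields from the format above; the path is illustrative.
import json

def boundaries(ground_truth):
    """Return the page numbers at which a new source document starts."""
    pages = sorted(ground_truth, key=lambda r: r["page_num"])
    starts = [pages[0]["page_num"]]
    for prev, cur in zip(pages, pages[1:]):
        if (cur["doc_type"], cur["source_doc"]) != (prev["doc_type"], prev["source_doc"]):
            starts.append(cur["page_num"])
    return starts

with open("data/benchmarks/poly_seq.json") as f:
    bench = json.load(f)
for doc in bench["documents"]:
    print(doc["spliced_doc_id"], boundaries(doc["ground_truth"]))
```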
## Requirements
- Python 3.8+
- AWS credentials (for Textract OCR)
- Dependencies: `boto3`, `loguru`, `pymupdf`, `pillow`
---
### Generate Benchmark Datasets
```bash
# 1. Download and extract RVL-CDIP-N-MP source data from HuggingFace (1.25 GB)
# This dataset contains multi-page PDFs organized by document type
# (invoices, letters, forms, reports, etc.)
mkdir -p data/raw_data
cd data/raw_data
wget https://huggingface.co/datasets/jordyvl/rvl_cdip_n_mp/resolve/main/data.tar.gz
tar -xzf data.tar.gz
rm data.tar.gz
cd ../..
# 2. Create assets from raw PDFs
# Extracts each page as PNG image and runs OCR to get text
# These assets are then used in step 3 to create benchmark datasets
# Output: Structured assets in data/assets/ with images and text per page
python src/assets/run.py --raw-data-path data/raw_data --output-path data/assets
# 3. Generate benchmark datasets
# This concatenates documents using different strategies and creates
# train/test/validation splits with ground truth labels
# Output: Benchmark JSON files in data/benchmarks/ ready for model evaluation
python src/benchmarks/run.py \
    --strategy poly_seq \
    --assets-path data/assets \
    --output-path data/benchmarks
```
## Pipeline Overview
```
Raw PDFs → [Create Assets] → Page Images + OCR Text → [Generate Benchmarks] → DocSplit Benchmarks
```
## Five Benchmark Datasets
The toolkit generates five benchmarks of increasing complexity, based on the DocSplit paper:
### 1. **DocSplit-Mono-Seq** (`mono_seq`)
**Sequential Concatenation of Single-Category Documents**
- Concatenates documents from the same category
- Preserves original page order
- **Challenge**: Boundary detection without category transitions as discriminative signals
- **Use Case**: Legal document processing where multiple contracts of the same type are bundled
### 2. **DocSplit-Mono-Rand** (`mono_rand`)
**Page Randomization within Single-Category Documents**
- Same as Mono-Seq but shuffles pages within documents
- **Challenge**: Boundary detection + page sequence reconstruction
- **Use Case**: Manual document assembly with page-level disruptions
### 3. **DocSplit-Poly-Seq** (`poly_seq`)
**Sequential Concatenation of Multi-Category Documents**
- Concatenates documents from different categories
- Preserves page ordering
- **Challenge**: Inter-document boundary detection with category diversity
- **Use Case**: Medical claims processing with heterogeneous documents
### 4. **DocSplit-Poly-Int** (`poly_int`)
**Page Interleaving across Multi-Category Documents**
- Interleaves pages from different categories in round-robin fashion (see the sketch below)
- **Challenge**: Identifying which non-contiguous pages belong together
- **Use Case**: Mortgage processing where deeds, tax records, and notices are interspersed
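The round-robin interleave reduces to a few lines. An illustrative sketch of the page-ordering idea (the toolkit's version lives in `src/benchmarks/services/strategies/poly_int.py`):
```python
# Round-robin interleaving of pages from several documents, as in Poly-Int.
# Each document is represented here as a list of its page identifiers.
from itertools import chain, zip_longest

def interleave(docs):
    """[[a1, a2, a3], [b1, b2]] -> [a1, b1, a2, b2, a3]"""
    skip = object()  # sentinel marking exhausted documents
    return [p for p in chain.from_iterable(zip_longest(*docs, fillvalue=skip))
            if p is not skip]

print(interleave([["inv-p1", "inv-p2", "inv-p3"], ["let-p1", "let-p2"]]))
# -> ['inv-p1', 'let-p1', 'inv-p2', 'let-p2', 'inv-p3']
```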
### 5. **DocSplit-Poly-Rand** (`poly_rand`)
**Page Randomization across Multi-Category Documents**
- Complete randomization across all pages (maximum entropy)
- **Challenge**: Worst-case scenario with no structural assumptions
- **Use Case**: Document management system failures or emergency recovery
## Project Structure
```
doc-split-benchmark/
├── README.md
├── requirements.txt                  # All dependencies
├── src/
│   ├── assets/                       # Asset creation from PDFs
│   │   ├── run.py                    # Main script
│   │   ├── models.py                 # Document models
│   │   └── services/
│   │       ├── pdf_loader.py
│   │       ├── textract_ocr.py
│   │       └── asset_writer.py
│   │
│   └── benchmarks/                   # Benchmark generation
│       ├── run.py                    # Main script
│       ├── models.py                 # Benchmark models
│       └── services/
│           ├── asset_loader.py
│           ├── split_manager.py
│           ├── benchmark_generator.py
│           ├── benchmark_writer.py
│           └── strategies/
│               ├── mono_seq.py       # DocSplit-Mono-Seq
│               ├── mono_rand.py      # DocSplit-Mono-Rand
│               ├── poly_seq.py       # DocSplit-Poly-Seq
│               ├── poly_int.py       # DocSplit-Poly-Int
│               └── poly_rand.py      # DocSplit-Poly-Rand
│
├── notebooks/                        # Interactive examples
│   ├── 01_create_assets.ipynb
│   ├── 02_create_benchmarks.ipynb
│   └── 03_analyze_benchmarks.ipynb
│
└── data/                             # Generated data (not in repo)
    ├── raw_data/                     # Downloaded PDFs
    ├── assets/                       # Extracted images + OCR
    └── benchmarks/                   # Generated benchmarks
```
### Generate Benchmarks [Detailed]
Create DocSplit benchmarks with train/test/validation splits.
```bash
python src/benchmarks/run.py \
    --strategy poly_seq \
    --assets-path data/assets \
    --output-path data/benchmarks \
    --num-docs-train 800 \
    --num-docs-test 200 \
    --num-docs-val 500 \
    --size small \
    --random-seed 42
```
**Parameters:**
- `--strategy`: Benchmark strategy - `mono_seq`, `mono_rand`, `poly_seq`, `poly_int`, `poly_rand`, or `all` (default: all)
- `--assets-path`: Directory containing assets from Step 1 (default: data/assets)
- `--output-path`: Where to save benchmarks (default: data/benchmarks)
- `--num-docs-train`: Number of spliced documents for training (default: 8)
- `--num-docs-test`: Number of spliced documents for testing (default: 5)
- `--num-docs-val`: Number of spliced documents for validation (default: 2)
- `--size`: Benchmark size - `small` (5-20 pages) or `large` (20-500 pages) (default: small)
- `--split-mapping`: Path to split mapping JSON (default: data/metadata/split_mapping.json)
- `--random-seed`: Seed for reproducibility (default: 42)
**What Happens:**
1. Loads all document assets from Step 1
2. Creates or loads a stratified train/test/val split (60/25/15 ratio; see the sketch after this list)
3. Generates spliced documents by concatenating/shuffling pages per strategy
4. Saves benchmark CSV files with ground truth labels
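A minimal sketch of the stratified split in step 2 (illustrative; applying the 60/25/15 ratio within each document type is an assumption here, and the toolkit's real logic lives in `src/benchmarks/services/split_manager.py`):
```python
# Illustrative stratified 60/25/15 train/test/validation split by doc_type,
# seeded for reproducibility. Not the toolkit's split_manager implementation.
import random
from collections import defaultdict

def stratified_split(docs, seed=42):
    """docs: iterable of (doc_type, doc_name) -> {split: [doc_name, ...]}."""
    rng = random.Random(seed)
    by_type = defaultdict(list)
    for doc_type, doc_name in docs:
        by_type[doc_type].append(doc_name)
    splits = {"train": [], "test": [], "validation": []}
    for names in by_type.values():
        rng.shuffle(names)
        n_train = round(0.60 * len(names))
        n_test = round(0.25 * len(names))
        splits["train"] += names[:n_train]
        splits["test"] += names[n_train:n_train + n_test]
        splits["validation"] += names[n_train + n_test:]
    return splits
```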
**Output Structure:**
```
data/
├── metadata/
│   └── split_mapping.json    # Document split assignments (shared across strategies)
└── benchmarks/
    └── {strategy}/           # e.g., poly_seq, mono_rand
        └── {size}/           # small or large
            ├── train.csv
            ├── test.csv
            └── validation.csv
```
# How to cite this dataset
```bibtex
@misc{islam2026docsplitcomprehensivebenchmarkdataset,
      title={DocSplit: A Comprehensive Benchmark Dataset and Evaluation Approach for Document Packet Recognition and Splitting},
      author={Md Mofijul Islam and Md Sirajus Salekin and Nivedha Balakrishnan and Vincil C. Bishop III and Niharika Jain and Spencer Romo and Bob Strahan and Boyi Xie and Diego A. Socolinsky},
      year={2026},
      eprint={2602.15958},
      archivePrefix={arXiv},
      primaryClass={cs.CL},
      url={https://arxiv.org/abs/2602.15958},
}
```
# License
Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved.
SPDX-License-Identifier: CC-BY-NC-4.0
|