# SoccerNet-GAR: Pixels or Positions? Benchmarking Modalities in Group Activity Recognition
SoccerNet-GAR is a large-scale multimodal dataset for Group Activity Recognition (GAR) built from all 64 matches of the FIFA World Cup 2022 tournament. It provides synchronized broadcast video and player tracking data for 94,285 annotated group activities across 10 action classes, enabling direct comparison between video-based and tracking-based approaches.
## Dataset Details
### Description
SoccerNet-GAR is the first dataset to provide synchronized tracking and video modalities for the same action instances in group activity recognition. For each annotated event, a 4.5-second temporal window centered on the event timestamp is extracted from both the broadcast video and the player tracking stream. Within each window, 16 frames are sampled from the 30 fps streams at a stride of 9 frames.
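The sampling scheme above (16 frames at a 9-frame stride from a 30 fps stream) spans exactly 4.5 seconds. A small sketch, assuming the window is symmetric around the event frame (the function name is illustrative):

```python
# Sketch of the event-window sampling described above: 16 frames taken
# every 9th frame from a 30 fps stream, centred on the event timestamp.
FPS = 30
NUM_SAMPLES = 16
STRIDE = 9  # frames between consecutive samples

def window_frame_indices(event_time_s: float) -> list[int]:
    """Return the 16 frame indices sampled around an event timestamp."""
    center = round(event_time_s * FPS)
    half_span = (NUM_SAMPLES - 1) * STRIDE // 2  # 67 frames on each side
    start = center - half_span
    return [start + i * STRIDE for i in range(NUM_SAMPLES)]

indices = window_frame_indices(600.0)  # event at the 10-minute mark
span_s = (indices[-1] - indices[0]) / FPS
print(len(indices), span_s)  # 16 samples spanning 4.5 s
```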
The dataset contains two input modalities:
- Video Modality: Broadcast footage at 720p resolution, including multi-view broadcast cameras. Each frame is part of a temporal sequence sampled within the event window, capturing appearance cues, scene context, and visual motion patterns.
- Tracking Modality: 2D player positions and 3D ball coordinates sampled at 30 fps, automatically extracted from broadcast footage and manually refined by annotators. Player positions span x in [-60, 60]m, y in [-42, 41]m; ball positions include height z in [-8, 25]m. Each entity state encodes spatial coordinates, entity identity (one-hot encoding), and motion dynamics (displacement vectors between consecutive frames).
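The per-entity state described for the tracking modality (coordinates, one-hot identity, displacement between consecutive frames) can be sketched as follows; the field ordering, slot count, and function name are assumptions for illustration, not the dataset's actual schema:

```python
# Hypothetical assembly of an entity's per-frame state from the tracking
# stream: spatial coordinates, a one-hot identity vector, and a displacement
# vector between consecutive frames.
def entity_state(pos, prev_pos, entity_idx, num_entities):
    """pos/prev_pos: (x, y[, z]) tuples; entity_idx: slot in the one-hot id."""
    one_hot = [1.0 if i == entity_idx else 0.0 for i in range(num_entities)]
    displacement = [c - p for c, p in zip(pos, prev_pos)]
    return list(pos) + one_hot + displacement

# Player in slot 3 of 23 assumed slots (22 players + ball), on the 2D pitch:
state = entity_state((12.5, -4.0), (12.1, -4.3), entity_idx=3, num_entities=23)
print(len(state))  # 2 coords + 23-dim identity + 2-dim displacement = 27
```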
| Property | Value |
|---|---|
| Curated by | SoccerNet Team (KAUST, University of Liege) |
| Original Data Source | Gradient Sports (formerly PFF FC) |
| Total Events | 94,285 |
| Matches | 64 (FIFA World Cup 2022) |
| Action Classes | 10 |
| Modalities | Video + Tracking |
| Avg. Events per Match | 1,473 |
### Sources
- Repository: https://github.com/drishyakarki/pixels_vs_positions
- Paper: Pixels or Positions? Benchmarking Modalities in Group Activity Recognition (arXiv:2511.12606)
## Dataset Structure
### Action Classes
The dataset contains 10 action classes reflecting common football events:
| Class | Count | Proportion |
|---|---|---|
| PASS | 59,657 | 63.3% |
| TACKLE | 11,107 | 11.8% |
| OUT | 6,389 | 6.8% |
| HEADER | 5,803 | 6.2% |
| HIGH PASS | 2,697 | 2.9% |
| THROW IN | 2,618 | 2.8% |
| CROSS | 2,412 | 2.6% |
| FREE KICK | 1,827 | 1.9% |
| SHOT | 1,559 | 1.7% |
| GOAL | 216 | 0.2% |
The dataset exhibits severe class imbalance (276:1 ratio between PASS and GOAL), reflecting the natural distribution of football events.
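One common way to counter such imbalance during training is inverse-frequency class weighting. A minimal sketch using the counts from the table above; the weighting scheme is a generic suggestion, not the method used in the paper:

```python
# Inverse-frequency class weights: weight_c = N / (K * n_c), so rare
# classes receive proportionally larger weights in the loss.
counts = {
    "PASS": 59657, "TACKLE": 11107, "OUT": 6389, "HEADER": 5803,
    "HIGH PASS": 2697, "THROW IN": 2618, "CROSS": 2412,
    "FREE KICK": 1827, "SHOT": 1559, "GOAL": 216,
}
total = sum(counts.values())          # 94,285 events
num_classes = len(counts)             # K = 10
weights = {c: total / (num_classes * n) for c, n in counts.items()}
print(round(weights["GOAL"] / weights["PASS"]))  # mirrors the 276:1 imbalance
```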
### Splits
Data is split at the match level to prevent leakage:
| Split | Matches | Events | Proportion |
|---|---|---|---|
| Train | 45 | 66,901 | 71.0% |
| Validation | 9 | 12,865 | 13.6% |
| Test | 10 | 14,519 | 15.4% |
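A quick arithmetic check that the split sizes above partition the 94,285 events and match the stated proportions:

```python
# Verify that the match-level splits sum to the full event count and that
# the stated percentages follow from the per-split counts.
splits = {"train": 66901, "validation": 12865, "test": 14519}
total = sum(splits.values())
proportions = {k: round(100 * v / total, 1) for k, v in splits.items()}
print(total, proportions)
```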
### Branches
This repository is organized into the following branches:
| Branch | Contents |
|---|---|
| main | Dataset card and documentation. |
| paper-data | The exact dataset needed to reproduce the results in the paper. Contains broadcast videos (1 npy clip per event) and tracking files (1 parquet file per full match). |
| frames | 1 npy clip per event for the video modality. Annotations are in SoccerNetPro format. |
| tracking-parquet | 1 parquet file per event for the tracking modality. Annotations are in SoccerNetPro format. |
| multimodal-data | Combined video (npy) and tracking (parquet) data, with 1 file per event per modality. Uses a unified annotation file for both modalities in SoccerNetPro format. |
## Benchmark Results
### Pixels vs. Positions
| Modality | Model | Params | Balanced Acc. | Training Time |
|---|---|---|---|---|
| Tracking | GIN + Attention + Positional Edges | 197K | 67.2% | 4 GPU hours |
| Video | VideoMAEv2 (finetuned) | 86.3M | 58.1% | 34 GPU hours |
The tracking model outperforms the video baseline by 9.1 percentage points while using 438x fewer parameters and 8.5x less training time (4 vs. 34 GPU hours).
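Balanced accuracy, the metric reported above, is the unweighted mean of per-class recalls, so rare classes like GOAL count as much as the dominant PASS class. A minimal pure-Python sketch:

```python
# Balanced accuracy = mean of per-class recalls; a majority-class predictor
# scores only 1/K on K classes despite high raw accuracy.
from collections import defaultdict

def balanced_accuracy(y_true, y_pred):
    correct, total = defaultdict(int), defaultdict(int)
    for t, p in zip(y_true, y_pred):
        total[t] += 1
        correct[t] += int(t == p)
    recalls = [correct[c] / total[c] for c in total]
    return sum(recalls) / len(recalls)

# Always predicting PASS: perfect recall on PASS, zero on GOAL.
y_true = ["PASS"] * 9 + ["GOAL"]
y_pred = ["PASS"] * 10
print(balanced_accuracy(y_true, y_pred))  # 0.5: (1.0 + 0.0) / 2
```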
### Per-Class Comparison (Test Set)
| Class | Samples | Tracking | Video |
|---|---|---|---|
| PASS | 9,255 | 70.1 | 65.2 |
| TACKLE | 1,697 | 50.7 | 57.8 |
| OUT | 955 | 85.7 | 78.7 |
| HEADER | 872 | 75.0 | 55.1 |
| HIGH PASS | 405 | 27.2 | 40.5 |
| THROW IN | 393 | 65.6 | 40.5 |
| CROSS | 373 | 81.8 | 72.9 |
| FREE KICK | 273 | 86.5 | 71.1 |
| SHOT | 266 | 73.2 | 58.7 |
| GOAL | 30 | 56.7 | 36.7 |
| Overall | 14,519 | 67.2 | 57.7 |
Tracking excels on spatially distinctive events (OUT, FREE KICK, CROSS, SHOT, GOAL), while video outperforms on HIGH PASS (+13.3 points) and TACKLE (+7.1 points), where visual cues such as ball trajectory and body dynamics provide discriminative information.
## Uses
### Direct Use
- Benchmarking video-based vs. tracking-based group activity recognition
- Training and evaluating GAR models on football broadcast data
- Studying multimodal fusion approaches combining visual and positional features
- Analyzing spatial interaction patterns in team sports
## Dataset Creation
### Curation Rationale
No standardized benchmark previously existed that aligns broadcast video and tracking data for the same group activities. This made fair, apples-to-apples comparison between video-based and tracking-based approaches impossible. SoccerNet-GAR was created to fill this gap by providing synchronized multimodal observations under a unified evaluation protocol.
### Source Data
The dataset was constructed from data provided by PFF FC (now Gradient Sports), which offers comprehensive broadcast videos, player tracking data, and event annotations for all 64 matches of the FIFA World Cup 2022 tournament.
### Annotation Process
Event annotations with precise timestamps were created by trained annotators and verified through quality control procedures by PFF FC using both video and tracking views. Each event is labeled with one of 10 group activities and temporally marked at the moment of occurrence.
For each annotated event at timestamp t_e, a 4.5-second temporal window centered at t_e is extracted from both broadcast video and player tracking streams, with 16 samples taken at 30 fps with a 9-frame interval.
## Comparison with Existing Datasets
| Dataset | Year | Domain | Events | Classes | Modalities |
|---|---|---|---|---|---|
| CAD | 2009 | Pedestrian | 2,511 | 5 | V |
| Volleyball | 2016 | Volleyball | 4,830 | 8 | V |
| SoccerNet | 2018 | Football | 6,637 | 3 | V |
| NBA | 2020 | Basketball | 9,172 | 9 | V |
| SoccerNet-v2 | 2021 | Football | 110,458 | 17 | V |
| SoccerNet-BAS | 2024 | Football | 11,041 | 12 | V |
| FIFAWC | 2024 | Football | 5,196 | 12 | V |
| SoccerNet-GAR | 2025 | Football | 94,285 | 10 | V + T |
SoccerNet-GAR is the second largest GAR dataset (after SoccerNet-v2) and the only one providing synchronized video and tracking modalities.
## Citation

@article{karki2025pixels,
  title={Pixels or Positions? Benchmarking Modalities in Group Activity Recognition},
  author={Karki, Drishya and Ramazanova, Merey and Cioppa, Anthony and Giancola, Silvio and Ghanem, Bernard},
  journal={arXiv preprint arXiv:2511.12606},
  year={2025}
}
## Authors
- Drishya Karki (KAUST)
- Merey Ramazanova (KAUST)
- Anthony Cioppa (University of Liege)
- Silvio Giancola (KAUST)
- Bernard Ghanem (KAUST)