  - split: placement
    path: data/placement-*
---
<!-- New benchmark release announcement -->

<div style="background-color: #ecfdf5; border-left: 4px solid #10b981; padding: 0.75em 1em; margin-top: 1em; color: #065f46; font-weight: bold; border-radius: 0.375em;">
🎉 This repository hosts <strong>RefSpatial-Bench-Expand</strong>, the new version of <strong>RefSpatial</strong>!<br>
Building on the original benchmark, the new version <strong>expands the indoor scenes</strong> (e.g., factories, shops) and adds <strong>previously uncovered outdoor scenes</strong> (e.g., streets, parking lots), further increasing the diversity and difficulty of the spatial understanding tasks.
</div>

<div style="background-color: #fef3c7; border-left: 4px solid #f59e0b; padding: 0.75em 1em; margin-top: 1em; color: #78350f; font-weight: bold; border-radius: 0.375em;">
🏆 <strong>RoboRefer</strong>, the paper this benchmark belongs to, has been accepted to <strong>NeurIPS 2025</strong>!<br>
Thank you all for your interest and support! 🙌
</div>

<h1 style="display: flex; align-items: center; justify-content: center; font-size: 1.75em; font-weight: 600;">
  <img src="assets/logo.png" style="height: 60px; flex-shrink: 0;">
  <span style="line-height: 1.2; margin-left: 0px; text-align: center;">
    RefSpatial-Bench: A Benchmark for Multi-step Spatial Referring
  </span>
</h1>

<!-- # RefSpatial-Bench: A Benchmark for Multi-step Spatial Referring with Reasoning -->

<!-- [](https://huggingface.co/datasets/BAAI/RefSpatial-Bench) -->

<p align="center">
  <a href="https://zhoues.github.io/RoboRefer"><img src="https://img.shields.io/badge/%F0%9F%8F%A0%20Project-Homepage-blue" alt="HomePage"></a>
  <a href="https://arxiv.org/abs/2506.04308"><img src="https://img.shields.io/badge/arXiv-2506.04308-b31b1b.svg?logo=arxiv" alt="arXiv"></a>
  <a href="https://github.com/Zhoues/RoboRefer"><img src="https://img.shields.io/badge/Code-RoboRefer-black?logo=github" alt="Code"></a>
  <a href="https://huggingface.co/datasets/JingkunAn/RefSpatial"><img src="https://img.shields.io/badge/%F0%9F%A4%97%20Dataset-RefSpatial--Dataset-brightgreen" alt="Dataset"></a>
  <a href="https://huggingface.co/collections/Zhoues/roborefer-and-refspatial-6857c97848fab02271310b89"><img src="https://img.shields.io/badge/%F0%9F%A4%97%20Weights-RoboRefer-yellow" alt="Weights"></a>
</p>

Welcome to **RefSpatial-Bench**, a challenging benchmark built on real-world cluttered scenes for evaluating complex, multi-step spatial referring with reasoning.

<img src="https://api.visitorbadge.io/api/combined?path=https%3A%2F%2Fzhoues.github.io&labelColor=%232ccce4&countColor=%230158f9" alt="visitor badge" style="display: none;" />
<img src="https://api.visitorbadge.io/api/combined?path=https%3A%2F%2Fanjingkun.github.io&labelColor=%232ccce4&countColor=%230158f9" alt="visitor badge" style="display: none;" />

<!-- ## 📝 Table of Contents
* [🎯 Tasks](#🎯-tasks)
* [🧠 Reasoning Steps](#🧠-reasoning-steps)
* [📁 Dataset Structure](#📁-dataset-structure)
* [🤗 Hugging Face Datasets Format (data/ folder)](#🤗-hugging-face-datasets-format-data-folder)
* [📂 Raw Data Format](#📂-raw-data-format)
* [🚀 How to Use Our Benchmark](#🚀-how-to-use-our-benchmark)
* [🤗 Method 1: Using Hugging Face datasets Library](#🤗-method-1-using-hugging-face-datasets-library)
* [📂 Method 2: Using Raw Data Files (JSON and Images)](#📂-method-2-using-raw-data-files-json-and-images)
* [🧐 Evaluating Our RoboRefer/RoboPoint](#🧐-evaluating-our-roborefer-model)
* [🧐 Evaluating Gemini 2.5 Series](#🧐-evaluating-gemini-25-pro)
* [🧐 Evaluating the Molmo Model](#🧐-evaluating-the-molmo-model)
* [📊 Dataset Statistics](#📊-dataset-statistics)
* [🏆 Performance Highlights](#🏆-performance-highlights)
* [📜 Citation](#📜-citation)
--- -->

## 🎯 Task Split

- Location Task: This task contains **100** samples and requires the model to predict a 2D point indicating the **unique target object**.

- Placement Task: This task contains **100** samples and requires the model to predict a 2D point within the **desired free space**.

- Unseen Set: This set comprises **77** samples from the Location/Placement tasks, specifically designed to **evaluate model generalization after SFT/RFT training on RefSpatial**, as it includes novel spatial relation combinations not present in RefSpatial.

<div style="background-color: #ffe4e6; border-left: 4px solid #dc2626; padding: 0.75em 1em; margin-top: 1em; color: #b91c1c; font-weight: bold; border-radius: 0.375em;"> ⚠️ Warning: If your model is not trained on RefSpatial, the Unseen set should not be used for evaluation. </div>

## 🧠 Reasoning Steps

- We introduce *reasoning steps* (`step`) for each benchmark sample, defined as the number of anchor objects and spatial relations that help constrain the search space.
- A higher `step` value reflects greater reasoning complexity and a stronger need for spatial understanding and reasoning.

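As a concrete use of this field, the sketch below filters one Hugging Face split by `step` to build a harder subset; the split name and threshold are only examples, not part of the official protocol.

```python
# Minimal sketch: build a difficulty-stratified subset using the `step` field.
from datasets import load_dataset

placement = load_dataset("BAAI/RefSpatial-Bench", split="placement")

# Keep only samples that require at least 4 reasoning steps (threshold is illustrative).
hard_subset = placement.filter(lambda s: s["step"] >= 4)
print(f"{len(hard_subset)} / {len(placement)} placement samples with step >= 4")
```
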
## 📁 Dataset Structure

We provide two formats:

<details>
<summary><strong>Hugging Face Datasets Format</strong></summary>

The `data/` folder contains the Hugging Face-compatible splits:

* `location`
* `placement`
* `unseen`

Each sample includes:

| Field | Description |
| :------- | :----------------------------------------------------------- |
| `id` | Unique integer ID |
| `object` | Natural language description of the target (object or free area), extracted from the `prompt` |
| `prompt` | Full referring expression |
| `suffix` | Instruction for answer formatting (**different models may use different suffixes or none**; we provide the format used by RoboRefer) |
| `image` | RGB image (`datasets.Image`) |
| `mask` | Binary mask image (`datasets.Image`) |
| `step` | Reasoning complexity (number of anchor objects / spatial relations) |

</details>

<details>
<summary><strong>Raw Data Format</strong></summary>

For full reproducibility and visualization, we also include the original files under:

* `Location/`
* `Placement/`
* `Unseen/`

Each folder contains:

```
Location/
├── image/         # RGB images (e.g., 0.png, 1.png, ...)
├── mask/          # Ground truth binary masks
└── question.json  # List of referring prompts and metadata
```

Each entry in `question.json` has the following format:

```json
{
    "id": 40,
    "object": "the second object from the left to the right on the nearest platform",
    "prompt": "Please point out the second object from the left to the right on the nearest platform.",
    "suffix": "Your answer should be formatted as a list of tuples, i.e. [(x1, y1)], ...",
    "rgb_path": "image/40.png",
    "mask_path": "mask/40.png",
    "category": "location",
    "step": 2
}
```
</details>

## 🚀 How to Use RefSpatial-Bench

<!-- This section explains different ways to load and use the RefSpatial-Bench dataset. -->

The official evaluation code is available at https://github.com/Zhoues/RoboRefer.
The following is a quick guide to loading and using RefSpatial-Bench.

<details>
<summary><strong>Method 1: Using the Hugging Face Datasets Library</strong></summary>

You can load the dataset easily using the `datasets` library:

```python
from datasets import load_dataset

# Load the entire dataset (all splits: location, placement, unseen)
# This returns a DatasetDict
dataset_dict = load_dataset("BAAI/RefSpatial-Bench")

# Access a specific split, for example 'location'
location_split_hf = dataset_dict["location"]

# Or load only a specific split directly (returns a Dataset object)
# location_split_direct = load_dataset("BAAI/RefSpatial-Bench", split="location")

# Access a sample from the location split
sample = location_split_hf[0]

# sample is a dictionary where 'image' and 'mask' are PIL Image objects
# To display (if in a suitable environment like a Jupyter notebook):
# sample["image"].show()
# sample["mask"].show()

print(f"Prompt (from HF Dataset): {sample['prompt']}")
print(f"Suffix (from HF Dataset): {sample['suffix']}")
print(f"Reasoning Steps (from HF Dataset): {sample['step']}")
```
</details>

<details>
<summary><strong>Method 2: Using Raw Data Files (JSON and Images)</strong></summary>

If you are working with the raw data format (e.g., after cloning the repository or downloading the raw files), you can load the questions from the `question.json` file of each split and then load the images and masks with a library like Pillow (PIL).

This example assumes you have the `Location`, `Placement`, and `Unseen` folders (each containing `image/`, `mask/`, and `question.json`) under a known `base_data_path`.

```python
import json
import os
from PIL import Image

# Set the dataset split name and base directory path
split_name = "Location"
base_data_path = "."  # Or set to your actual dataset path

# Load question.json file
question_file = os.path.join(base_data_path, split_name, "question.json")
try:
    with open(question_file, 'r', encoding='utf-8') as f:
        samples = json.load(f)
except FileNotFoundError:
    print(f"File not found: {question_file}")
    samples = []

# Process the first sample if available
if samples:
    sample = samples[0]
    print(f"\n--- Sample Info ---")
    print(f"ID: {sample['id']}")
    print(f"Prompt: {sample['prompt']}")

    # Construct absolute paths to RGB image and mask
    rgb_path = os.path.join(base_data_path, split_name, sample["rgb_path"])
    mask_path = os.path.join(base_data_path, split_name, sample["mask_path"])

    # Load images using Pillow
    try:
        rgb_image = Image.open(rgb_path)
        mask_image = Image.open(mask_path)
        sample["image"] = rgb_image
        sample["mask"] = mask_image
        print(f"RGB image size: {rgb_image.size}")
        print(f"Mask image size: {mask_image.size}, mode: {mask_image.mode}")
    except FileNotFoundError:
        print(f"Image file not found:\n{rgb_path}\n{mask_path}")
    except Exception as e:
        print(f"Error loading images: {e}")
else:
    print("No samples loaded.")
```
</details>

<details>
<summary><strong>Evaluating RoboRefer / RoboPoint</strong></summary>

To evaluate RoboRefer or RoboPoint on RefSpatial-Bench:

1. **Prepare Input Prompt:**

   Concatenate `sample["prompt"]` and `sample["suffix"]` to form the complete instruction.

   ```python
   # Example for constructing the full input for a sample
   full_input_instruction = sample["prompt"] + " " + sample["suffix"]
   ```

2. **Model Prediction, Output Parsing & Coordinate Scaling:**

   - **Model Prediction**: After providing the image (`sample["image"]`) and `full_input_instruction` to RoboRefer, it outputs **normalized coordinates** in a list format like `[(x, y), ...]`, where each `x` and `y` value is normalized to a range of 0-1.

   - **Output Parsing:** Parse this output string to extract the coordinate values (e.g., `x`, `y`).

   - **Coordinate Scaling:**

     1. Use `sample["image"].size` to get `(width, height)` and scale to the original image dimensions (height for y, width for x).

   ```python
   # Example: model_output_robo is "[(0.234, 0.567)]" from RoboRefer/RoboPoint
   # sample["image"] is a PIL Image object loaded by the datasets library or from the raw data
   import re

   def text2pts(text, width, height):
       pattern = r"\(([-+]?\d+\.?\d*(?:,\s*[-+]?\d+\.?\d*)*?)\)"
       matches = re.findall(pattern, text)
       points = []
       for match in matches:
           vector = [
               float(num) if '.' in num else int(num) for num in match.split(',')
           ]
           if len(vector) == 2:
               x, y = vector
               if isinstance(x, float) or isinstance(y, float):
                   x = int(x * width)
                   y = int(y * height)
               points.append((x, y))
       return points

   width, height = sample["image"].size
   scaled_roborefer_points = text2pts(model_output_robo, width, height)

   # These scaled_roborefer_points are then used for evaluation against the mask.
   ```

3. **Evaluation:** Compare `scaled_roborefer_points` against `sample["mask"]`. The main metric is **average success rate** — the percentage of predictions falling within the mask (see the sketch below).

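   For reference, a minimal mask-based success check might look like the sketch below. This is not the official evaluation script (see the RoboRefer repository linked above): it assumes the mask is non-zero on the target region, counts a prediction as a hit when it lands on a non-zero pixel, and the helper name `point_in_mask` is illustrative.

   ```python
   # Minimal sketch (not the official script): a prediction succeeds if it lies inside the GT mask.
   import numpy as np

   def point_in_mask(point, mask_image):
       """Return True if the (x, y) pixel falls on a non-zero pixel of the binary mask."""
       x, y = point
       mask = np.array(mask_image.convert("L"))  # H x W array, 0 = background
       h, w = mask.shape
       if not (0 <= x < w and 0 <= y < h):
           return False  # out-of-image predictions count as misses
       return mask[y, x] > 0

   # Score one sample by averaging over its predicted points,
   # then average these per-sample scores over the split for the success rate.
   hits = [point_in_mask(p, sample["mask"]) for p in scaled_roborefer_points]
   sample_score = sum(hits) / len(hits) if hits else 0.0
   ```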

</details>

<details>
<summary><strong>Evaluating the Gemini Series</strong></summary>

To evaluate the Gemini series on RefSpatial-Bench:

1. **Prepare Input Prompt:**

   Concatenate the string `"Locate the points of"` and `sample["object"]` to form the complete instruction.

   ```python
   # Example for constructing the full input for a sample
   full_input_instruction = "Locate the points of " + sample["object"] + "."
   ```

2. **Model Prediction, JSON Parsing & Coordinate Scaling:**

   * **Model Prediction:** After providing the image (`sample["image"]`) and `full_input_instruction` to the Gemini model series, it outputs **normalized coordinates in a JSON format** like `"```json\n[\n {\"point\": [y, x], \"label\": \"free space\"}, ...\n]\n```"`, where each `y` and `x` value is normalized to a range of 0-1000.

   * **JSON Parsing:** Parse this JSON string to extract the coordinate values (i.e., each `[y, x]` pair under `point`).

   * **Coordinate Conversion:** To use these coordinates for evaluation against the mask, they must be:

     1. Divided by 1000.0 to normalize them to the 0.0-1.0 range.
     2. Scaled to the original image dimensions (height for y, width for x).

   ```python
   # Example: model_output_gemini is "```json\n[\n {\"point\": [438, 330], \"label\": \"free space\"}\n]\n```" from Gemini
   # and sample["image"] is a PIL Image object loaded by the datasets library or from the raw data
   import json
   import re
   import numpy as np

   def json2pts(text, width, height):
       match = re.search(r"```(?:\w+)?\n(.*?)```", text, re.DOTALL)
       if not match:
           print("No valid code block found.")
           return np.empty((0, 2), dtype=int)

       json_cleaned = match.group(1).strip()

       try:
           data = json.loads(json_cleaned)
       except json.JSONDecodeError as e:
           print(f"JSON decode error: {e}")
           return np.empty((0, 2), dtype=int)

       points = []
       for item in data:
           if "point" in item and isinstance(item["point"], list) and len(item["point"]) == 2:
               y_norm, x_norm = item["point"]
               x = int(x_norm / 1000 * width)
               y = int(y_norm / 1000 * height)
               points.append((x, y))

       return np.array(points)

   width, height = sample["image"].size
   scaled_gemini_points = json2pts(model_output_gemini, width, height)
   # These scaled_gemini_points are then used for evaluation against the mask.
   ```

3. **Evaluation:** Compare `scaled_gemini_points` against `sample["mask"]`. The main metric is **average success rate** — the percentage of predictions falling within the mask.

</details>

<details>
<summary><strong>Evaluating Molmo</strong></summary>

To evaluate a Molmo model on this benchmark:

1. **Prepare Input Prompt:**

   Concatenate `"Locate several points of"` and `sample["object"]` to form the complete instruction.

   ```python
   # Example for constructing the full input for a sample
   full_input_instruction = "Locate several points of " + sample["object"] + "."
   ```

2. **Model Prediction, XML Parsing & Coordinate Scaling:**

   - **Model Prediction**: After providing the image (`sample["image"]`) and `full_input_instruction` to Molmo, it outputs **normalized coordinates in an XML format** like `<points x1="61.5" y1="40.4" x2="76.8" y2="21.8" ... />`, where each `x` and `y` value is normalized to a range of 0-100.

   - **XML Parsing:** Parse this XML string to extract the coordinate attributes (e.g., `x1`, `y1`, `x2`, `y2`, etc.).

   - **Coordinate Conversion:**

     1. Divide each coordinate by 100.0 to normalize it to the 0.0-1.0 range.
     2. Scale to the original image dimensions (height for y, width for x).

   ```python
   # Example: model_output_molmo is '<points x1="61.5" y1="40.4" x2="76.8" y2="21.8"/>' from Molmo
   # and sample["image"] is a PIL Image object loaded by the datasets library or from the raw data
   import re
   import numpy as np

   def xml2pts(xml_text, width, height):
       pattern = re.compile(r'(x\d+)="(-?\d+\.?\d*)"\s+(y\d+)="(-?\d+\.?\d*)"')
       matches = pattern.findall(xml_text)
       points = [(int(float(x_val) / 100.0 * width), int(float(y_val) / 100.0 * height)) for _, x_val, _, y_val in matches]
       return np.array(points)

   width, height = sample["image"].size
   scaled_molmo_points = xml2pts(model_output_molmo, width, height)
   # These scaled_molmo_points are then used for evaluation.
   ```

3. **Evaluation:** Compare `scaled_molmo_points` against `sample["mask"]`. The main metric is **average success rate** — the percentage of predictions falling within the mask.
</details>

## 📊 Dataset Statistics

Detailed statistics on `step` distributions and instruction lengths are provided in the table below.

| **RefSpatial-Bench** | **Step / Statistic** | **Samples** | **Avg. Prompt Length** |
| :------------------- | :------------------- | :---------- | :--------------------- |
| **Location**         | Step 1               | 30          | 11.13                  |
|                      | Step 2               | 38          | 11.97                  |
|                      | Step 3               | 32          | 15.28                  |
|                      | **Avg. (All)**       | **100**     | 12.78                  |
| **Placement**        | Step 2               | 43          | 15.47                  |
|                      | Step 3               | 28          | 16.07                  |
|                      | Step 4               | 22          | 22.68                  |
|                      | Step 5               | 7           | 22.71                  |
|                      | **Avg. (All)**       | **100**     | 17.68                  |
| **Unseen**           | Step 2               | 29          | 17.41                  |
|                      | Step 3               | 26          | 17.46                  |
|                      | Step 4               | 17          | 24.71                  |
|                      | Step 5               | 5           | 23.8                   |
|                      | **Avg. (All)**       | **77**      | 19.45                  |

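The per-step counts can be cross-checked directly against the Hugging Face splits; a minimal sketch is shown below. It assumes prompt length is counted in whitespace-separated words, which may differ slightly from the counting convention used for the table above.

```python
# Minimal sketch: per-step sample counts and average prompt length for one split.
from collections import Counter
from datasets import load_dataset

split = load_dataset("BAAI/RefSpatial-Bench", split="placement")

step_counts = Counter(split["step"])
avg_prompt_len = sum(len(p.split()) for p in split["prompt"]) / len(split)

print(dict(sorted(step_counts.items())))   # per-step sample counts
print(f"Avg. prompt length: {avg_prompt_len:.2f}")
```
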
## 🏆 Performance Highlights

As our research shows, **RefSpatial-Bench** presents a significant challenge to current models. In the table below, bold text indicates Top-1 accuracy, and underlined text indicates Top-2 accuracy.

| **Benchmark** | **Gemini-2.5-Pro** | **SpaceLLaVA** | **RoboPoint** | **Molmo-7B** | **Molmo-72B** | **RoboRefer 2B-SFT** | **RoboRefer 8B-SFT** | **RoboRefer 2B-RFT** |
| :----------------: | :----------------: | :------------: | :-----------: | :----------: | :-----------: | :------------: | :------------: | :------------: |
| RefSpatial-Bench-L | 46.96 | 5.82 | 22.87 | 21.91 | 45.77 | <u>47.00</u> | **52.00** | **52.00** |
| RefSpatial-Bench-P | 24.21 | 4.31 | 9.27 | 12.85 | 14.74 | 48.00 | <u>53.00</u> | **54.00** |
| RefSpatial-Bench-U | 27.14 | 4.02 | 8.40 | 12.23 | 21.24 | 33.77 | <u>37.66</u> | **41.56** |

## 📫 Contact

If you have any questions about the benchmark, feel free to email Jingkun (anjingkun02@gmail.com) or Enshen (zhouenshen@buaa.edu.cn).

<img src="https://api.visitorbadge.io/api/combined?path=https%3A%2F%2Fzhoues.github.io&labelColor=%232ccce4&countColor=%230158f9" alt="visitor badge" style="display: none;" />
<img src="https://api.visitorbadge.io/api/combined?path=https%3A%2F%2Fanjingkun.github.io&labelColor=%232ccce4&countColor=%230158f9" alt="visitor badge" style="display: none;" />

## 📜 Citation

Please consider citing our work if this benchmark is useful for your research.

```bibtex
@article{zhou2025roborefer,
  title={RoboRefer: Towards Spatial Referring with Reasoning in Vision-Language Models for Robotics},
  author={Zhou, Enshen and An, Jingkun and Chi, Cheng and Han, Yi and Rong, Shanyu and Zhang, Chi and Wang, Pengwei and Wang, Zhongyuan and Huang, Tiejun and Sheng, Lu and others},
  journal={arXiv preprint arXiv:2506.04308},
  year={2025}
}
```