JingkunAn committed on
Commit a683a85 · verified · 1 Parent(s): fa6c050

Update README.md

Files changed (1)
  1. README.md +114 -72
README.md CHANGED
@@ -33,26 +33,31 @@ configs:
33
  path: data/location-*
34
  - split: placement
35
  path: data/placement-*
 
36
  ---
37
 
38
 
39
 
40
  <!-- New benchmark release announcement -->
 
41
  <div style="background-color: #ecfdf5; border-left: 4px solid #10b981; padding: 0.75em 1em; margin-top: 1em; color: #065f46; font-weight: bold; border-radius: 0.375em;">
42
🎉 This repository contains the new version of <strong>RefSpatial-Bench</strong>: <strong>RefSpatial-Expand-Bench</strong>!<br>
43
  Based on the original benchmark, the new version <strong>extends indoor scenes</strong> (e.g., factories, stores) and adds <strong>previously uncovered outdoor scenarios</strong> (e.g., streets, parking lots), providing a more comprehensive evaluation of spatial referring tasks.
44
  </div>
45
 
 
46
  <div style="background-color: #fef3c7; border-left: 4px solid #f59e0b; padding: 0.75em 1em; margin-top: 1em; color: #78350f; font-weight: bold; border-radius: 0.375em;">
47
  πŸ† The paper associated with this benchmark, <strong>RoboRefer</strong>, has been accepted to <strong>NeurIPS 2025</strong>!<br>
48
Thank you all for your attention and support! 🙌
49
  </div>
50
 
51
 
 
52
  <h1 style="display: flex; align-items: center; justify-content: center; font-size: 1.75em; font-weight: 600;">
53
-
 
54
  <img src="https://huggingface.co/datasets/BAAI/RefSpatial-Bench/resolve/main/assets/logo.png" style="height: 60px; flex-shrink: 0;">
55
-
56
  <span style="line-height: 1.2; margin-left: 0px; text-align: center;">
57
  RefSpatial-Bench: A Benchmark for Multi-step Spatial Referring
58
  </span>
@@ -63,7 +68,7 @@ configs:
63
  <!-- # RefSpatial-Bench: A Benchmark for Multi-step Spatial Referring with Reasoning -->
64
 
65
  <!-- [![Generic badge](https://img.shields.io/badge/πŸ€—%20Datasets-BAAI/RefSpatial--Bench-blue.svg)](https://huggingface.co/datasets/BAAI/RefSpatial-Bench) -->
66
-
67
  <p align="center">
68
  <a href="https://zhoues.github.io/RoboRefer"><img src="https://img.shields.io/badge/%F0%9F%8F%A0%20Project-Homepage-blue" alt="HomePage"></a>
69
  &nbsp;
@@ -77,6 +82,7 @@ configs:
77
  </p>
78
 
79
 
 
80
Welcome to **RefSpatial-Bench**, a challenging benchmark of real-world cluttered scenes for evaluating complex, multi-step spatial referring with reasoning.
81
 
82
  <img src="https://api.visitorbadge.io/api/combined?path=https%3A%2F%2Fzhoues.github.io&labelColor=%232ccce4&countColor=%230158f9" alt="visitor badge" style="display: none;" />
@@ -85,6 +91,7 @@ Welcome to **RefSpatial-Bench**, a challenging benchmark based on real-world clu
85
 
86
 
87
  <!-- ## πŸ“ Table of Contents
 
88
  * [🎯 Tasks](#🎯-tasks)
89
  * [🧠 Reasoning Steps](#🧠-reasoning-steps)
90
  * [πŸ“ Dataset Structure](#πŸ“-dataset-structure)
@@ -99,16 +106,12 @@ Welcome to **RefSpatial-Bench**, a challenging benchmark based on real-world clu
99
  * [πŸ“Š Dataset Statistics](#πŸ“Š-dataset-statistics)
100
  * [πŸ† Performance Highlights](#πŸ†-performance-highlights)
101
  * [πŸ“œ Citation](#πŸ“œ-citation)
102
- --- -->
103
 
104
  ## 🎯 Task Split
105
- - Location Task: This task contains **100** samples, which requires model to predicts a 2D point indicating the **unique target object**.
106
-
107
- - Placement Task: This task contains **100** samples, which requires model to predicts a 2D point within the **desired free space**.
108
 
109
- - Unseen Set: This set comprises **77** samples from the Location/Placement task, specifically designed to **evaluate model generalization after SFT/RFT training on RefSpatial**, as it includes novel spatial relation combinations not present in RefSpatial.
110
-
111
- <div style="background-color: #ffe4e6; border-left: 4px solid #dc2626; padding: 0.75em 1em; margin-top: 1em; color: #b91c1c; font-weight: bold; border-radius: 0.375em;"> ⚠️ Warning: If your model is not trained with RefSpatial, Unseen set should not be used for evaluation. </div>
112
 
113
 
114
  ## 🧠 Reasoning Steps
@@ -124,6 +127,7 @@ We provide two formats:
124
  <details>
125
  <summary><strong>Hugging Face Datasets Format</strong></summary>
126
 
 
127
The `data/` folder contains HF-compatible splits:
128
 
129
  * `location`
@@ -135,6 +139,7 @@ Each sample includes:
135
  | Field | Description |
136
  | :------- | :----------------------------------------------------------- |
137
  | `id` | Unique integer ID |
 
138
| `object` | Natural language description of the target (an object or a free area), extracted from the `prompt` |
139
| `prompt` | Full referring expression |
140
  | `suffix` | Instruction for answer formatting (**different models may use different suffixes or none**; we provide the format used by RoboRefer) |
@@ -147,6 +152,7 @@ Each sample includes:
147
  <details>
148
  <summary><strong>Raw Data Format</strong></summary>
149
 
 
150
  For full reproducibility and visualization, we also include the original files under:
151
 
152
  * `Location/`
@@ -173,9 +179,11 @@ Each entry in `question.json` has the following format:
173
  "rgb_path": "image/40.png",
174
  "mask_path": "mask/40.png",
175
  "category": "location",
176
- "step": 2
 
177
  }
178
  ```
 
179
  </details>
180
 
181
 
@@ -191,6 +199,7 @@ The following provides a quick guide on how to load and use the RefSpatial-Bench
191
  <details>
192
  <summary><strong>Method 1: Using Hugging Face Library</strong></summary>
193
 
 
194
  You can load the dataset easily using the `datasets` library:
195
 
196
  ```python
@@ -218,12 +227,14 @@ print(f"Prompt (from HF Dataset): {sample['prompt']}")
218
  print(f"Suffix (from HF Dataset): {sample['suffix']}")
219
  print(f"Reasoning Steps (from HF Dataset): {sample['step']}")
220
  ```
 
221
  </details>
222
 
223
  <details>
224
  <summary><strong>Method 2: Using Raw Data Files (JSON and Images)</strong></summary>
225
 
226
 
 
227
  If you are working with the raw data format (e.g., after cloning the repository or downloading the raw files), you can load the questions from the `question.json` file for each split and then load the images and masks using a library like Pillow (PIL).
228
 
229
  This example assumes you have the `location`, `placement`, and `unseen` folders (each containing `image/`, `mask/`, and `question.json`) in a known `base_data_path`.
@@ -272,17 +283,19 @@ if samples:
272
  else:
273
  print("No samples loaded.")
274
  ```
 
275
  </details>
276
 
277
 
278
  <details>
279
  <summary><strong>Evaluating RoboRefer / RoboPoint</strong></summary>
280
 
 
281
  To evaluate RoboRefer on RefSpatial-Bench:
282
 
283
  1. **Prepare Input Prompt:**
284
 
285
- Concatenate `sample["prompt"]` and `sample["suffix"]` to form the complete instruction.
286
 
287
  ```python
288
  # Example for constructing the full input for a sample
@@ -324,7 +337,7 @@ To evaluate RoboRefer on RefSpatial-Bench:
324
  # These scaled_roborefer_points are then used for evaluation against the mask.
325
  ```
326
 
327
- 4. **Evaluation:** Compare `scaled_roborefer_points` against `sample["mask"]`. The main metric is **average success rate** β€” the percentage of predictions falling within the mask.
328
 
329
  </details>
330
 
@@ -332,6 +345,7 @@ To evaluate RoboRefer on RefSpatial-Bench:
332
  <summary><strong>Evaluating Gemini Series</strong></summary>
333
 
334
 
 
335
  To evaluate Gemini Series on RefSpatial-Bench:
336
 
337
  1. **Prepare Input Prompt:**
@@ -345,46 +359,47 @@ To evaluate Gemini Series on RefSpatial-Bench:
345
 
346
  2. **Model Prediction & JSON Parsing & Coordinate Scaling:**
347
 
348
- * **Model Prediction:** After providing the image (`sample["image"]`) and `full_input_instruction` to the Gemini model series, it outputs **normalized coordinates in an JSON format** like `"```json\n[\n {\"point\": [y, x], \"label\": \"free space\"}, ...\n]\n```"`, where each `y` and `x` value is normalized to a range of 0-1000.
349
-
350
- * **JSON Parsing:** Parse this JSON string to extract the coordinate attributes (e.g., `x1`, `y1`, `x2`, `y2`, etc.).
351
-
352
- * **Coordinate Conversion:** To use these coordinates for evaluation against the mask, they must be:
353
-
354
- 1. Divided by 1000.0 to normalize them to the 0.0-1.0 range.
355
- 2. Scaled to the original image dimensions (height for y, width for x).
356
- ```python
357
- # Example: model_output_gemini is "```json\n[\n {\"point\": [438, 330], \"label\": \"free space\"}\n]\n```" from Gemini
358
- # and sample["image"] is a PIL Image object loaded by the datasets library or loaded from the raw data
359
-
360
- def json2pts(text, width, height):
361
- match = re.search(r"```(?:\w+)?\n(.*?)```", text, re.DOTALL)
362
- if not match:
363
- print("No valid code block found.")
364
- return np.empty((0, 2), dtype=int)
 
365
 
366
- json_cleaned = match.group(1).strip()
367
 
368
- try:
369
- data = json.loads(json_cleaned)
370
- except json.JSONDecodeError as e:
371
- print(f"JSON decode error: {e}")
372
- return np.empty((0, 2), dtype=int)
373
 
374
- points = []
375
- for item in data:
376
- if "point" in item and isinstance(item["point"], list) and len(item["point"]) == 2:
377
- y_norm, x_norm = item["point"]
378
- x = int(x_norm / 1000 * width)
379
- y = int(y_norm / 1000 * height)
380
- points.append((x, y))
381
 
382
- return np.array(points)
383
-
384
- width, height = sample["image"].size
385
- scaled_gemini_points = json2pts(model_output_gemini, width, height)
386
- # These scaled_gemini_points are then used for evaluation against the mask.
387
- ```
388
 
389
3. **Evaluation:** Compare `scaled_gemini_points` against `sample["mask"]`. The main metric is **average success rate**, i.e., the percentage of predictions falling within the mask.
390
 
@@ -393,6 +408,7 @@ To evaluate Gemini Series on RefSpatial-Bench:
393
  <details>
394
<summary><strong>Evaluating Molmo</strong></summary>
395
 
 
396
  To evaluate a Molmo model on this benchmark:
397
 
398
  1. **Prepare Input Prompt:**
@@ -414,6 +430,7 @@ To evaluate a Molmo model on this benchmark:
414
 
415
  1. Divide each coordinate by 100.0 to normalize it to the 0.0-1.0 range.
416
2. Scale it to the original image dimensions (height for y, width for x).
 
417
  ```python
418
  # Example: model_output_molmo is '<points x1="61.5" y1="40.4" x2="76.8" y2="21.8"/>' from Molmo
419
  # and sample["image"] is a PIL Image object loaded by the datasets library or loaded from the raw data
@@ -431,45 +448,70 @@ To evaluate a Molmo model on this benchmark:
431
  ```
432
 
433
3. **Evaluation:** Compare `scaled_molmo_points` against `sample["mask"]`. The main metric is **average success rate**, i.e., the percentage of predictions falling within the mask.
434
- </details>
435
 
436
 
437
## 📊 Dataset Statistics
438
 
439
  Detailed statistics on `step` distributions and instruction lengths are provided in the table below.
440
 
441
- | **RefSpatial-Bench** | **Step / Statistic** | **Samples** | **Avg. Prompt Length** |
442
- | :------------------- | :------------------- | :---------- | :--------------------- |
443
- | **Location** | Step 1 | 30 | 11.13 |
444
- | | Step 2 | 38 | 11.97 |
445
- | | Step 3 | 32 | 15.28 |
446
- | | **Avg. (All)** | **100** | 12.78 |
447
- | **Placement** | Step 2 | 43 | 15.47 |
448
- | | Step 3 | 28 | 16.07 |
449
- | | Step 4 | 22 | 22.68 |
450
- | | Step 5 | 7 | 22.71 |
451
- | | **Avg. (All)** | **100** | 17.68 |
452
- | **Unseen** | Step 2 | 29 | 17.41 |
453
- | | Step 3 | 26 | 17.46 |
454
- | | Step 4 | 17 | 24.71 |
455
- | | Step 5 | 5 | 23.8 |
456
- | | **Avg. (All)** | **77** | 19.45 |
 
 
 
 
457
 
458
  ## πŸ† Performance Highlights
459
 
460
- As our research shows, **RefSpatial-Bench** presents a significant challenge to current models. In the table below, bold text indicates Top-1 accuracy, and underline text indicates Top-2 accuracy.
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
461
 
462
- | **Benchmark** | **Gemini-2.5-Pro** | **SpaceLLaVA** | **RoboPoint** | **Molmo-7B** | **Molmo-72B** | **RoboRefer 2B-SFT** | **RoboRefer 8B-SFT** | **RoboRefer 2B-RFT** |
463
- | :----------------: | :----------------: | :------------: | :-----------: | :----------: | :-----------: | :------------: | :------------: | :------------: |
464
- | RefSpatial-Bench-L | 46.96 | 5.82 | 22.87 | 21.91 | 45.77 | <u>47.00</u> | **52.00** | **52.00** |
465
- | RefSpatial-Bench-P | 24.21 | 4.31 | 9.27 | 12.85 | 14.74 | 48.00 | <u>53.00</u> | **54.00** |
466
- | RefSpatial-Bench-U | 27.14 | 4.02 | 8.40 | 12.23 | 21.24 | 33.77 | <u>37.66</u> | **41.56** |
467
 
468
## 📫 Contact
469
 
470
  If you have any questions about the benchmark, feel free to email Jingkun (anjingkun02@gmail.com) and Enshen (zhouenshen@buaa.edu.cn).
471
  <img src="https://api.visitorbadge.io/api/combined?path=https%3A%2F%2Fzhoues.github.io&labelColor=%232ccce4&countColor=%230158f9" alt="visitor badge" style="display: none;" />
472
  <img src="https://api.visitorbadge.io/api/combined?path=https%3A%2F%2Fanjingkun.github.io&labelColor=%232ccce4&countColor=%230158f9" alt="visitor badge" style="display: none;" />
 
473
## 📜 Citation
474
 
475
  Please consider citing our work if this benchmark is useful for your research.
 
33
  path: data/location-*
34
  - split: placement
35
  path: data/placement-*
36
+
37
  ---
38
 
39
 
40
 
41
  <!-- New benchmark release announcement -->
42
+
43
  <div style="background-color: #ecfdf5; border-left: 4px solid #10b981; padding: 0.75em 1em; margin-top: 1em; color: #065f46; font-weight: bold; border-radius: 0.375em;">
44
🎉 This repository contains the new version of <strong>RefSpatial-Bench</strong>: <strong>RefSpatial-Expand-Bench</strong>!<br>
45
  Based on the original benchmark, the new version <strong>extends indoor scenes</strong> (e.g., factories, stores) and adds <strong>previously uncovered outdoor scenarios</strong> (e.g., streets, parking lots), providing a more comprehensive evaluation of spatial referring tasks.
46
  </div>
47
 
48
+
49
  <div style="background-color: #fef3c7; border-left: 4px solid #f59e0b; padding: 0.75em 1em; margin-top: 1em; color: #78350f; font-weight: bold; border-radius: 0.375em;">
50
  πŸ† The paper associated with this benchmark, <strong>RoboRefer</strong>, has been accepted to <strong>NeurIPS 2025</strong>!<br>
51
Thank you all for your attention and support! 🙌
52
  </div>
53
 
54
 
55
+
56
  <h1 style="display: flex; align-items: center; justify-content: center; font-size: 1.75em; font-weight: 600;">
57
+
58
+
59
  <img src="https://huggingface.co/datasets/BAAI/RefSpatial-Bench/resolve/main/assets/logo.png" style="height: 60px; flex-shrink: 0;">
60
+
61
  <span style="line-height: 1.2; margin-left: 0px; text-align: center;">
62
  RefSpatial-Bench: A Benchmark for Multi-step Spatial Referring
63
  </span>
 
68
  <!-- # RefSpatial-Bench: A Benchmark for Multi-step Spatial Referring with Reasoning -->
69
 
70
  <!-- [![Generic badge](https://img.shields.io/badge/πŸ€—%20Datasets-BAAI/RefSpatial--Bench-blue.svg)](https://huggingface.co/datasets/BAAI/RefSpatial-Bench) -->
71
+
72
  <p align="center">
73
  <a href="https://zhoues.github.io/RoboRefer"><img src="https://img.shields.io/badge/%F0%9F%8F%A0%20Project-Homepage-blue" alt="HomePage"></a>
74
  &nbsp;
 
82
  </p>
83
 
84
 
85
+
86
Welcome to **RefSpatial-Bench**, a challenging benchmark of real-world cluttered scenes for evaluating complex, multi-step spatial referring with reasoning.
87
 
88
  <img src="https://api.visitorbadge.io/api/combined?path=https%3A%2F%2Fzhoues.github.io&labelColor=%232ccce4&countColor=%230158f9" alt="visitor badge" style="display: none;" />
 
91
 
92
 
93
  <!-- ## πŸ“ Table of Contents
94
+
95
  * [🎯 Tasks](#🎯-tasks)
96
  * [🧠 Reasoning Steps](#🧠-reasoning-steps)
97
  * [πŸ“ Dataset Structure](#πŸ“-dataset-structure)
 
106
  * [πŸ“Š Dataset Statistics](#πŸ“Š-dataset-statistics)
107
  * [πŸ† Performance Highlights](#πŸ†-performance-highlights)
108
  * [πŸ“œ Citation](#πŸ“œ-citation)
109
+ --- -->
110
 
111
  ## 🎯 Task Split
 
 
 
112
 
113
+ - Location Task: This task contains **241** samples, each requiring the model to predict a 2D point indicating the **unique target object**.
114
+ - Placement Task: This task contains **200** samples, each requiring the model to predict a 2D point within the **desired free space**. (A quick check of these split sizes is sketched below.)
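A quick way to confirm these split sizes is to load the dataset with the `datasets` library and print the number of rows per split, as sketched below. The repository ID is a placeholder borrowed from the original benchmark; substitute the ID under which this expanded version is published.

```python
from datasets import load_dataset

# Placeholder repo ID (taken from the original RefSpatial-Bench); adjust it to
# the ID under which this expanded version is hosted.
ds = load_dataset("BAAI/RefSpatial-Bench")

for split_name, split in ds.items():
    print(f"{split_name}: {split.num_rows} samples")
# Expected for this version: location -> 241, placement -> 200
```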
 
115
 
116
 
117
  ## 🧠 Reasoning Steps
 
127
  <details>
128
  <summary><strong>Hugging Face Datasets Format</strong></summary>
129
 
130
+
131
The `data/` folder contains HF-compatible splits:
132
 
133
  * `location`
 
139
  | Field | Description |
140
  | :------- | :----------------------------------------------------------- |
141
  | `id` | Unique integer ID |
142
+ | `scene` | Scene type of the sample: `indoor` or `outdoor` |
143
| `object` | Natural language description of the target (an object or a free area), extracted from the `prompt` |
144
| `prompt` | Full referring expression |
145
  | `suffix` | Instruction for answer formatting (**different models may use different suffixes or none**; we provide the format used by RoboRefer) |
 
152
  <details>
153
  <summary><strong>Raw Data Format</strong></summary>
154
 
155
+
156
  For full reproducibility and visualization, we also include the original files under:
157
 
158
  * `Location/`
 
179
  "rgb_path": "image/40.png",
180
  "mask_path": "mask/40.png",
181
  "category": "location",
182
+ "step": 2,
183
+ "scene": indoor
184
  }
185
  ```
186
+
187
  </details>
188
 
189
 
 
199
  <details>
200
  <summary><strong>Method 1: Using Hugging Face Library</strong></summary>
201
 
202
+
203
  You can load the dataset easily using the `datasets` library:
204
 
205
  ```python
 
227
  print(f"Suffix (from HF Dataset): {sample['suffix']}")
228
  print(f"Reasoning Steps (from HF Dataset): {sample['step']}")
229
  ```
230
+
231
  </details>
232
 
233
  <details>
234
  <summary><strong>Method 2: Using Raw Data Files (JSON and Images)</strong></summary>
235
 
236
 
237
+
238
  If you are working with the raw data format (e.g., after cloning the repository or downloading the raw files), you can load the questions from the `question.json` file for each split and then load the images and masks using a library like Pillow (PIL).
239
 
240
  This example assumes you have the `location`, `placement`, and `unseen` folders (each containing `image/`, `mask/`, and `question.json`) in a known `base_data_path`.
 
283
  else:
284
  print("No samples loaded.")
285
  ```
286
+
287
  </details>
288
 
289
 
290
  <details>
291
  <summary><strong>Evaluating RoboRefer / RoboPoint</strong></summary>
292
 
293
+
294
  To evaluate RoboRefer on RefSpatial-Bench:
295
 
296
  1. **Prepare Input Prompt:**
297
 
298
+ Concatenate `sample["prompt"]` and `sample["suffix"]` to form the complete instruction.
299
 
300
  ```python
301
  # Example for constructing the full input for a sample
 
337
  # These scaled_roborefer_points are then used for evaluation against the mask.
338
  ```
339
 
340
+ 3. **Evaluation:** Compare `scaled_roborefer_points` against `sample["mask"]`. The main metric is **average success rate**, i.e., the percentage of predictions falling within the mask.
341
 
342
  </details>
343
 
 
345
  <summary><strong>Evaluating Gemini Series</strong></summary>
346
 
347
 
348
+
349
  To evaluate Gemini Series on RefSpatial-Bench:
350
 
351
  1. **Prepare Input Prompt:**
 
359
 
360
  2. **Model Prediction & JSON Parsing & Coordinate Scaling:**
361
 
362
+ * **Model Prediction:** After providing the image (`sample["image"]`) and `full_input_instruction` to a Gemini-series model, it outputs **normalized coordinates in JSON format**, e.g. `"```json\n[\n {\"point\": [y, x], \"label\": \"free space\"}, ...\n]\n```"`, where each `y` and `x` value is normalized to the range 0-1000.
363
+
364
+ * **JSON Parsing:** Parse this JSON string and extract each `point` entry, which is a `[y, x]` coordinate pair.
365
+
366
+ * **Coordinate Conversion:** To use these coordinates for evaluation against the mask, they must be:
367
+
368
+ 1. Divided by 1000.0 to normalize them to the 0.0-1.0 range.
369
+ 2. Scaled to the original image dimensions (height for y, width for x).
370
+
371
+ ```python
372
+ # Example: model_output_gemini is "```json\n[\n {\"point\": [438, 330], \"label\": \"free space\"}\n]\n```" from Gemini
373
+ # and sample["image"] is a PIL Image object loaded by the datasets library or loaded from the raw data
374
+ import json
+ import re
+ import numpy as np
+
375
+ def json2pts(text, width, height):
376
+ match = re.search(r"```(?:\w+)?\n(.*?)```", text, re.DOTALL)
377
+ if not match:
378
+ print("No valid code block found.")
379
+ return np.empty((0, 2), dtype=int)
380
 
381
+ json_cleaned = match.group(1).strip()
382
 
383
+ try:
384
+ data = json.loads(json_cleaned)
385
+ except json.JSONDecodeError as e:
386
+ print(f"JSON decode error: {e}")
387
+ return np.empty((0, 2), dtype=int)
388
 
389
+ points = []
390
+ for item in data:
391
+ if "point" in item and isinstance(item["point"], list) and len(item["point"]) == 2:
392
+ y_norm, x_norm = item["point"]
393
+ x = int(x_norm / 1000 * width)
394
+ y = int(y_norm / 1000 * height)
395
+ points.append((x, y))
396
 
397
+ return np.array(points)
398
+
399
+ width, height = sample["image"].size
400
+ scaled_gemini_points = json2pts(model_output_gemini, width, height)
401
+ # These scaled_gemini_points are then used for evaluation against the mask.
402
+ ```
403
 
404
3. **Evaluation:** Compare `scaled_gemini_points` against `sample["mask"]`. The main metric is **average success rate**, i.e., the percentage of predictions falling within the mask.
405
 
 
408
  <details>
409
<summary><strong>Evaluating Molmo</strong></summary>
410
 
411
+
412
  To evaluate a Molmo model on this benchmark:
413
 
414
  1. **Prepare Input Prompt:**
 
430
 
431
  1. Divide each coordinate by 100.0 to normalize it to the 0.0-1.0 range.
432
2. Scale it to the original image dimensions (height for y, width for x).
433
+
434
  ```python
435
  # Example: model_output_molmo is '<points x1="61.5" y1="40.4" x2="76.8" y2="21.8"/>' from Molmo
436
  # and sample["image"] is a PIL Image object loaded by the datasets library or loaded from the raw data
 
448
  ```
449
 
450
3. **Evaluation:** Compare `scaled_molmo_points` against `sample["mask"]`. The main metric is **average success rate**, i.e., the percentage of predictions falling within the mask; a minimal sketch of this check follows below.
451
+ </details>
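All three recipes above end with the same check: a prediction counts as a success when the predicted point falls inside the ground-truth mask, and the benchmark score is the average success rate over samples. Below is a minimal sketch of that check. It assumes each prediction is an `(x, y)` pixel coordinate and that `sample["mask"]` is a binary mask image as loaded above; the helper name `points_in_mask_rate` and the averaging over multiple predicted points per sample are illustrative choices, not the official evaluation script.

```python
import numpy as np

def points_in_mask_rate(points, mask_image):
    """Fraction of predicted (x, y) pixel points that land inside the mask.

    `mask_image` is a PIL image whose nonzero pixels mark the target region.
    Averaging over the points of one sample is an illustrative convention.
    """
    mask = np.array(mask_image.convert("L")) > 0  # H x W boolean array
    if len(points) == 0:
        return 0.0
    height, width = mask.shape
    hits = [
        0 <= int(x) < width and 0 <= int(y) < height and bool(mask[int(y), int(x)])
        for x, y in points
    ]
    return float(np.mean(hits))

# Example usage with any of the scaled point arrays above:
# per_sample_score = points_in_mask_rate(scaled_molmo_points, sample["mask"])
# The split-level score is the mean of per_sample_score over all samples.
```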
452
 
453
 
454
## 📊 Dataset Statistics
455
 
456
Detailed statistics on scene types, `step` distributions, and average prompt lengths are provided in the tables below.
457
 
458
+ | Task Type | Indoor | Outdoor | Total |
459
+ | --------- | ------- | ------- | ------- |
460
+ | Location | 115 | 126 | 241 |
461
+ | Placement | 120 | 80 | 200 |
462
+ | **Total** | **235** | **206** | **441** |
463
+
464
+ | Task Type | Step | Samples | Avg. Prompt Length |
465
+ | --------- | -------------- | ------- | ------------------ |
466
+ | Location | Step 1 | 54 | 10.61 |
467
+ | | Step 2 | 129 | 12.56 |
468
+ | | Step 3 | 58 | 16.10 |
469
+ | | **Avg. (All)** | **241** | **12.98** |
470
+ | Placement | Step 1 | 3 | 15.00 |
471
+ | | Step 2 | 86 | 15.14 |
472
+ | | Step 3 | 75 | 16.95 |
473
+ | | Step 4 | 29 | 22.24 |
474
+ | | Step 5 | 7 | 22.71 |
475
+ | | **Avg. (All)** | **200** | **17.11** |
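The scene, step, and prompt-length statistics in the tables above can be recomputed from the loaded splits. The sketch below assumes prompt length is counted in whitespace-separated words; the exact counting convention behind the tables is not stated here, so treat its output as approximate.

```python
from collections import Counter, defaultdict

def split_statistics(split):
    """Scene counts, per-step sample counts, and average prompt length (in words)."""
    scene_counts = Counter(split["scene"])
    step_counts = Counter(split["step"])
    words_by_step = defaultdict(list)
    for step, prompt in zip(split["step"], split["prompt"]):
        words_by_step[step].append(len(prompt.split()))
    avg_prompt_len = {s: sum(v) / len(v) for s, v in words_by_step.items()}
    return scene_counts, step_counts, avg_prompt_len

# Example (with `ds` loaded as in the usage guide):
# scenes, steps, avg_len = split_statistics(ds["placement"])
# print(scenes)   # e.g. {'indoor': 120, 'outdoor': 80}
# print(steps)    # per-step sample counts
# print(avg_len)  # per-step average prompt length in words
```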
476
+
477
+
478
 
479
  ## πŸ† Performance Highlights
480
 
481
+ Detailed accuracy results of the RoboRefer-2B-SFT and RoboRefer-8B-SFT models on RefSpatial-Expand-Bench are reported below.
482
+
483
+ #### **Location Task**
484
+
485
+ | Category | 2B SFT | 8B SFT |
486
+ | -------- | ------ | ------ |
487
+ | Overall | 50.21 | 61.00 |
488
+ | Indoor | 49.57 | 58.26 |
489
+ | Outdoor | 50.79 | 63.49 |
490
+ | Step 1 | 61.11 | 72.22 |
491
+ | Step 2 | 52.71 | 62.02 |
492
+ | Step 3 | 34.48 | 48.28 |
493
+
494
+ #### **Placement Task**
495
+
496
+ | Category | 2B SFT | 8B SFT |
497
+ | -------- | ------ | ------ |
498
+ | Overall | 48.50 | 60.00 |
499
+ | Indoor | 50.83 | 60.00 |
500
+ | Outdoor | 45.00 | 60.00 |
501
+ | Step 1 | 33.33 | 33.33 |
502
+ | Step 2 | 41.86 | 51.16 |
503
+ | Step 3 | 54.67 | 70.67 |
504
+ | Step 4 | 48.28 | 55.17 |
505
+ | Step 5 | 71.43 | 85.71 |
506
+
507
 
 
 
 
 
 
508
 
509
## 📫 Contact
510
 
511
  If you have any questions about the benchmark, feel free to email Jingkun (anjingkun02@gmail.com) and Enshen (zhouenshen@buaa.edu.cn).
512
  <img src="https://api.visitorbadge.io/api/combined?path=https%3A%2F%2Fzhoues.github.io&labelColor=%232ccce4&countColor=%230158f9" alt="visitor badge" style="display: none;" />
513
  <img src="https://api.visitorbadge.io/api/combined?path=https%3A%2F%2Fanjingkun.github.io&labelColor=%232ccce4&countColor=%230158f9" alt="visitor badge" style="display: none;" />
514
+
515
## 📜 Citation
516
 
517
  Please consider citing our work if this benchmark is useful for your research.