JingkunAn committed
Commit b6f1a8d · verified · 1 Parent(s): 48603cd

Update README.md

Files changed (1): README.md (+13 -16)

README.md CHANGED
@@ -59,15 +59,14 @@ configs:
  <img src="https://huggingface.co/datasets/BAAI/RefSpatial-Bench/resolve/main/assets/logo.png" style="height: 60px; flex-shrink: 0;">

  <span style="line-height: 1.2; margin-left: 0px; text-align: center;">
- RefSpatial-Bench: A Benchmark for Multi-step Spatial Referring
+ RefSpatial-Expand-Bench: A Benchmark for Multi-step Spatial Referring
  </span>

  </h1>

-
- <!-- # RefSpatial-Bench: A Benchmark for Multi-step Spatial Referring with Reasoning -->
+ <!-- # RefSpatial-Expand-Bench: A Benchmark for Multi-step Spatial Referring with Reasoning -->

- <!-- [![Generic badge](https://img.shields.io/badge/🤗%20Datasets-BAAI/RefSpatial--Bench-blue.svg)](https://huggingface.co/datasets/BAAI/RefSpatial-Bench) -->
+ <!-- [![Generic badge](https://img.shields.io/badge/🤗%20Datasets-BAAI/RefSpatial--Expand--Bench-blue.svg)](https://huggingface.co/datasets/JingkunAn/RefSpatial-Expand-Bench) -->

  <p align="center">
  <a href="https://zhoues.github.io/RoboRefer"><img src="https://img.shields.io/badge/%F0%9F%8F%A0%20Project-Homepage-blue" alt="HomePage"></a>
@@ -83,7 +82,7 @@ configs:



- Welcome to **RefSpatial-Bench**, a challenging benchmark based on real-world cluttered scenes to evaluate more complex multi-step spatial referring with reasoning.
+ Welcome to **RefSpatial-Expand-Bench**, a challenging benchmark built on real-world cluttered scenes for evaluating complex multi-step spatial referring with reasoning.

  <img src="https://api.visitorbadge.io/api/combined?path=https%3A%2F%2Fzhoues.github.io&labelColor=%232ccce4&countColor=%230158f9" alt="visitor badge" style="display: none;" />
  <img src="https://api.visitorbadge.io/api/combined?path=https%3A%2F%2Fanjingkun.github.io&labelColor=%232ccce4&countColor=%230158f9" alt="visitor badge" style="display: none;" />
@@ -132,14 +131,13 @@ We provide two formats:

  * `location`
  * `placement`
- * `unseen`

  Each sample includes:

  | Field | Description |
  | :------- | :----------------------------------------------------------- |
  | `id` | Unique integer ID |
- | scene | indoor or outdoor |
+ | `scene` | Indoor or outdoor |
  | `object` | Natural language description of the target (an object or free area), extracted from the `prompt` |
  | `prompt` | Full referring expression |
  | `suffix` | Instruction for answer formatting (**different models may use different suffixes or none**; we provide the format used by RoboRefer) |
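
To make the schema above concrete, here is a minimal loading sketch (an illustration, not the official evaluation code). It assumes the Hugging Face splits expose the table's fields as plain dictionary keys and uses the repo id introduced by this commit:

```python
from datasets import load_dataset

# Load one split of the renamed dataset (repo id taken from the diff above).
location = load_dataset("JingkunAn/RefSpatial-Expand-Bench", split="location")

sample = location[0]
# `prompt` carries the full referring expression; `suffix` carries the
# RoboRefer-style answer-formatting instruction, which other models may omit.
query = f'{sample["prompt"]} {sample["suffix"]}'
print(sample["id"], sample["scene"], sample["object"])
print(query)
```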
@@ -157,7 +155,6 @@ For full reproducibility and visualization, we also include the original files u

  * `Location/`
  * `Placement/`
- * `Unseen/`

  Each folder contains:

@@ -180,7 +177,7 @@ Each entry in `question.json` has the following format:
    "mask_path": "mask/40.png",
    "category": "location",
    "step": 2,
-   "scene": indoor
+   "scene": "indoor"
  }
  ```

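For the raw-file format, a short hedged sketch of reading one entry: it assumes each task folder (e.g. `Location/`) holds the `question.json` shown above together with the `mask/` files that its `mask_path` entries point to, with paths relative to the task folder:

```python
import json

import numpy as np
from PIL import Image

# Read the raw annotations for the Location task.
with open("Location/question.json") as f:
    questions = json.load(f)

entry = questions[0]
# Resolve the relative mask path, e.g. mask/40.png, against the task folder.
mask = np.array(Image.open(f"Location/{entry['mask_path']}"))
print(entry["category"], entry["step"], entry["scene"], mask.shape)
```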
@@ -190,10 +187,10 @@ Each entry in `question.json` has the following format:
  ## 🚀 How to Use RefSpatial-Bench


- <!-- This section explains different ways to load and use the RefSpatial-Bench dataset. -->
+ <!-- This section explains different ways to load and use the RefSpatial-Expand-Bench dataset. -->

  The official evaluation code is available at https://github.com/Zhoues/RoboRefer.
- The following provides a quick guide on how to load and use the RefSpatial-Bench.
+ The following provides a quick guide on how to load and use RefSpatial-Expand-Bench.


  <details>
@@ -207,13 +204,13 @@ from datasets import load_dataset

  # Load the entire dataset (all splits: location, placement, unseen)
  # This returns a DatasetDict
- dataset_dict = load_dataset("BAAI/RefSpatial-Bench")
+ dataset_dict = load_dataset("JingkunAn/RefSpatial-Expand-Bench")

  # Access a specific split, for example 'location'
  location_split_hf = dataset_dict["location"]

  # Or load only a specific split directly (returns a Dataset object)
- # location_split_direct = load_dataset("BAAI/RefSpatial-Bench", name="location")
+ # location_split_direct = load_dataset("JingkunAn/RefSpatial-Expand-Bench", name="location")

  # Access a sample from the location split
  sample = location_split_hf[0]
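
As a quick follow-up to the snippet above, one way to sanity-check what was actually loaded, using only standard `datasets` APIs (`dataset_dict` is the `DatasetDict` created above):

```python
# Print each split's name, size, and column names to confirm which
# splits this revision of the dataset actually ships.
for split_name, split in dataset_dict.items():
    print(split_name, len(split), split.column_names)
```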
@@ -291,7 +288,7 @@ else:
  <summary><strong>Evaluating RoboRefer / RoboPoint</strong></summary>


- To evaluate RoboRefer on RefSpatial-Bench:
+ To evaluate RoboRefer on RefSpatial-Expand-Bench:

  1. **Prepare Input Prompt:**

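The remaining steps are elided by the diff. As a rough aid for the answer-parsing step that follows, here is a hedged helper; it assumes the model replies with `(x, y)` tuples as the RoboRefer-style `suffix` requests (other suffixes would need a different pattern):

```python
import re

def parse_points(answer: str) -> list[tuple[float, float]]:
    """Extract all "(x, y)" tuples from a model's answer string."""
    return [(float(x), float(y))
            for x, y in re.findall(r"\(\s*([\d.]+)\s*,\s*([\d.]+)\s*\)", answer)]

# Example: parse_points("[(0.56, 0.72)]") -> [(0.56, 0.72)]
```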
@@ -346,7 +343,7 @@ To evaluate RoboRefer on RefSpatial-Bench:



- To evaluate Gemini Series on RefSpatial-Bench:
+ To evaluate the Gemini series on RefSpatial-Expand-Bench:

  1. **Prepare Input Prompt:**

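For scoring, a minimal point-in-mask check. This assumes the common success criterion for this benchmark family (a predicted pixel counts as a hit if it lands inside the ground-truth mask); the official scripts at https://github.com/Zhoues/RoboRefer are authoritative:

```python
import numpy as np
from PIL import Image

def point_hits_mask(mask_path: str, x: int, y: int) -> bool:
    """True if pixel (x, y) falls inside the binary ground-truth mask."""
    mask = np.array(Image.open(mask_path).convert("L"))
    h, w = mask.shape
    return 0 <= x < w and 0 <= y < h and mask[y, x] > 0
```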
@@ -478,7 +475,7 @@ Detailed statistics on `step` distributions and instruction lengths are provided

  ## 🏆 Performance Highlights

- Detailed accuracy results of RoboRefer-2B-SFT and RoboRefer-8B-SFT Models on RefSpatial-Bench-Expand
+ Detailed accuracy results of the RoboRefer-2B-SFT and RoboRefer-8B-SFT models on RefSpatial-Expand-Bench

  #### **Location Task**

 