feat: Add specific dataset card about YOLOTL dataset

README.md
task_categories:
- image-segmentation
language:
- en
- ko
pretty_name: BevLane
tags:
- autonomous
- robot
- selfdriving
- lanesegmentation
- bev
---

# Dataset Card for BevLane

## Dataset Details

### Dataset Description

BevLane is a dataset for lane segmentation from a Bird's-Eye-View (BEV) perspective. It consists of top-down transformed images of road scenes and their corresponding pixel-level lane annotations. The dataset is designed to support the development and evaluation of lane detection and segmentation algorithms, which are crucial components of autonomous driving systems and robotics navigation. The BEV perspective simplifies lane detection by removing perspective distortion, making it easier to model lane geometry accurately.

- **Curated by:** [Highsky7](https://huggingface.co/Highsky7)
- **Language(s) (NLP):** English, Korean
- **License:** MIT

### Dataset Sources [optional]

- **Repository:** [https://huggingface.co/datasets/Highsky7/Topview_Lane](https://huggingface.co/datasets/Highsky7/Topview_Lane)
- **Demo:** [https://github.com/Highsky7/YOLOTL](https://github.com/Highsky7/YOLOTL)

## Uses

### Direct Use

This dataset is intended for training and evaluating semantic segmentation models, specifically for lane detection in BEV space. It can be used directly for:

- Training deep learning models for lane segmentation.
- Benchmarking the performance of different lane detection algorithms.
- Research in autonomous driving, particularly perception and localization tasks.

### Out-of-Scope Use

This dataset is not intended for direct use in production-level, safety-critical systems without extensive further validation and testing. Models trained on it may not generalize to all driving scenarios, weather conditions, or geographic locations. Using the dataset for tasks other than its intended purpose (e.g., object detection or scene classification) is out of scope.

## Dataset Structure

The dataset is composed of image files and their corresponding segmentation masks. A typical data point consists of an `image` and its `mask`.

- `image`: A PNG/JPG file of the road scene transformed into a Bird's-Eye-View.
- `mask`: A single-channel PNG file where each pixel value corresponds to a class label (e.g., 0 for background, 1 for lane).

The data is likely split into `train`, `validation`, and `test` sets to facilitate standard model development workflows.
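As an illustration of the mask encoding described above, the sketch below builds a toy single-channel mask in NumPy and derives per-class pixel counts and a binary lane mask. The array values are made up for illustration and are not taken from the dataset; real masks would be loaded from the PNG files (e.g. with PIL).

```python
import numpy as np

# Toy 6x6 single-channel mask following the encoding above:
# 0 = background, 1 = lane. Values are illustrative only.
mask = np.zeros((6, 6), dtype=np.uint8)
mask[:, 2] = 1  # one vertical lane line
mask[:, 4] = 1  # a second lane line

# Per-class pixel counts and the fraction of lane pixels.
classes, counts = np.unique(mask, return_counts=True)
lane_fraction = (mask == 1).mean()

# Collapse all non-background classes into a binary lane mask.
binary = (mask > 0).astype(np.uint8)
```

The same pattern (count classes, binarize) is a quick sanity check that a downloaded mask actually uses the expected label values.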

## Dataset Creation

### Curation Rationale

The primary motivation for creating BevLane was to provide the research community with a high-quality, publicly available dataset for BEV lane segmentation. While forward-facing camera datasets are common, BEV datasets offer a distinct advantage by presenting lane information in a metrically accurate top-down view, which is highly beneficial for the path planning and control modules of self-driving vehicles.

### Source Data

#### Data Collection and Processing

The source data was likely collected from a forward-facing camera mounted on a vehicle. The video frames were then processed with an Inverse Perspective Mapping (IPM) transform to convert them from the perspective view to the BEV. This process requires the camera's calibration parameters (intrinsic and extrinsic) to be known.
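The IPM step described above is, at its core, a planar homography. The sketch below shows how such a transform can be computed from four point correspondences in plain NumPy; the pixel coordinates are hypothetical, and this is a minimal illustration of the technique, not the curator's actual pipeline (which in practice would more likely use OpenCV's `getPerspectiveTransform`/`warpPerspective`).

```python
import numpy as np

def homography_from_points(src, dst):
    """Solve for the 3x3 homography H mapping src[i] -> dst[i],
    given exactly four point correspondences (h33 fixed to 1)."""
    A, b = [], []
    for (x, y), (u, v) in zip(src, dst):
        A.append([x, y, 1, 0, 0, 0, -u * x, -u * y])
        A.append([0, 0, 0, x, y, 1, -v * x, -v * y])
        b.extend([u, v])
    h = np.linalg.solve(np.array(A, dtype=float), np.array(b, dtype=float))
    return np.append(h, 1.0).reshape(3, 3)

def warp_point(H, pt):
    """Apply homography H to a single (x, y) point."""
    x, y, w = H @ np.array([pt[0], pt[1], 1.0])
    return (x / w, y / w)

# Hypothetical correspondences: a road trapezoid in the camera image
# mapped to a rectangle in the BEV image (values are illustrative only).
src = [(300, 400), (980, 400), (1200, 700), (100, 700)]  # image plane
dst = [(0, 0), (400, 0), (400, 600), (0, 600)]           # BEV plane
H = homography_from_points(src, dst)
```

Once `H` is known from calibration, every frame can be warped with the same matrix, which is why the calibration parameters must be fixed and known in advance.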

#### Who are the source data producers?

The data was collected and processed by the dataset curator, Highsky7.

### Annotations

#### Annotation process

The lane markings in the BEV images were annotated manually or semi-automatically using Roboflow. Annotators were likely instructed to draw precise pixel-level masks for all visible lane lines, and the annotations were then validated to ensure quality and consistency.

#### Who are the annotators?

The annotations were created by Highsky7 with the assistance of Roboflow's auto-labeling tool.

#### Personal and Sensitive Information

The transformation to a BEV significantly reduces the likelihood of the dataset containing personally identifiable information (PII) such as faces or license plates, as these are typically not visible from a top-down perspective. However, users are advised to verify this for their specific use case.

## Bias, Risks, and Limitations

The dataset may contain biases inherent to the data collection process, including:

- **Geographic Bias:** The data may have been collected in a specific city or country, so it may not represent road markings, driving conventions, or road conditions from other parts of the world.
- **Environmental Bias:** The data was likely collected under specific weather and lighting conditions (e.g., clear, sunny days). Models trained on it may not perform well in adverse conditions such as rain, snow, fog, or at night.
- **Hardware Bias:** The data is specific to the sensor used for collection (an ABKO APC 850 webcam).

### Recommendations

Users of this dataset should be aware of its limitations. To build a robust system, it is recommended to:

- Perform extensive testing and validation on a diverse set of real-world data.
- Employ data augmentation techniques to improve model generalization.
- Consider fine-tuning models on data from the target domain if it differs significantly from this dataset's source domain.
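One simple augmentation in the spirit of the recommendation above is a horizontal flip applied jointly to an image and its mask, so the pixel-level labels stay aligned. The NumPy sketch below is a minimal, framework-free illustration with made-up array contents; real pipelines would use a library such as Albumentations or torchvision.

```python
import numpy as np

def hflip_pair(image, mask):
    """Horizontally flip an image and its mask together so the
    lane labels remain aligned with the flipped image."""
    return image[:, ::-1].copy(), mask[:, ::-1].copy()

# Toy grayscale image and a toy lane mask (illustrative values only).
image = np.arange(12, dtype=np.uint8).reshape(3, 4)
mask = (image % 2).astype(np.uint8)

aug_image, aug_mask = hflip_pair(image, mask)
```

The key point is that any geometric transform (flip, rotation, crop) must be applied identically to both arrays; photometric transforms (brightness, noise) apply to the image only.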

## Citation

If you use this dataset in your research, please consider citing it.

**BibTeX:**

    @misc{highsky7_2024_bevlane,
      author       = {Highsky7},
      title        = {BevLane: A Bird's-Eye-View Lane Segmentation Dataset},
      year         = {2024},
      publisher    = {Hugging Face},
      journal      = {Hugging Face repository},
      howpublished = {\url{https://huggingface.co/datasets/Highsky7/Topview_Lane}}
    }

**APA:**

Highsky7. (2024). BevLane: A Bird's-Eye-View Lane Segmentation Dataset. Hugging Face. Retrieved from https://huggingface.co/datasets/Highsky7/Topview_Lane

## Glossary

- **BEV (Bird's-Eye-View):** A top-down view of a scene, as if viewed from directly above.
- **IPM (Inverse Perspective Mapping):** A mathematical transformation that converts an image from a perspective view to a top-down (BEV) view.
- **Lane Segmentation:** The task of identifying and classifying the pixels in an image that belong to lane markings.

## Dataset Card Authors

This dataset card was created by Highsky7.

## Dataset Card Contact

For questions or feedback regarding this dataset, please contact Highsky7 on the Hugging Face platform or by email at albert31115@gmail.com.