Sensor data recorded by camera, radar, and lidar over millions of test kilometers must be enriched with billions of attributes, such as the classification of static objects and dynamic agents or object tracking information, to provide reliable ground truth data for training deep neural networks (DNNs) and for developing and testing autonomous vehicles. Annotating this data manually is a cumbersome and demanding task for experts, which makes it expensive, time-consuming, and error prone. Furthermore, manual labeling often leads to inconsistent results.
The web-based annotation service provided by UAI enables both manual and artificial intelligence (AI)-powered automated annotation. Clever use of different automation strategies, such as world coordinates, point cloud merging, or automatic box propagation, accelerates the complete process and leads to more accurate and consistent results. It allows for a high automation ratio of up to 40% for camera images and approximately 70% for 3-D sensor data. The solution covers 2-D bounding boxes, polyline annotation, and full-frame pixel-wise semantic segmentation on video streams and images, as well as the option to create cuboids in 3-D space for labeling radar and lidar data.
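To illustrate the idea behind automatic box propagation, the minimal sketch below extrapolates an annotated 2-D bounding box forward through subsequent video frames under a constant-velocity assumption. The `Box` type, the `propagate` function, and the motion model are illustrative assumptions for this sketch, not UAI's actual implementation.

```python
from dataclasses import dataclass

@dataclass
class Box:
    # 2-D bounding box: center (cx, cy), width w, height h (pixels)
    cx: float
    cy: float
    w: float
    h: float

def propagate(prev: Box, curr: Box, n_frames: int) -> list[Box]:
    """Propose boxes for the next n_frames by extrapolating the
    motion and scale change observed between the last two frames
    (a simple constant-velocity assumption)."""
    dcx, dcy = curr.cx - prev.cx, curr.cy - prev.cy
    # Scale change roughly captures an object approaching or receding.
    dw, dh = curr.w - prev.w, curr.h - prev.h
    return [
        Box(curr.cx + k * dcx, curr.cy + k * dcy,
            curr.w + k * dw, curr.h + k * dh)
        for k in range(1, n_frames + 1)
    ]

# Example: a box moving right and growing slightly (approaching object).
# The annotator only corrects the proposals instead of drawing each box.
proposals = propagate(Box(100, 50, 40, 30), Box(110, 50, 42, 31), 3)
```

In practice such proposals are refined with appearance-based tracking and reviewed by a human, which is where the reported automation ratios come from: the tool produces the bulk of the boxes and annotators verify or adjust them.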
The hotkey-optimized 3-D annotation provides fast point cloud processing and other features, such as multiviews without perspective distortion or a fused point cloud and camera view for precise labeling and easy object identification. The annotation service also includes data anonymization and an adjustable user interface. The constantly available SaaS solution is continuously updated and optimized for minimal loading time.
High-quality, high-throughput annotation tooling based on Zero-Touch Annotation™ Automation.