Poster in Datasets and Benchmarks: Dataset and Benchmark Poster Session 4
SegmentMeIfYouCan: A Benchmark for Anomaly Segmentation
Robin Chan · Krzysztof Lis · Svenja Uhlemeyer · Hermann Blum · Sina Honari · Roland Siegwart · Pascal Fua · Mathieu Salzmann · Matthias Rottmann
State-of-the-art semantic or instance segmentation deep neural networks (DNNs) are usually trained on a closed set of semantic classes. As such, they are ill-equipped to handle previously unseen objects. However, detecting and localizing such objects is crucial for safety-critical applications such as perception for automated driving, especially if they appear on the road ahead. While some methods have tackled the tasks of anomalous or out-of-distribution object segmentation, progress remains slow, in large part due to the lack of solid benchmarks; existing datasets either consist of synthetic data or suffer from label inconsistencies. In this paper, we bridge this gap by introducing the "SegmentMeIfYouCan" benchmark. Our benchmark addresses two tasks: anomalous object segmentation, which considers any previously unseen object category, and road obstacle segmentation, which focuses on any object on the road, be it known or unknown. We provide two corresponding datasets together with a test suite performing an in-depth method analysis, considering both established pixel-wise performance metrics and recent component-wise ones, which are insensitive to object sizes. We empirically evaluate multiple state-of-the-art baseline methods, including several models specifically designed for anomaly / obstacle segmentation, on our datasets and on public ones, using our test suite. The anomaly and obstacle segmentation results show that our datasets contribute to the diversity and difficulty of both data landscapes.
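To make the pixel-wise metrics mentioned in the abstract concrete, the sketch below computes two commonly used quantities, average precision (AUPRC) and the false-positive rate at 95% true-positive rate, from per-pixel anomaly scores and ground-truth masks. The function name, input layout, and use of scikit-learn are illustrative assumptions and do not reflect the benchmark's official test-suite API.

```python
import numpy as np
from sklearn.metrics import average_precision_score, roc_curve


def pixel_level_scores(anomaly_scores, gt_masks):
    """Illustrative pixel-wise evaluation of an anomaly segmentation method.

    anomaly_scores: list of HxW float arrays (higher = more anomalous).
    gt_masks:       list of HxW binary arrays (1 = anomaly / obstacle pixel).
    Returns (AUPRC, FPR at 95% TPR).
    """
    # Flatten all images into one long vector of pixel scores and labels.
    scores = np.concatenate([s.ravel() for s in anomaly_scores])
    labels = np.concatenate([m.ravel() for m in gt_masks])

    # Area under the precision-recall curve, with the anomaly class as positive.
    auprc = average_precision_score(labels, scores)

    # False-positive rate at the first threshold reaching 95% true-positive rate.
    fpr, tpr, _ = roc_curve(labels, scores)
    idx = min(np.searchsorted(tpr, 0.95), len(fpr) - 1)
    fpr95 = fpr[idx]

    return auprc, fpr95
```

Component-wise metrics, by contrast, score detections at the level of connected anomaly regions rather than individual pixels, which is what makes them insensitive to object size; they are not shown in this sketch.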