Poster
Rethinking Learnable Tree Filter for Generic Feature Transform
Lin Song · Yanwei Li · Zhengkai Jiang · Zeming Li · Xiangyu Zhang · Hongbin Sun · Jian Sun · Nanning Zheng
Poster Session 2 #731
Keywords: [ Human or Animal Learning ] [ Neuroscience and Cognitive Science ] [ Neuroscience and Cognitive Science -> Memory; Optimization -> Combinatorial Optimization; Optimization ] [ Submodular Optimization ]
The Learnable Tree Filter presents a remarkable approach to modeling structure-preserving relations for semantic segmentation. Nevertheless, its intrinsic geometric constraint forces it to focus on regions with close spatial distance, hindering effective long-range interactions. To relax the geometric constraint, we reformulate the filter as a Markov Random Field and introduce a learnable unary term. In addition, we propose a learnable spanning tree algorithm to replace the original non-differentiable one, which further improves flexibility and robustness. With these improvements, our method better captures long-range dependencies and preserves structural details with linear complexity, and it extends to several vision tasks as a more generic feature transform. Extensive experiments on object detection and instance segmentation demonstrate consistent improvements over the original version. For semantic segmentation, we achieve leading performance (82.1% mIoU) on the Cityscapes benchmark without bells and whistles. Code is available at https://github.com/StevenGrove/LearnableTreeFilterV2.
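To illustrate the general idea behind tree filtering (a spanning tree built from a guidance map, affinities derived from tree-path distances, and an optional unary term reweighting contributions), the sketch below gives a naive NumPy/SciPy version. It is not the paper's differentiable, linear-complexity implementation: the function name `naive_tree_filter` and the inputs `guide` and `unary` are illustrative assumptions, the tree here is a minimum spanning tree rather than a learnable one, and aggregation uses explicit O(N^2) pairwise tree distances, which is only practical for small feature maps.

```python
import numpy as np
from scipy.sparse import coo_matrix
from scipy.sparse.csgraph import minimum_spanning_tree, shortest_path


def naive_tree_filter(feat, guide, unary=None):
    """Naive O(N^2) tree filtering over a 4-connected image grid.

    feat  : (C, H, W) features to be transformed
    guide : (H, W) guidance map; edge weight = guidance dissimilarity
    unary : optional (H, W) non-negative confidence acting as a simple unary term
    """
    C, H, W = feat.shape
    n = H * W
    idx = np.arange(n).reshape(H, W)

    # Build 4-connected grid edges weighted by guidance dissimilarity.
    rows, cols, wts = [], [], []
    for r0, c0, r1, c1 in [
        (slice(None), slice(None, -1), slice(None), slice(1, None)),  # horizontal
        (slice(None, -1), slice(None), slice(1, None), slice(None)),  # vertical
    ]:
        a, b = idx[r0, c0].ravel(), idx[r1, c1].ravel()
        w = np.abs(guide[r0, c0] - guide[r1, c1]).ravel() + 1e-6
        rows.append(a); cols.append(b); wts.append(w)
    graph = coo_matrix(
        (np.concatenate(wts), (np.concatenate(rows), np.concatenate(cols))),
        shape=(n, n),
    )

    # Spanning tree over the guidance graph, then pairwise tree-path distances.
    tree = minimum_spanning_tree(graph)
    dist = shortest_path(tree, directed=False)

    # Affinity decays with tree distance; the unary term reweights source pixels.
    aff = np.exp(-dist)
    if unary is not None:
        aff = aff * unary.ravel()[None, :]
    aff = aff / (aff.sum(axis=1, keepdims=True) + 1e-12)

    # Normalized aggregation of features along the tree.
    out = feat.reshape(C, n) @ aff.T
    return out.reshape(C, H, W)


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    feat = rng.standard_normal((16, 8, 8)).astype(np.float32)
    guide = rng.random((8, 8)).astype(np.float32)
    unary = rng.random((8, 8)).astype(np.float32)
    print(naive_tree_filter(feat, guide, unary).shape)  # (16, 8, 8)
```

In this toy form, setting `unary` to a constant recovers plain tree-distance filtering, while spatially varying values let some pixels contribute more than others, loosely mirroring how a learnable unary term can relax the purely geometric weighting described in the abstract.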