Centering the Value of Every Modality: Towards Efficient and Resilient Modality-agnostic Semantic Segmentation
ECCV 2024

Abstract

Fusing an arbitrary number of modalities is vital for achieving robust multi-modal semantic segmentation, yet it remains underexplored to date. Recent endeavors regard the RGB modality as the center and the others as auxiliary, yielding an asymmetric architecture with two branches. However, the RGB modality may struggle in certain circumstances, e.g., at nighttime, while others, e.g., event data, have their own merits; thus, it is imperative for a fusion model to discern robust and fragile modalities and incorporate the most robust and fragile ones to learn a resilient multi-modal framework. To this end, we propose a novel method, named MAGIC, that can be flexibly paired with various backbones, ranging from compact to high-performance models. Our method comprises two key plug-and-play modules. First, we introduce a multi-modal aggregation module to efficiently process features from multi-modal batches and extract complementary scene information. On top of this, a unified arbitrary-modal selection module uses the aggregated features as a benchmark to rank the multi-modal features by their similarity scores. In this way, our method eliminates the dependence on the RGB modality and better withstands sensor failures while maintaining segmentation performance. Under the commonly considered multi-modal setting, our method achieves state-of-the-art performance while reducing model parameters by 60%. Moreover, it excels in the novel modality-agnostic setting, where it outperforms prior arts by a large margin of +19.41% mIoU.
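To make the selection idea concrete, below is a minimal PyTorch-style sketch of ranking per-modality features against an aggregated benchmark feature. This is not the official implementation: the tensor shapes, the mean-based aggregation, the cosine-similarity score, and the helper name rank_modalities are all assumptions for illustration only.

import torch
import torch.nn.functional as F

def rank_modalities(feats: torch.Tensor, top_k: int = 2):
    # feats: (M, B, C, H, W) feature maps from M modalities of the same scene.
    # Aggregate across modalities to obtain a cross-modal benchmark feature.
    agg = feats.mean(dim=0)                                   # (B, C, H, W)
    # Score each modality by cosine similarity to the aggregated feature.
    flat = feats.flatten(start_dim=2)                         # (M, B, C*H*W)
    agg_flat = agg.flatten(start_dim=1).unsqueeze(0).expand_as(flat)
    scores = F.cosine_similarity(flat, agg_flat, dim=-1)      # (M, B)
    # Rank modalities per sample and keep the top-k most similar (robust) ones.
    order = scores.argsort(dim=0, descending=True)            # (M, B)
    return order[:top_k], scores

# Toy usage with 4 modalities (e.g., RGB, Depth, Event, LiDAR) and batch size 2.
feats = torch.randn(4, 2, 8, 16, 16)
selected, scores = rank_modalities(feats, top_k=2)
print(selected.shape, scores.shape)   # torch.Size([2, 2]) torch.Size([4, 2])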

Demo Video

Here is a demo video of the proposed MAGIC.

Overall framework of our MAGIC

Overall framework of MAGIC, which incorporates the plug-and-play multi-modal aggregation and arbitrary-modal selection modules.


Results

Visualization results with arbitrary-modal inputs from {RGB, Depth, Event, LiDAR} on the DELIVER dataset.


BibTeX

@inproceedings{zheng2024MAGIC,
  title={Centering the Value of Every Modality: Towards Efficient and Resilient Modality-agnostic Semantic Segmentation},
  author={Zheng, Xu and Lyu, Yuanhuiyi and Zhou, Jiazhou and Wang, Lin},
  booktitle={European Conference on Computer Vision (ECCV)},
  year={2024}
}