Zero-Shot Video Object Segmentation With Co-Attention Siamese Networks - 2020

Research Area:  Machine Learning


We introduce a novel network, called CO-attention Siamese Network (COSNet), to address the zero-shot video object segmentation task in a holistic fashion. We exploit the inherent correlation among video frames and incorporate a global co-attention mechanism to improve upon state-of-the-art deep-learning-based solutions, which primarily focus on learning discriminative foreground representations over appearance and motion within short-term temporal segments. The co-attention layers in COSNet provide efficient and competent stages for capturing global correlations and scene context by jointly computing co-attention responses and appending them into a joint feature space. COSNet is a unified, end-to-end trainable framework in which different co-attention variants can be derived to capture diverse properties of the learned joint feature space. We train COSNet with pairs (or groups) of video frames, which naturally augments the training data and allows increased learning capacity. During segmentation, the co-attention model encodes useful information by processing multiple reference frames together, which is leveraged to better infer the frequently reappearing and salient foreground objects. Our extensive experiments over three large benchmarks demonstrate that COSNet outperforms the current alternatives by a large margin. Our algorithm implementations have been made publicly available.
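For readers unfamiliar with the co-attention operation the abstract refers to, its core idea is to pair every spatial location of one frame with every location of a reference frame through an affinity matrix, then summarise each frame's features using attention weights derived from that matrix. Below is a minimal NumPy sketch of a vanilla co-attention step under stated assumptions; the shapes, the weight matrix `W`, and the function names are illustrative, not the paper's exact implementation.

```python
import numpy as np

def softmax(x, axis):
    """Numerically stable softmax along the given axis."""
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def co_attention(feat_a, feat_b, W):
    """Vanilla co-attention between the features of two frames.

    feat_a: (C, Na) flattened feature map of frame A (Na = H*W)
    feat_b: (C, Nb) flattened feature map of frame B
    W:      (C, C)  weight matrix (learned in practice; random here)

    Returns attention summaries of shape (C, Na) for frame A
    and (C, Nb) for frame B.
    """
    # Pairwise affinity between every A-location and every B-location.
    S = feat_a.T @ W @ feat_b            # (Na, Nb)
    # Normalise: for each A location, weights over B locations, and vice versa.
    att_over_b = softmax(S, axis=1)      # rows sum to 1
    att_over_a = softmax(S, axis=0)      # columns sum to 1
    # Attend: summarise the other frame's features per location.
    Z_a = feat_b @ att_over_b.T          # (C, Na)
    Z_b = feat_a @ att_over_a            # (C, Nb)
    return Z_a, Z_b

# Toy example with random features standing in for CNN activations.
rng = np.random.default_rng(0)
F_a, F_b = rng.standard_normal((64, 36)), rng.standard_normal((64, 36))
W = rng.standard_normal((64, 64))
Z_a, Z_b = co_attention(F_a, F_b, W)
```

In COSNet the attended summaries are then combined with the original per-frame features (the "joint feature space" the abstract describes) before being passed on to the segmentation head.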


Author(s) Name:  Xiankai Lu; Wenguan Wang; Jianbing Shen; David Crandall; Jiebo Luo

Journal name:  IEEE Transactions on Pattern Analysis and Machine Intelligence (Early Access)

Conference name:  

Publisher name:  IEEE

DOI:  10.1109/TPAMI.2020.3040258

Volume Information:  Page(s): 1 - 1