Cube Padding for Weakly-Supervised Saliency Prediction in 360° Videos

Hsien-Tzu Cheng1, Chun-Hung Chao1, Jin-Dong Dong1, Hao-Kai Wen2, Tyng-Luh Liu3, Min Sun1

National Tsing Hua University1  Taiwan AI Labs2  Academia Sinica3

Abstract

Automatic saliency prediction in 360° videos is critical for viewpoint guidance applications (e.g., Facebook 360 Guide). We propose a spatial-temporal network which is (1) trained with weak supervision and (2) tailor-made for the 360° viewing sphere. Note that most existing methods are less scalable since they rely on annotated saliency maps for training. Most importantly, they convert the 360° sphere to 2D images (e.g., a single equirectangular image or multiple separate Normal Field-of-View (NFoV) images), which introduces distortion and image boundaries. In contrast, we propose a simple and effective Cube Padding (CP) technique as follows. First, we render the 360° view on the six faces of a cube using perspective projection, which introduces very little distortion. Then, we concatenate all six faces and utilize the connectivity between faces on the cube for image padding (i.e., Cube Padding) in convolution, pooling, and convolutional LSTM layers. In this way, CP introduces no image boundary while being applicable to almost all Convolutional Neural Network (CNN) structures. To evaluate our method, we propose Wild-360, a new 360° video saliency dataset containing challenging videos with saliency heatmap annotations. In experiments, our method outperforms baseline methods in both speed and quality.
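The pre-process mentioned above renders the viewing sphere onto six perspective cube faces. Below is a minimal NumPy sketch of this idea for a single (front) face with a 90° field of view and nearest-neighbour sampling; the axis conventions and the choice of which longitude counts as "front" are illustrative assumptions, not necessarily those of the released code.

```python
import numpy as np

def front_face_from_equirect(equi, w=256):
    """Sketch: render ONE cube face (front, 90 degree FoV perspective view)
    from an equirectangular image `equi` of shape (H, W, 3).
    The full pre-process renders all six faces by rotating the viewing
    direction; nearest-neighbour sampling is used here for brevity."""
    H, W = equi.shape[:2]
    # Normalised image-plane coordinates in [-1, 1] for a 90-degree FoV camera.
    a, b = np.meshgrid(np.linspace(-1, 1, w), np.linspace(-1, 1, w))
    # Ray directions: x right, y up, z forward (image rows increase downward).
    x, y, z = a, -b, np.ones_like(a)
    lon = np.arctan2(x, z)                                # longitude in [-pi, pi]
    lat = np.arcsin(y / np.sqrt(x**2 + y**2 + z**2))      # latitude in [-pi/2, pi/2]
    # Map spherical coordinates back to equirectangular pixel indices.
    u = ((lon / (2 * np.pi) + 0.5) * W).astype(int) % W
    v = ((0.5 - lat / np.pi) * H).astype(int).clip(0, H - 1)
    return equi[v, u]
```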


[Paper (arXiv)]

[Wild-360 Dataset]

[Source Code]


Our Method

Visualization of our system. Panel (a) shows our static model: (1) the pre-process that projects an equirectangular image into a cubemap image, (2) the CNN with Cube Padding (CP) that extracts a saliency feature M_s, (3) the post-process that converts M_s into an equirectangular saliency map O_s. Panel (b) shows our temporal model: (1) the convLSTM with CP that aggregates the saliency feature M_s over time into a hidden state H, (2) the post-process that converts H into an equirectangular saliency map O_t, (3) our self-supervised loss function that computes L_t given the current O_t and the previous O_t−1. Panel (c) shows the total loss to be minimized. Panel (d) shows the post-process module, consisting of max-pooling, inverse projection (P^−1), and upsampling (U). Panel (e) shows the pre-processing module with cubemap projection.
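As a rough illustration of the self-supervised idea in panel (b)(3), the sketch below penalises disagreement between the current and previous equirectangular saliency maps. This is only a simplified consistency term: the exact formulation of L_t used in the paper is given there, and the names here are hypothetical.

```python
import torch

def temporal_consistency_loss(o_t, o_prev):
    """Simplified stand-in for the self-supervised temporal loss L_t:
    penalise changes between the current saliency map O_t and the
    previous prediction O_{t-1}. The previous map is detached so it
    acts as the target, i.e., no ground-truth annotation is needed."""
    return torch.mean((o_t - o_prev.detach()) ** 2)

# Usage: o_t and o_prev are saliency maps in [0, 1], e.g. of shape (H, W).
o_prev = torch.rand(64, 128)
o_t = torch.rand(64, 128)
loss = temporal_consistency_loss(o_t, o_prev)
```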
Illustration of Cube Padding (CP). In panel (a), we apply CP to the face F, which naturally leverages information (in yellow rectangles) from faces T, L, R, and D rather than padding with zero values (i.e., zero padding). Panel (b) shows the cubemap matrix representation M ∈ R^(6×c×w×w) within a batch, which can be processed throughout the entire network, efficiently connecting the faces on the fly. Panel (c) shows how to fold the faces back into a cube.
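A minimal PyTorch-style sketch of the padding idea for a single face is given below: edge strips are copied from neighbouring faces instead of zeros. The face ordering in the usage example is an assumption, and a full implementation must rotate or flip each neighbour strip so that shared edges line up (this orientation handling depends on the cubemap convention and is omitted here).

```python
import torch

def cube_pad_face(face, top_nb, down_nb, left_nb, right_nb, p=1):
    """Sketch of Cube Padding for one face of shape (c, w, w).
    Neighbouring faces are assumed to already be oriented so that their
    shared edges align with `face`; corner pixels are left as zeros."""
    c, w, _ = face.shape
    padded = face.new_zeros(c, w + 2 * p, w + 2 * p)
    padded[:, p:-p, p:-p] = face
    padded[:, :p, p:-p] = top_nb[:, -p:, :]    # bottom strip of the top neighbour
    padded[:, -p:, p:-p] = down_nb[:, :p, :]   # top strip of the down neighbour
    padded[:, p:-p, :p] = left_nb[:, :, -p:]   # rightmost strip of the left neighbour
    padded[:, p:-p, -p:] = right_nb[:, :, :p]  # leftmost strip of the right neighbour
    return padded

# Usage: M is a cubemap feature tensor of shape (6, c, w, w); the face
# order assumed below (back, down, front, left, right, top) is illustrative.
M = torch.randn(6, 64, 32, 32)
back, down, front, left, right, top = M
front_padded = cube_pad_face(front, top, down, left, right, p=1)
```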
Feature map visualization from the VGG conv5_3 layer. When Cube Padding (CP) is used (the first row), the responses are continuous across face boundaries. However, when Zero Padding (ZP) is used (the second row), the responses near the boundaries vanish since each face is processed locally and separately. The last row shows the corresponding cubemap images containing several marine creatures across face boundaries.

Result Videos


Qualitative result

Supplementary material