Randomized Quantization: A Generic Augmentation for Data Agnostic Self-supervised Learning
- Huimin Wu,
- Chenyang Lei,
- Xiao Sun,
- Pengju Wang,
- Qifeng Chen,
- Kwang-Ting Cheng,
- Stephen Lin,
- Zhirong Wu
International Conference on Computer Vision (ICCV)
Self-supervised representation learning follows a paradigm of withholding some part of the data and tasking the network with predicting it from the remaining part. Among many techniques, data augmentation lies at the core of creating this information gap. Towards this end, masking has emerged as a generic and powerful tool where content is withheld along the sequential dimension, e.g., spatial in images, temporal in audio, and syntactic in language. In this paper, we explore the orthogonal channel dimension for generic data augmentation by exploiting precision redundancy. The data in each channel is quantized through a non-uniform quantizer: the quantization bins are randomly sampled, and the output value for each bin is sampled uniformly at random within it. From another perspective, quantization is analogous to channel-wise masking, as it removes the information within each bin but preserves the information across bins. Our approach significantly surpasses existing generic data augmentation methods, while performing on par with modality-specific augmentations. We comprehensively evaluate our approach on vision, audio, and 3D point clouds, as well as the DABS benchmark, which comprises various data modalities. The code is available at https://github.com/microsoft/random_quantize.
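To make the augmentation concrete, below is a minimal PyTorch sketch of randomized quantization as described in the abstract: per-channel bin edges are sampled uniformly at random (yielding a non-uniform quantizer), and every value falling in a bin is replaced by a value sampled uniformly within that bin. This is an illustrative sketch, not the authors' exact implementation (see the linked repository); the function name `randomized_quantization` and the parameter `num_bins` are placeholders.

```python
import torch

def randomized_quantization(x: torch.Tensor, num_bins: int = 8) -> torch.Tensor:
    """Sketch of the randomized quantization augmentation.

    x: tensor of shape (C, ...) treated as per-channel data.
    For each channel, interior bin edges are drawn uniformly between the
    channel's min and max, and each bin is assigned a single output value
    drawn uniformly within the bin (placeholder choices; the official
    implementation may differ in sampling details).
    """
    out = torch.empty_like(x)
    for c in range(x.shape[0]):
        xc = x[c]
        lo, hi = xc.min(), xc.max()
        # Randomly sampled interior edges -> a non-uniform quantizer.
        inner = torch.sort(torch.rand(num_bins - 1) * (hi - lo) + lo).values
        edges = torch.cat([lo.view(1), inner, hi.view(1)])
        # One random representative value inside each bin.
        reps = edges[:-1] + torch.rand(num_bins) * (edges[1:] - edges[:-1])
        # Map each element to its bin, then replace it with the bin's value,
        # discarding the within-bin information while keeping the
        # across-bin structure.
        idx = torch.bucketize(xc, edges[1:-1])
        out[c] = reps[idx]
    return out
```

For example, applying `randomized_quantization(img)` to an image tensor `img` of shape (C, H, W) returns an augmented view of the same shape, with each channel independently quantized.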