Published as a conference paper at ICLR 2021

SELF-SUPERVISED REPRESENTATION LEARNING WITH RELATIVE PREDICTIVE CODING

Yao-Hung Hubert Tsai1, Martin Q. Ma1, Muqiao Yang1, Han Zhao23, Louis-Philippe Morency1, Ruslan Salakhutdinov1
1Carnegie Mellon University, 2D.E. Shaw & Co., 3University of Illinois at Urbana-Champaign

This paper introduces Relative Predictive Coding (RPC), a new contrastive representation learning objective that maintains a good balance among training stability, minibatch size sensitivity, and downstream task performance. The key to the success of RPC is two-fold. First, RPC introduces relative parameters to regularize the objective for boundedness and low variance. Second, RPC contains no logarithm and exponential score functions, which are the main cause of training instability in prior contrastive objectives. We empirically verify the effectiveness of RPC on benchmark vision and speech self-supervised learning tasks. Lastly, we relate RPC with mutual information (MI) estimation, showing RPC can be used to estimate MI with low variance. (Project page: https://github.com/martinmamql/relative_predictive_coding)

1 INTRODUCTION

Unsupervised learning has drawn tremendous attention recently because it can extract rich representations without label supervision. Self-supervised learning, a subset of unsupervised learning, learns representations by allowing the data to provide supervision (Devlin et al., 2018). Among its mainstream strategies, self-supervised contrastive learning has been successful in visual object recognition (He et al., 2020; Tian et al., 2019; Chen et al., 2020c), speech recognition (Oord et al., 2018; Rivière et al., 2020), language modeling (Kong et al., 2019), graph representation learning (Velickovic et al., 2019) and reinforcement learning (Kipf et al., 2019). The idea of self-supervised contrastive learning is to learn latent representations such that related instances (e.g., patches from the same image; defined as positive pairs) will have representations within close distance, while unrelated instances (e.g., patches from two different images; defined as negative pairs) will have distant representations (Arora et al., 2019).

Prior work has formulated contrastive learning objectives as maximizing the divergence between the distribution of related and unrelated instances. In this regard, different divergence measurements often lead to different loss designs. For example, variational mutual information (MI) estimation (Poole et al., 2019) inspires Contrastive Predictive Coding (CPC) (Oord et al., 2018). Note that MI is also the KL-divergence between the distributions of related and unrelated instances (Cover & Thomas, 2012). While the choices of contrastive learning objectives are abundant (Hjelm et al., 2018; Poole et al., 2019; Ozair et al., 2019), we point out three challenges faced by existing methods. The first challenge is training stability, where an unstable training process with high variance may be problematic. For example, Hjelm et al. (2018); Tschannen et al. (2019); Tsai et al. (2020b) show that contrastive objectives with large variance cause numerical issues and yield poor downstream performance with their learned representations. The second challenge is sensitivity to minibatch size, where objectives requiring a huge minibatch size may restrict their practical usage.
For instance, SimCLRv2 (Chen et al., 2020c) utilizes CPC as its contrastive objective and reaches state-of-the-art performance on multiple self-supervised and semi-supervised benchmarks. Nonetheless, the objective is trained with a minibatch size of 8,192, and this scale of training requires enormous computational power. The third challenge is downstream task performance, which is the one that we would like to emphasize the most. For this reason, in most cases, CPC is the objective that we would adopt for contrastive representation learning, due to its favorable performance in downstream tasks (Tschannen et al., 2019; Baevski et al., 2020).

Table 1: Different contrastive learning objectives, grouped by the divergence they measure between distributions. $P_{XY}$ represents the distribution of related samples (positively-paired), and $P_XP_Y$ represents the distribution of unrelated samples (negatively-paired). $f \in \mathcal{F}$ for $\mathcal{F}$ being any class of functions $f: \mathcal{X} \times \mathcal{Y} \to \mathbb{R}$. ⋆: Compared to $J_{CPC}$ and $J_{RPC}$, we empirically find $J_{WPC}$ performs worse on complex real-world image datasets spanning CIFAR-10/-100 (Krizhevsky et al., 2009) and ImageNet (Russakovsky et al., 2015).

Objective | Good Training Stability | Lower Minibatch Size Sensitivity | Good Downstream Performance
$J_{DV}$ (Donsker & Varadhan, 1975) | No | Yes | No
$J_{NWJ}$ (Nguyen et al., 2010) | No | Yes | No
$J_{CPC}$ (Oord et al., 2018) | Yes | No | Yes
$J_{JS}$ (Nowozin et al., 2016) | Yes | Yes | No
$J_{WPC}$ (Ozair et al., 2019) | Yes | Yes | No⋆
$J_{RPC}$ (ours) | Yes | Yes | Yes

Relating to the KL-divergence between $P_{XY}$ and $P_XP_Y$: $J_{DV}$ (Donsker & Varadhan, 1975), $J_{NWJ}$ (Nguyen et al., 2010), and $J_{CPC}$ (Oord et al., 2018):
$$J_{DV}(X, Y) := \sup_{f \in \mathcal{F}} \mathbb{E}_{P_{XY}}[f(x,y)] - \log\left(\mathbb{E}_{P_XP_Y}[e^{f(x,y)}]\right)$$
$$J_{NWJ}(X, Y) := \sup_{f \in \mathcal{F}} \mathbb{E}_{P_{XY}}[f(x,y)] - \mathbb{E}_{P_XP_Y}[e^{f(x,y)-1}]$$
$$J_{CPC}(X, Y) := \sup_{f \in \mathcal{F}} \mathbb{E}_{(x,y_1) \sim P_{XY},\, \{y_j\}_{j=2}^{N} \sim P_Y}\left[\log \frac{e^{f(x,y_1)}}{\frac{1}{N}\sum_{j=1}^{N} e^{f(x,y_j)}}\right]$$

Relating to the JS-divergence between $P_{XY}$ and $P_XP_Y$: $J_{JS}$ (Nowozin et al., 2016):
$$J_{JS}(X, Y) := \sup_{f \in \mathcal{F}} \mathbb{E}_{P_{XY}}\left[-\log(1 + e^{-f(x,y)})\right] - \mathbb{E}_{P_XP_Y}\left[\log(1 + e^{f(x,y)})\right]$$

Relating to the Wasserstein-divergence between $P_{XY}$ and $P_XP_Y$: $J_{WPC}$ (Ozair et al., 2019), with $\mathcal{F}_L$ denoting the space of 1-Lipschitz functions:
$$J_{WPC}(X, Y) := \sup_{f \in \mathcal{F}_L} \mathbb{E}_{(x,y_1) \sim P_{XY},\, \{y_j\}_{j=2}^{N} \sim P_Y}\left[\log \frac{e^{f(x,y_1)}}{\frac{1}{N}\sum_{j=1}^{N} e^{f(x,y_j)}}\right]$$

Relating to the χ²-divergence between $P_{XY}$ and $P_XP_Y$: $J_{RPC}$ (ours):
$$J_{RPC}(X, Y) := \sup_{f \in \mathcal{F}} \mathbb{E}_{P_{XY}}[f(x,y)] - \alpha\,\mathbb{E}_{P_XP_Y}[f(x,y)] - \frac{\beta}{2}\,\mathbb{E}_{P_{XY}}[f^2(x,y)] - \frac{\gamma}{2}\,\mathbb{E}_{P_XP_Y}[f^2(x,y)]$$

This paper presents a new contrastive representation learning objective: the Relative Predictive Coding (RPC), which attempts to achieve a good balance among these three challenges: training stability, sensitivity to minibatch size, and downstream task performance. At the core of RPC are the relative parameters, which are used to regularize RPC for boundedness and low variance. From a modeling perspective, the relative parameters act as an ℓ2 regularization for RPC. From a statistical perspective, the relative parameters prevent RPC from growing to extreme values and upper bound its variance. In addition to the relative parameters, RPC contains no logarithm and exponential, which are the main cause of training instability for prior contrastive learning objectives (Song & Ermon, 2019). To empirically verify the effectiveness of RPC, we consider benchmark self-supervised representation learning tasks, including visual object classification on CIFAR-10/-100 (Krizhevsky et al., 2009), STL-10 (Coates et al., 2011), and ImageNet (Russakovsky et al., 2015), and speech recognition on LibriSpeech (Panayotov et al., 2015).
Comparing RPC to prior contrastive learning objectives, we observe a lower variance during training, a lower minibatch size sensitivity, and consistent performance improvement. Lastly, we also relate RPC with MI estimation, empirically showing that RPC can estimate MI with low variance.

2 PROPOSED METHOD

This paper presents a new contrastive representation learning objective - the Relative Predictive Coding (RPC). At a high level, RPC 1) introduces the relative parameters to regularize the objective for boundedness and low variance; and 2) achieves a good balance among the three challenges in contrastive representation learning objectives: training stability, sensitivity to minibatch size, and downstream task performance. We begin by describing prior contrastive objectives along with their limitations on the three challenges in Section 2.1. Then, we detail our presented objective and its modeling benefits in Section 2.2. An overview of different contrastive learning objectives is provided in Table 1. We defer all proofs to the Appendix.

Notation We use an uppercase letter to denote a random variable (e.g., $X$), a lowercase letter to denote the outcome of this random variable (e.g., $x$), and a calligraphic letter to denote the sample space of this random variable (e.g., $\mathcal{X}$). Next, if the samples $(x, y)$ are related (or positively-paired), we refer to $(x, y) \sim P_{XY}$ with $P_{XY}$ being the joint distribution over $X \times Y$. If the samples $(x, y)$ are unrelated (negatively-paired), we refer to $(x, y) \sim P_XP_Y$ with $P_XP_Y$ being the product of marginal distributions over $X \times Y$. Last, we define $f \in \mathcal{F}$ for $\mathcal{F}$ being any class of functions $f: \mathcal{X} \times \mathcal{Y} \to \mathbb{R}$.

2.1 PRELIMINARY

Contrastive representation learning encourages the contrastiveness between the positive and the negative pairs of the representations from the related data $X$ and $Y$. Specifically, when sampling a pair of representations $(x, y)$ from their joint distribution ($(x, y) \sim P_{XY}$), this pair is defined as a positive pair; when sampling from the product of marginals ($(x, y) \sim P_XP_Y$), this pair is defined as a negative pair. Then, Tsai et al. (2020b) formalize this idea such that the contrastiveness of the representations can be measured by the divergence between $P_{XY}$ and $P_XP_Y$, where higher divergence suggests better contrastiveness. To better understand prior contrastive learning objectives, we categorize them in terms of different divergence measurements between $P_{XY}$ and $P_XP_Y$, with their detailed objectives presented in Table 1. We instantiate the discussion using Contrastive Predictive Coding (Oord et al., 2018, $J_{CPC}$), which is a lower bound of $D_{KL}(P_{XY} \| P_XP_Y)$ with $D_{KL}$ referring to the KL-divergence:

$$J_{CPC}(X, Y) := \sup_{f \in \mathcal{F}} \mathbb{E}_{(x,y_1) \sim P_{XY},\, \{y_j\}_{j=2}^{N} \sim P_Y}\left[\log \frac{e^{f(x,y_1)}}{\frac{1}{N}\sum_{j=1}^{N} e^{f(x,y_j)}}\right]. \quad (1)$$

Then, Oord et al. (2018) propose to maximize $J_{CPC}(X, Y)$, so that the learned representations $X$ and $Y$ have high contrastiveness. We note that $J_{CPC}$ has been commonly used in many recent self-supervised representation learning frameworks (He et al., 2020; Chen et al., 2020b), which constrain the function to be $f(x, y) = \mathrm{cosine}(x, y)$ with $\mathrm{cosine}(\cdot, \cdot)$ denoting cosine similarity. Under this function design, maximizing $J_{CPC}$ leads the representations of related pairs to be close and representations of unrelated pairs to be distant.
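To make equation 1 concrete, the following is a minimal PyTorch-style sketch of a minibatch $J_{CPC}$ estimate under the cosine-similarity design above. The batch layout (row $i$ of `x` pairs with row $i$ of `y`) and the temperature `tau` are illustrative assumptions on our part, not the reference implementation of any cited framework.

```python
import torch
import torch.nn.functional as F

def j_cpc(x, y, tau=0.1):
    """Minibatch estimate of J_CPC (equation 1); a sketch.

    x, y: (N, d) batches of representations where (x[i], y[i]) is a
    positive pair and (x[i], y[j]), j != i, serve as negative pairs.
    Uses the score design f(x, y) = cosine(x, y) / tau.
    """
    x = F.normalize(x, dim=-1)
    y = F.normalize(y, dim=-1)
    scores = x @ y.t() / tau  # scores[i, j] = f(x_i, y_j)
    # Row-wise log-softmax gives log(e^{f(x_i, y_i)} / sum_j e^{f(x_i, y_j)});
    # it differs from equation 1 only by the constant log N from the 1/N factor.
    return F.log_softmax(scores, dim=1).diag().mean()
```

Maximizing this quantity is equivalent to minimizing the cross-entropy of identifying the positive sample among the $N$ candidates, which is the multi-class classification view discussed below.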
The category of modeling $D_{KL}(P_{XY} \| P_XP_Y)$ also includes the Donsker-Varadhan objective ($J_{DV}$ (Donsker & Varadhan, 1975; Belghazi et al., 2018)) and the Nguyen-Wainwright-Jordan objective ($J_{NWJ}$ (Nguyen et al., 2010; Belghazi et al., 2018)), where Belghazi et al. (2018); Tsai et al. (2020b) show that $J_{DV}(X, Y) = J_{NWJ}(X, Y) = D_{KL}(P_{XY} \| P_XP_Y)$. The other divergence measurements considered in prior work are $D_{JS}(P_{XY} \| P_XP_Y)$ (with $D_{JS}$ referring to the Jensen-Shannon divergence) and $D_{Wass}(P_{XY} \| P_XP_Y)$ (with $D_{Wass}$ referring to the Wasserstein divergence). The instance of modeling $D_{JS}(P_{XY} \| P_XP_Y)$ is the Jensen-Shannon f-GAN objective $J_{JS}$ (Nowozin et al., 2016; Hjelm et al., 2018), where $J_{JS}(X, Y) = 2\,(D_{JS}(P_{XY} \| P_XP_Y) - \log 2)$. ($J_{JS}(X, Y)$ attains its supremum when $f^*(x, y) = \log(p(x, y)/p(x)p(y))$ (Tsai et al., 2020b); plugging $f^*(x, y)$ into $J_{JS}(X, Y)$, we can conclude $J_{JS}(X, Y) = 2(D_{JS}(P_{XY} \| P_XP_Y) - \log 2)$.) The instance of modeling $D_{Wass}(P_{XY} \| P_XP_Y)$ is the Wasserstein Predictive Coding $J_{WPC}$ (Ozair et al., 2019), where $J_{WPC}(X, Y)$ modifies the $J_{CPC}(X, Y)$ objective (equation 1) by restricting the function class from $\mathcal{F}$ to $\mathcal{F}_L$. $\mathcal{F}_L$ denotes any class of 1-Lipschitz continuous functions from $(\mathcal{X} \times \mathcal{Y})$ to $\mathbb{R}$, and thus $\mathcal{F}_L \subset \mathcal{F}$. Ozair et al. (2019) show that $J_{WPC}(X, Y)$ is a lower bound of both $D_{KL}(P_{XY} \| P_XP_Y)$ and $D_{Wass}(P_{XY} \| P_XP_Y)$. See Table 1 for all the equations.

To conclude, contrastive representation learning objectives are unsupervised representation learning methods that maximize the distribution divergence between $P_{XY}$ and $P_XP_Y$. The learned representations exhibit high contrastiveness, and recent work (Arora et al., 2019; Tsai et al., 2020a) theoretically shows that highly-contrastive representations can improve performance on downstream tasks.

After discussing prior contrastive representation learning objectives, we point out three challenges in their practical deployment: training stability, sensitivity to minibatch training size, and downstream task performance. In particular, the three challenges can hardly be handled well at the same time; we highlight the conclusions in Table 1.

Training Stability: Training stability relates closely to the variance of the objectives, where Song & Ermon (2019) show that $J_{DV}$ and $J_{NWJ}$ exhibit inevitable high variance due to their inclusion of the exponential function. As pointed out by Tsai et al. (2020b), $J_{CPC}$, $J_{WPC}$, and $J_{JS}$ have better training stability because $J_{CPC}$ and $J_{WPC}$ can be realized as a multi-class classification task and $J_{JS}$ can be realized as a binary classification task. The cross-entropy loss adopted in $J_{CPC}$, $J_{WPC}$, and $J_{JS}$ is highly optimized and stable in existing optimization packages (Abadi et al., 2016; Paszke et al., 2019).

Sensitivity to minibatch training size: Among all prior contrastive representation learning methods, $J_{CPC}$ is known to be sensitive to the minibatch training size (Ozair et al., 2019). Taking a closer look at equation 1, $J_{CPC}$ deploys an instance selection such that $y_1$ should be selected from $\{y_1, y_2, \cdots, y_N\}$, with $(x, y_1) \sim P_{XY}$, $(x, y_{j>1}) \sim P_XP_Y$, and $N$ being the minibatch size. Previous work (Poole et al., 2019; Song & Ermon, 2019; Chen et al., 2020b; Caron et al., 2020) showed that a large $N$ results in a more challenging instance selection and forces $J_{CPC}$ to achieve better contrastiveness of $y_1$ (the related instance for $x$) against $\{y_j\}_{j=2}^{N}$ (the unrelated instances for $x$). $J_{DV}$, $J_{NWJ}$, and $J_{JS}$ do not consider
the instance selection, and $J_{WPC}$ reduces the minibatch training size sensitivity by enforcing the 1-Lipschitz constraint.

Downstream Task Performance: The downstream task performance is what we care about most among the three challenges. $J_{CPC}$ has been the most popular objective as it manifests superior performance over the other alternatives (Tschannen et al., 2019; Tsai et al., 2020b;a). We note that although $J_{WPC}$ shows better performance on the Omniglot (Lake et al., 2015) and CelebA (Liu et al., 2015) datasets, we empirically find it does not generalize well to CIFAR-10/-100 (Krizhevsky et al., 2009) and ImageNet (Russakovsky et al., 2015).

2.2 RELATIVE PREDICTIVE CODING

In this paper, we present Relative Predictive Coding (RPC), which achieves a good balance among the three challenges mentioned above:

$$J_{RPC}(X, Y) := \sup_{f \in \mathcal{F}} \mathbb{E}_{P_{XY}}[f(x,y)] - \alpha\,\mathbb{E}_{P_XP_Y}[f(x,y)] - \frac{\beta}{2}\,\mathbb{E}_{P_{XY}}[f^2(x,y)] - \frac{\gamma}{2}\,\mathbb{E}_{P_XP_Y}[f^2(x,y)], \quad (2)$$

where $\alpha > 0$, $\beta > 0$, $\gamma > 0$ are hyper-parameters, which we define as relative parameters. Intuitively, $J_{RPC}$ contains no logarithm or exponential, potentially preventing unstable training due to numerical issues. Now, we discuss the roles of $\alpha$, $\beta$, $\gamma$. At first glance, $\alpha$ acts to discourage the scores of $P_{XY}$ and $P_XP_Y$ from being close, and $\beta$ and $\gamma$ act as ℓ2 regularization coefficients that stop $f$ from becoming large. For a deeper analysis, the relative parameters act to regularize our objective for boundedness and low variance. To show this claim, we first present the following lemma:

Lemma 1 (Optimal Solution for $J_{RPC}$) Let $r(x, y) = \frac{p(x,y)}{p(x)p(y)}$ be the density ratio. $J_{RPC}$ has the optimal solution
$$f^*(x, y) = \frac{r(x,y) - \alpha}{\beta\, r(x,y) + \gamma} := r_{\alpha,\beta,\gamma}(x, y), \quad \text{with } -\frac{\alpha}{\gamma} \le r_{\alpha,\beta,\gamma}(x, y) \le \frac{1}{\beta}.$$

Lemma 1 suggests that $J_{RPC}$ attains its supremum at the ratio $r_{\alpha,\beta,\gamma}(x, y)$ indexed by the relative parameters $\alpha$, $\beta$, $\gamma$ (i.e., we term $r_{\alpha,\beta,\gamma}(x, y)$ the relative density ratio). We note that $r_{\alpha,\beta,\gamma}(x, y)$ is an increasing function w.r.t. $r(x, y)$ and is nicely bounded even when $r(x, y)$ is large. We will now show that the bounded $r_{\alpha,\beta,\gamma}$ implies that the empirical estimate of $J_{RPC}$ is bounded and has low variance. In particular, let $\{x_i, y_i\}_{i=1}^{n}$ be $n$ samples drawn uniformly at random from $P_{XY}$ and $\{x'_j, y'_j\}_{j=1}^{m}$ be $m$ samples drawn uniformly at random from $P_XP_Y$. Then, we use neural networks to empirically estimate $J_{RPC}$ as $\hat{J}^{m,n}_{RPC}$:

Definition 1 ($\hat{J}^{m,n}_{RPC}$, empirical estimation of $J_{RPC}$) We parametrize $f$ via a family of neural networks $\mathcal{F}_\Theta := \{f_\theta : \theta \in \Theta \subseteq \mathbb{R}^d\}$ where $d \in \mathbb{N}$ and $\Theta$ is compact. Then,
$$\hat{J}^{m,n}_{RPC} = \sup_{f_\theta \in \mathcal{F}_\Theta} \frac{1}{n}\sum_{i=1}^{n} f_\theta(x_i, y_i) - \frac{\alpha}{m}\sum_{j=1}^{m} f_\theta(x'_j, y'_j) - \frac{\beta}{2n}\sum_{i=1}^{n} f^2_\theta(x_i, y_i) - \frac{\gamma}{2m}\sum_{j=1}^{m} f^2_\theta(x'_j, y'_j).$$

Proposition 1 (Boundedness of $\hat{J}^{m,n}_{RPC}$, informal) $0 \le J_{RPC} \le \frac{1}{2\beta} + \frac{\alpha^2}{2\gamma}$. Then, with probability at least $1 - \delta$, $|J_{RPC} - \hat{J}^{m,n}_{RPC}| = O\big(\sqrt{\frac{d + \log(1/\delta)}{n'}}\big)$, where $n' = \min\{n, m\}$.

Proposition 2 (Variance of $\hat{J}^{m,n}_{RPC}$, informal) There exist universal constants $c_1$ and $c_2$ that depend only on $\alpha$, $\beta$, $\gamma$, such that $\mathrm{Var}[\hat{J}^{m,n}_{RPC}] = O\big(\frac{c_1}{n} + \frac{c_2}{m}\big)$.

From the two propositions, when $m$ and $n$ are large, i.e., the sample sizes are large, $\hat{J}^{m,n}_{RPC}$ is bounded, and its variance vanishes to 0. First, the boundedness of $\hat{J}^{m,n}_{RPC}$ suggests that $\hat{J}^{m,n}_{RPC}$ will not grow to extremely large or small values. Prior contrastive learning objectives with good training stability (e.g., $J_{CPC}$/$J_{JS}$/$J_{WPC}$) also have bounded objective values; for instance, the empirical estimate of $J_{CPC}$ is less than $\log N$ (equation 1) (Poole et al., 2019). Nevertheless, $J_{CPC}$ often performs best only when the minibatch size is large, and the empirical performances of $J_{JS}$ and $J_{WPC}$ are not as competitive as that of $J_{CPC}$. Second, the upper bound on the variance implies that the training of $\hat{J}^{m,n}_{RPC}$ can be stable, and in practice we observe a much smaller value than the stated upper bound. On the contrary, Song & Ermon (2019) show that the empirical estimates of $J_{DV}$ and $J_{NWJ}$ exhibit inevitable variances that grow exponentially with the true $D_{KL}(P_{XY} \| P_XP_Y)$.

Lastly, similar to prior contrastive learning objectives that relate to distribution divergence measurements, we associate $J_{RPC}$ with the chi-square divergence $D_{\chi^2}(P_{XY} \| P_XP_Y) = \mathbb{E}_{P_XP_Y}[r^2(x,y)] - 1$ (Nielsen & Nock, 2013). The derivations are provided in the Appendix. By having $P' = \frac{\beta}{\beta+\gamma} P_{XY} + \frac{\gamma}{\beta+\gamma} P_XP_Y$ as the mixture distribution of $P_{XY}$ and $P_XP_Y$, we can rewrite $J_{RPC}(X, Y) = \frac{\beta+\gamma}{2}\,\mathbb{E}_{P'}[r^2_{\alpha,\beta,\gamma}(x,y)]$. Hence, $J_{RPC}$ can be regarded as a generalization of $D_{\chi^2}$ with the relative parameters $\alpha$, $\beta$, $\gamma$, where $D_{\chi^2}$ can be recovered from $J_{RPC}$ by specializing $\alpha = 0$, $\beta = 0$ and $\gamma = 1$ (i.e., $D_{\chi^2} = 2\,J_{RPC}|_{\alpha=\beta=0,\gamma=1} - 1$). Note that $J_{RPC}$ may not be a formal divergence measure with arbitrary $\alpha$, $\beta$, $\gamma$.
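As a concrete illustration of Definition 1, here is a minimal sketch of the empirical objective for one minibatch of scores. The default relative-parameter values follow the CIFAR-10 setting reported in Section 3.4; the score network producing the inputs is assumed, not prescribed.

```python
import torch

def j_rpc(pos_scores, neg_scores, alpha=1.0, beta=0.005, gamma=1.0):
    """Empirical RPC objective of Definition 1; a sketch.

    pos_scores: f_theta evaluated on n positively-paired samples ~ P_XY.
    neg_scores: f_theta evaluated on m negatively-paired samples ~ P_X P_Y.
    Maximizing this value drives f_theta toward the bounded relative
    density ratio r_{alpha,beta,gamma} of Lemma 1.
    """
    return (pos_scores.mean()
            - alpha * neg_scores.mean()
            - 0.5 * beta * pos_scores.pow(2).mean()
            - 0.5 * gamma * neg_scores.pow(2).mean())

# A training step minimizes the negated objective:
# loss = -j_rpc(f_theta(x, y), f_theta(x_neg, y_neg)); loss.backward()
```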
3 EXPERIMENTS

We provide an overview of the experimental section. First, we conduct benchmark self-supervised representation learning tasks spanning visual object classification and speech recognition. This set of experiments is designed to examine the three challenges of contrastive representation learning objectives: downstream task performance (Section 3.1), training stability (Section 3.2), and minibatch size sensitivity (Section 3.3). We also provide an ablation study on the choices of the relative parameters in $J_{RPC}$ (Section 3.4). In these experiments we find that $J_{RPC}$ achieves a lower variance during training, a lower minibatch size sensitivity, and consistent performance improvement. Second, we relate $J_{RPC}$ with mutual information (MI) estimation (Section 3.5). The connection is that MI is an average statistic of the density ratio, and we have shown that the optimal solution of $J_{RPC}$ is the relative density ratio (see Lemma 1). Thus we can estimate MI using the density ratio transformed from the optimal solution of $J_{RPC}$. In both sets of experiments, we fairly compare $J_{RPC}$ with other contrastive learning objectives. In particular, across different objectives, we fix the network, learning rate, optimizer, and batch size (we use the default configurations suggested by the original implementations from Chen et al. (2020c), Rivière et al. (2020) and Tsai et al. (2020b)). The only difference is the objective itself. In what follows, we perform the first set of experiments. We defer experimental details to the Appendix.

Datasets. For visual object classification, we consider CIFAR-10/-100 (Krizhevsky et al., 2009), STL-10 (Coates et al., 2011), and ImageNet (Russakovsky et al., 2015). CIFAR-10/-100 and ImageNet contain labeled images only, while STL-10 contains labeled and unlabeled images. For speech recognition, we consider the LibriSpeech-100h (Panayotov et al., 2015) dataset, which contains 100 hours of 16kHz English speech from 251 speakers with 41 types of phonemes.

Training and Evaluation Details. For the vision experiments, we follow the setup from SimCLRv2 (Chen et al., 2020c), which considers visual object recognition as its downstream task.
For the speech experiments, we follow the setup from prior work (Oord et al., 2018; Rivière et al., 2020), which considers phoneme classification and speaker identification as the downstream tasks. We briefly discuss the training and evaluation details in three modules: 1) related and unrelated data construction, 2) pre-training, and 3) fine-tuning and evaluation. For more details, please refer to the Appendix or the original implementations.

Related and Unrelated Data Construction. In the vision experiments, we construct related images by applying different augmentations to the same image (see the sketch below). Hence, when $(x, y) \sim P_{XY}$, $x$ and $y$ are the same image with different augmentations. The unrelated images are two randomly selected samples. In the speech experiments, we define the current latent feature (the feature at time $t$) and future samples (samples at time $> t$) as related data. In other words, the feature in the latent space should contain information that can be used to infer future time steps. A latent feature and randomly selected samples are considered unrelated data.

Pre-training. The pre-training stage refers to self-supervised training with a contrastive learning objective. Our training objective is defined in Definition 1, where we use neural networks to parametrize the function using the constructed related and unrelated data. Convolutional neural networks are used for the vision experiments. Transformers (Vaswani et al., 2017) and LSTMs (Hochreiter & Schmidhuber, 1997) are used for the speech experiments.

Fine-tuning and Evaluation. After the pre-training stage, we fix the parameters in the pre-trained networks and add a small fine-tuning network on top of them. Then, we fine-tune this small network with the downstream labels in the data's training split. For the fine-tuning network, both the vision and speech experiments consider multi-layer perceptrons. Last, we evaluate the fine-tuned representations on the data's test split. We would like to point out that we do not normalize the hidden representations encoded by the pre-training neural network for the loss calculation. This hidden normalization technique is widely applied (Tian et al., 2019; Chen et al., 2020b;c) to stabilize training and increase performance for prior objectives, but we find it unnecessary in $J_{RPC}$.
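To make the related/unrelated data construction above concrete, here is a minimal sketch for the vision case, assuming an `augment` function that applies the stochastic augmentation pipeline and an `encoder` network; both names are placeholders of ours rather than the exact recipe used in the experiments.

```python
import torch

def build_pairs(images, augment, encoder):
    """Construct positive and negative pairs from one minibatch; a sketch.

    images: (N, C, H, W) tensor. Two augmented views of the same image
    form a positive pair (a draw from P_XY); pairing views of different
    images approximates draws from the product of marginals P_X P_Y.
    """
    z1 = encoder(augment(images))  # (N, d) representations of view 1
    z2 = encoder(augment(images))  # (N, d) representations of view 2
    positives = (z1, z2)                    # aligned rows: positive pairs
    shuffle = torch.randperm(z2.shape[0])   # mismatched rows
    negatives = (z1, z2[shuffle])           # approximate draws from P_X P_Y
    return positives, negatives
```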
Table 2: Top-1 accuracy (%) for visual object recognition results. $J_{DV}$ and $J_{NWJ}$ are not reported on ImageNet due to numerical instability. The ResNet depth, width and Selective Kernel (SK) configuration for each setting are provided in the ResNet depth+width+SK column. A slight drop in $J_{CPC}$ performance compared to Chen et al. (2020c) is because we only train for 100 epochs rather than 800, since running 800 epochs uninterruptedly on cloud TPUs is very expensive; we also did not employ a memory buffer (He et al., 2020) to store negative samples. We also provide the results from fully supervised models as a comparison (Chen et al., 2020b;c). Fully supervised training performs worse on STL-10 because it does not employ the unlabeled samples in the dataset (Löwe et al., 2019).

Dataset | ResNet depth+width+SK | $J_{DV}$ | $J_{NWJ}$ | $J_{JS}$ | $J_{WPC}$ | $J_{CPC}$ | $J_{RPC}$ | Supervised
CIFAR-10 | 18 + 1x + no SK | 91.10 | 90.54 | 83.55 | 80.02 | 91.12 | 91.46 | 93.12
CIFAR-10 | 50 + 1x + no SK | 92.23 | 92.67 | 87.34 | 85.93 | 93.42 | 93.57 | 95.70
CIFAR-100 | 18 + 1x + no SK | 77.10 | 77.27 | 74.02 | 72.16 | 77.36 | 77.98 | 79.11
CIFAR-100 | 50 + 1x + no SK | 79.02 | 78.52 | 75.31 | 73.23 | 79.31 | 79.89 | 81.20
STL-10 | 50 + 1x + no SK | 82.25 | 81.17 | 79.07 | 76.50 | 83.40 | 84.10 | 71.40
ImageNet | 50 + 1x + SK | - | - | 66.21 | 62.10 | 73.48 | 74.43 | 78.50
ImageNet | 152 + 2x + SK | - | - | 71.12 | 69.51 | 77.80 | 78.40 | 80.40

Table 3: Accuracy (%) for LibriSpeech-100h phoneme and speaker classification results. We also provide the results from a fully supervised model as a comparison (Oord et al., 2018).

Task Name | $J_{CPC}$ | $J_{DV}$ | $J_{NWJ}$ | $J_{RPC}$ | Supervised
Phoneme classification | 64.6 | 61.27 | 62.09 | 69.39 | 74.6
Speaker classification | 97.4 | 95.36 | 95.89 | 97.68 | 98.5

3.1 DOWNSTREAM TASK PERFORMANCES ON VISION AND SPEECH

For the downstream task performance in the vision domain, we test the proposed $J_{RPC}$ and other contrastive learning objectives on CIFAR-10/-100 (Krizhevsky et al., 2009), STL-10 (Coates et al., 2011), and ImageNet ILSVRC-2012 (Russakovsky et al., 2015). Here we report the best performances $J_{RPC}$ can reach on each dataset (we include experimental details in A.7). Table 2 shows that the proposed $J_{RPC}$ outperforms other objectives on all datasets. Using $J_{RPC}$ on the largest network (ResNet with a depth of 152, channel width of 2x, and selective kernels), the performance jumps from 77.80% with $J_{CPC}$ to 78.40% with $J_{RPC}$. Regarding speech representation learning, the downstream performances for phoneme and speaker classification are shown in Table 3 (we defer experimental details to Appendix A.9). Compared to $J_{CPC}$, $J_{RPC}$ improves the phoneme classification result by 4.8 percentage points and the speaker classification result by 0.3 percentage points, which is closer to the fully supervised model. Overall, the proposed $J_{RPC}$ performs better than the other unsupervised learning objectives on both the phoneme and speaker classification tasks.

3.2 TRAINING STABILITY

We provide empirical training stability comparisons of $J_{DV}$, $J_{NWJ}$, $J_{CPC}$ and $J_{RPC}$ by plotting the values of the objectives as the training step increases. We apply the four objectives to the SimCLRv2 framework and train on the CIFAR-10 dataset. All training setups are exactly the same except for the objectives. In our experiments, $J_{DV}$ and $J_{NWJ}$ soon explode to NaN and disrupt training (shown as early stopping in Figure 1a; extremely large values are not plotted due to scale constraints). On the other hand, $J_{RPC}$ and $J_{CPC}$ have low variance, and both enjoy stable training. As a result, downstream performance using the representations learned from the unstable $J_{DV}$ and $J_{NWJ}$ suffers, while representations learned by $J_{RPC}$ and $J_{CPC}$ work much better.

Figure 1: (a) Empirical values of $J_{DV}$, $J_{NWJ}$, $J_{CPC}$ and $J_{RPC}$ for visual object recognition on CIFAR-10. $J_{DV}$ and $J_{NWJ}$ soon explode to NaN values and stop the training (shown as early stopping in the figure), while $J_{CPC}$ and $J_{RPC}$ are more stable. (b, c) Performance comparison of $J_{CPC}$ and $J_{RPC}$ on (b) CIFAR-10 and (c) LibriSpeech-100h with different minibatch sizes, showing that the performance of $J_{RPC}$ is less sensitive to minibatch size changes than that of $J_{CPC}$.
3.3 MINIBATCH SIZE SENSITIVITY

We then analyze the effect of minibatch size on $J_{RPC}$ and $J_{CPC}$, since $J_{CPC}$ is known to be sensitive to minibatch size (Poole et al., 2019). We train SimCLRv2 (Chen et al., 2020c) on CIFAR-10 and the model from Rivière et al. (2020) on LibriSpeech-100h using $J_{RPC}$ and $J_{CPC}$ with different minibatch sizes. The settings of the relative parameters are the same as in Section 3.2. From Figures 1b and 1c, we observe that both $J_{RPC}$ and $J_{CPC}$ achieve their optimal performance at a large minibatch size. However, when the minibatch size decreases, the performance of $J_{CPC}$ shows higher sensitivity and suffers more when the number of minibatch samples is small. The result suggests that the proposed method may be less sensitive to changes in minibatch size than $J_{CPC}$ under the same training settings.

3.4 EFFECT OF RELATIVE PARAMETERS

We study the effect of different combinations of relative parameters in $J_{RPC}$ by comparing downstream performances on visual object recognition. We train SimCLRv2 on CIFAR-10 with different combinations of $\alpha$, $\beta$ and $\gamma$ in $J_{RPC}$ and fix all other experimental settings. We choose $\alpha \in \{0, 0.001, 1.0\}$, $\beta \in \{0, 0.001, 1.0\}$, $\gamma \in \{0, 0.001, 1.0\}$, and we report the best performance under each combination of $\alpha$, $\beta$, and $\gamma$. From Figure 2, we first observe that $\alpha > 0$ gives better downstream performance than $\alpha = 0$ when $\beta$ and $\gamma$ are fixed. This observation is as expected, since $\alpha > 0$ encourages the representations of related and unrelated samples to be pushed apart. Then, we find that a small but nonzero $\beta$ ($\beta = 0.001$) and a large $\gamma$ ($\gamma = 1.0$) give the best performance compared to other combinations. Since $\beta$ and $\gamma$ serve as the coefficients of ℓ2 regularization, the results imply that the regularization is a strong and sensitive factor influencing performance. The results here are not as competitive as those in Table 2 because the CIFAR-10 result reported in Table 2 uses a set of relative parameters ($\alpha = 1.0$, $\beta = 0.005$, $\gamma = 1.0$) different from the combinations considered in this subsection. Also, we use quite different ranges of $\gamma$ on ImageNet (see A.7 for details). In conclusion, we find empirically that a nonzero $\alpha$, a small $\beta$ and a large $\gamma$ lead to the optimal representation for the downstream task on CIFAR-10.

3.5 RELATION TO MUTUAL INFORMATION ESTIMATION

The presented approach also closely relates to mutual information estimation. For random variables $X$ and $Y$ with joint distribution $P_{XY}$ and product of marginals $P_XP_Y$, the mutual information is defined as $I(X; Y) = D_{KL}(P_{XY} \| P_XP_Y)$. Lemma 1 states that, given the optimal solution $f^*(x, y)$ of $J_{RPC}$, we can recover the density ratio $r(x, y) := p(x, y)/p(x)p(y)$ as
$$r(x, y) = \frac{\gamma\, f^*(x, y) + \alpha}{1 - \beta\, f^*(x, y)}.$$
We can empirically estimate $\hat{r}(x, y)$ from the estimated $\hat{f}(x, y)$ via this transformation, and use $\hat{r}(x, y)$ to estimate mutual information (Tsai et al., 2020b). Specifically, $I(X; Y) \approx \frac{1}{n}\sum_{i=1}^{n} \log \hat{r}(x_i, y_i)$ with $(x_i, y_i) \sim P^{n}_{X,Y}$, where $P^{n}_{X,Y}$ is the uniformly sampled empirical distribution of $P_{X,Y}$.

Figure 2: Heatmaps of downstream task performance on CIFAR-10, using different $\alpha$, $\beta$ and $\gamma$ in $J_{RPC}$. We conclude that a nonzero $\alpha$, a small $\beta$ ($\beta = 0.001$) and a large $\gamma$ ($\gamma = 1.0$) are crucial for better performance.

Figure 3: Mutual information estimation performed on a 20-dimensional correlated Gaussian distribution, with the correlation increasing every 4K steps. $J_{RPC}$ exhibits smaller variance than SMILE and DoE, and smaller bias than $J_{CPC}$.
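A minimal sketch of the MI estimation step described in Section 3.5, assuming `pos_scores` holds $\hat{f}$ evaluated on positively-paired samples; the small constant `eps` guarding the division is our addition.

```python
import torch

def mi_from_rpc_scores(pos_scores, alpha, beta, gamma, eps=1e-8):
    """Estimate I(X; Y) from a trained RPC score network; a sketch.

    Inverts Lemma 1, f* = (r - alpha) / (beta r + gamma), to recover
    the density ratio r = (gamma f* + alpha) / (1 - beta f*), then
    averages log r over n samples drawn from P_XY (Section 3.5).
    """
    r = (gamma * pos_scores + alpha) / (1.0 - beta * pos_scores).clamp_min(eps)
    return torch.log(r.clamp_min(eps)).mean()
```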
We follow prior work (Poole et al., 2019; Song & Ermon, 2019; Tsai et al., 2020b) for the experiments. We consider $X$ and $Y$ as two 20-dimensional Gaussians with correlation $\rho$, and our goal is to estimate the mutual information $I(X; Y)$. Then, we perform a cubic transformation on $y$ so that $y \mapsto y^3$. The first task is referred to as the Gaussian task and the second as the Cubic task, where both have the ground truth $I(X; Y) = -10\,\log(1 - \rho^2)$. The models are trained for 20,000 steps with $I(X; Y)$ starting at 2 and increased by 2 every 4,000 steps. Our method is compared with the baseline methods $J_{CPC}$ (Oord et al., 2018), $J_{NWJ}$ (Nguyen et al., 2010), $J_{JS}$ (Nowozin et al., 2016), SMILE (Song & Ermon, 2019) and Difference of Entropies (DoE) (McAllester & Stratos, 2020). All approaches use the same network design, learning rate, optimizer and minibatch size for a fair comparison.

First, we observe that $J_{CPC}$ (Oord et al., 2018) has the smallest variance, while it exhibits a large bias (the mutual information estimated by $J_{CPC}$ is upper bounded by log(batch size)). Second, $J_{NWJ}$ (Nguyen et al., 2010) and $J_{JS}$ (Poole et al., 2019) have large variances, especially in the Cubic task. Song & Ermon (2019) pointed out the limitations of $J_{CPC}$, $J_{NWJ}$, and $J_{JS}$, and developed the SMILE method, which clips the value of the estimated density function to reduce the variance of the estimators. DoE (McAllester & Stratos, 2020) is neither a lower bound nor an upper bound of mutual information, but can achieve accurate estimates when the underlying mutual information is large. $J_{RPC}$ exhibits comparable bias and lower variance compared to the SMILE method, and is more stable than the DoE method. We would like to highlight our method's low-variance property: we neither clip the values of the estimated density ratio nor impose an upper bound on our estimated mutual information.

4 RELATED WORK

As a subset of unsupervised representation learning, self-supervised representation learning (SSL) adopts self-defined signals as supervision and uses the learned representation for downstream tasks, such as object detection and image captioning (Liu et al., 2020). We categorize SSL work into two groups: those whose supervision signal is a hidden property of the input, and those whose signal is a corresponding view of the input. For the first group, for example, the Jigsaw puzzle task (Noroozi & Favaro, 2016) shuffles image patches and defines the SSL task as predicting the shuffled positions of the image patches. Other instances are Predicting Rotations (Gidaris et al., 2018) and Shuffle & Learn (Misra et al., 2016). For the second group, the SSL task aims at modeling the co-occurrence of multiple views of data, via contrastive or predictive learning objectives (Tsai et al., 2020a). Predictive objectives encourage reconstruction from one view of the data to the other, such as predicting the lower part of an image from its upper part (Image GPT by Chen et al. (2020a)). Comparing the contrastive with the predictive learning approaches, Tsai et al. (2020a) point out that the former requires less computational resources for good performance but suffers more from over-fitting. Theoretical analysis (Arora et al., 2019; Tsai et al., 2020a; Tosh et al., 2020) suggests that contrastively learned representations can lead to good downstream performance.
Beyond the theory, Tian et al. (2020) show that what matters more for performance are 1) the choice of the contrastive learning objective, and 2) the creation of the positive and negative data pairs in the contrastive objective. Recent work (Khosla et al., 2020) extends the usage of contrastive learning from the self-supervised setting to the supervised setting. The supervised setting defines the positive pairs as data from the same class in the contrastive objective, while the self-supervised setting defines the positive pairs as data with different augmentations.

Our work also closely relates to skewed divergence measurements between distributions (Lee, 1999; 2001; Nielsen, 2010; Yamada et al., 2013). Recall that the relative parameters play a crucial role in regularizing our objective for boundedness and low variance. This idea is similar to skewed divergence measurement: when calculating the divergence between distributions $P$ and $Q$, instead of considering $D(P \| Q)$, these approaches consider $D(P \| \alpha P + (1 - \alpha)Q)$, with $D$ representing the divergence and $0 < \alpha < 1$. A natural example is that the Jensen-Shannon divergence is a symmetric skewed KL-divergence: $D_{JS}(P \| Q) = 0.5\, D_{KL}(P \| 0.5P + 0.5Q) + 0.5\, D_{KL}(Q \| 0.5P + 0.5Q)$. Compared to its non-skewed counterpart, skewed divergence has been shown to give a more robust estimate of its value (Lee, 1999; 2001; Yamada et al., 2013). Different from these works, which focus on estimating the values of distribution divergences, we focus on learning self-supervised representations.

5 CONCLUSION

In this work, we present RPC, the Relative Predictive Coding, which achieves a good balance among the three challenges of modeling a contrastive learning objective: training stability, sensitivity to minibatch size, and downstream task performance. We believe this work brings an appealing option for training self-supervised models and inspires future work on designing objectives that balance the aforementioned three challenges. In the future, we are interested in applying RPC to other application domains and developing more principled approaches for better representation learning.

ACKNOWLEDGEMENT

This work was supported in part by NSF IIS1763562, NSF Awards #1750439 and #1722822, the National Institutes of Health, IARPA D17PC00340, ONR Grant N000141812861, and a Facebook PhD Fellowship. We would also like to acknowledge NVIDIA's GPU support and Cloud TPU support from Google's TensorFlow Research Cloud (TFRC).

REFERENCES

Martín Abadi, Paul Barham, Jianmin Chen, Zhifeng Chen, Andy Davis, Jeffrey Dean, Matthieu Devin, Sanjay Ghemawat, Geoffrey Irving, Michael Isard, et al. TensorFlow: A system for large-scale machine learning. In 12th USENIX Symposium on Operating Systems Design and Implementation (OSDI 16), pp. 265–283, 2016.

Martin Anthony and Peter L Bartlett. Neural network learning: Theoretical foundations. Cambridge University Press, 2009.

Sanjeev Arora, Hrishikesh Khandeparkar, Mikhail Khodak, Orestis Plevrakis, and Nikunj Saunshi. A theoretical analysis of contrastive unsupervised representation learning. arXiv preprint arXiv:1902.09229, 2019.

Alexei Baevski, Henry Zhou, Abdelrahman Mohamed, and Michael Auli. wav2vec 2.0: A framework for self-supervised learning of speech representations. arXiv preprint arXiv:2006.11477, 2020.

Peter L Bartlett. The sample complexity of pattern classification with neural networks: the size of the weights is more important than the size of the network. IEEE Transactions on Information Theory, 44(2):525–536, 1998.
Mohamed Ishmael Belghazi, Aristide Baratin, Sai Rajeswar, Sherjil Ozair, Yoshua Bengio, Aaron Courville, and R Devon Hjelm. MINE: Mutual information neural estimation. arXiv preprint arXiv:1801.04062, 2018.

Mathilde Caron, Ishan Misra, Julien Mairal, Priya Goyal, Piotr Bojanowski, and Armand Joulin. Unsupervised learning of visual features by contrasting cluster assignments. arXiv preprint arXiv:2006.09882, 2020.

Mark Chen, Alec Radford, Rewon Child, Jeff Wu, Heewoo Jun, Prafulla Dhariwal, David Luan, and Ilya Sutskever. Generative pretraining from pixels. In Proceedings of the 37th International Conference on Machine Learning, 2020a.

Ting Chen, Simon Kornblith, Mohammad Norouzi, and Geoffrey Hinton. A simple framework for contrastive learning of visual representations. arXiv preprint arXiv:2002.05709, 2020b.

Ting Chen, Simon Kornblith, Kevin Swersky, Mohammad Norouzi, and Geoffrey Hinton. Big self-supervised models are strong semi-supervised learners. arXiv preprint arXiv:2006.10029, 2020c.

Ching-Yao Chuang, Joshua Robinson, Lin Yen-Chen, Antonio Torralba, and Stefanie Jegelka. Debiased contrastive learning. arXiv preprint arXiv:2007.00224, 2020.

Adam Coates, Andrew Ng, and Honglak Lee. An analysis of single-layer networks in unsupervised feature learning. In Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, pp. 215–223, 2011.

Thomas M Cover and Joy A Thomas. Elements of Information Theory. John Wiley & Sons, 2012.

Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. BERT: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805, 2018.

Monroe D Donsker and SR Srinivasa Varadhan. Asymptotic evaluation of certain Markov process expectations for large time, I. Communications on Pure and Applied Mathematics, 28(1):1–47, 1975.

Spyros Gidaris, Praveer Singh, and Nikos Komodakis. Unsupervised representation learning by predicting image rotations. arXiv preprint arXiv:1803.07728, 2018.

Ishaan Gulrajani, Faruk Ahmed, Martin Arjovsky, Vincent Dumoulin, and Aaron C Courville. Improved training of Wasserstein GANs. In Advances in Neural Information Processing Systems, pp. 5767–5777, 2017.

Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778, 2016.

Kaiming He, Haoqi Fan, Yuxin Wu, Saining Xie, and Ross Girshick. Momentum contrast for unsupervised visual representation learning. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 9729–9738, 2020.

R Devon Hjelm, Alex Fedorov, Samuel Lavoie-Marchildon, Karan Grewal, Phil Bachman, Adam Trischler, and Yoshua Bengio. Learning deep representations by mutual information estimation and maximization. arXiv preprint arXiv:1808.06670, 2018.

Sepp Hochreiter and Jürgen Schmidhuber. Long short-term memory. Neural Computation, 9(8):1735–1780, 1997.

K Hornik, M Stinchcombe, and H White. Multilayer feedforward networks are universal approximators. Neural Networks, 2(5):359–366, 1989.

Sergey Ioffe and Christian Szegedy. Batch normalization: Accelerating deep network training by reducing internal covariate shift. arXiv preprint arXiv:1502.03167, 2015.
Prannay Khosla, Piotr Teterwak, Chen Wang, Aaron Sarna, Yonglong Tian, Phillip Isola, Aaron Maschinot, Ce Liu, and Dilip Krishnan. Supervised contrastive learning. arXiv preprint arXiv:2004.11362, 2020.

Thomas Kipf, Elise van der Pol, and Max Welling. Contrastive learning of structured world models. arXiv preprint arXiv:1911.12247, 2019.

Lingpeng Kong, Cyprien de Masson d'Autume, Wang Ling, Lei Yu, Zihang Dai, and Dani Yogatama. A mutual information maximization perspective of language representation learning. arXiv preprint arXiv:1910.08350, 2019.

Alex Krizhevsky, Geoffrey Hinton, et al. Learning multiple layers of features from tiny images. 2009.

Brenden M Lake, Ruslan Salakhutdinov, and Joshua B Tenenbaum. Human-level concept learning through probabilistic program induction. Science, 350(6266):1332–1338, 2015.

Lillian Lee. Measures of distributional similarity. In Proceedings of the 37th Annual Meeting of the Association for Computational Linguistics, pp. 25–32, College Park, Maryland, USA, June 1999. Association for Computational Linguistics. doi: 10.3115/1034678.1034693. URL https://www.aclweb.org/anthology/P99-1004.

Lillian Lee. On the effectiveness of the skew divergence for statistical language analysis. In AISTATS. Citeseer, 2001.

Xiang Li, Wenhai Wang, Xiaolin Hu, and Jian Yang. Selective kernel networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 510–519, 2019.

Xiao Liu, Fanjin Zhang, Zhenyu Hou, Zhaoyu Wang, Li Mian, Jing Zhang, and Jie Tang. Self-supervised learning: Generative or contrastive. arXiv e-prints, pp. arXiv–2006, 2020.

Ziwei Liu, Ping Luo, Xiaogang Wang, and Xiaoou Tang. Deep learning face attributes in the wild. In Proceedings of the IEEE International Conference on Computer Vision, pp. 3730–3738, 2015.

Sindy Löwe, Peter O'Connor, and Bastiaan Veeling. Putting an end to end-to-end: Gradient-isolated learning of representations. In Advances in Neural Information Processing Systems, pp. 3039–3051, 2019.

David McAllester and Karl Stratos. Formal limitations on the measurement of mutual information. In International Conference on Artificial Intelligence and Statistics, pp. 875–884, 2020.

Ishan Misra, C Lawrence Zitnick, and Martial Hebert. Shuffle and learn: Unsupervised learning using temporal order verification. In European Conference on Computer Vision, pp. 527–544. Springer, 2016.

XuanLong Nguyen, Martin J Wainwright, and Michael I Jordan. Estimating divergence functionals and the likelihood ratio by convex risk minimization. IEEE Transactions on Information Theory, 56(11):5847–5861, 2010.

Frank Nielsen. A family of statistical symmetric divergences based on Jensen's inequality. arXiv preprint arXiv:1009.4004, 2010.

Frank Nielsen and Richard Nock. On the chi square and higher-order chi distances for approximating f-divergences. IEEE Signal Processing Letters, 21(1):10–13, 2013.

Mehdi Noroozi and Paolo Favaro. Unsupervised learning of visual representations by solving jigsaw puzzles. In European Conference on Computer Vision, pp. 69–84. Springer, 2016.

Sebastian Nowozin, Botond Cseke, and Ryota Tomioka. f-GAN: Training generative neural samplers using variational divergence minimization. In Advances in Neural Information Processing Systems, pp. 271–279, 2016.

Aaron van den Oord, Yazhe Li, and Oriol Vinyals. Representation learning with contrastive predictive coding. arXiv preprint arXiv:1807.03748, 2018.
Sherjil Ozair, Corey Lynch, Yoshua Bengio, Aaron van den Oord, Sergey Levine, and Pierre Sermanet. Wasserstein dependency measure for representation learning. In Advances in Neural Information Processing Systems, pp. 15604–15614, 2019.

Vassil Panayotov, Guoguo Chen, Daniel Povey, and Sanjeev Khudanpur. Librispeech: An ASR corpus based on public domain audio books. In 2015 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pp. 5206–5210. IEEE, 2015.

Adam Paszke, Sam Gross, Francisco Massa, Adam Lerer, James Bradbury, Gregory Chanan, Trevor Killeen, Zeming Lin, Natalia Gimelshein, Luca Antiga, et al. PyTorch: An imperative style, high-performance deep learning library. In Advances in Neural Information Processing Systems, pp. 8024–8035, 2019.

Ben Poole, Sherjil Ozair, Aaron van den Oord, Alexander A Alemi, and George Tucker. On variational bounds of mutual information. arXiv preprint arXiv:1905.06922, 2019.

Morgane Rivière, Armand Joulin, Pierre-Emmanuel Mazaré, and Emmanuel Dupoux. Unsupervised pretraining transfers well across languages. In ICASSP 2020 - 2020 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pp. 7414–7418. IEEE, 2020.

Olga Russakovsky, Jia Deng, Hao Su, Jonathan Krause, Sanjeev Satheesh, Sean Ma, Zhiheng Huang, Andrej Karpathy, Aditya Khosla, Michael Bernstein, et al. ImageNet large scale visual recognition challenge. International Journal of Computer Vision, 115(3):211–252, 2015.

Jiaming Song and Stefano Ermon. Understanding the limitations of variational mutual information estimators. arXiv preprint arXiv:1910.06222, 2019.

Yonglong Tian, Dilip Krishnan, and Phillip Isola. Contrastive multiview coding. arXiv preprint arXiv:1906.05849, 2019.

Yonglong Tian, Chen Sun, Ben Poole, Dilip Krishnan, Cordelia Schmid, and Phillip Isola. What makes for good views for contrastive learning. arXiv preprint arXiv:2005.10243, 2020.

Christopher Tosh, Akshay Krishnamurthy, and Daniel Hsu. Contrastive learning, multi-view redundancy, and linear models. arXiv preprint arXiv:2008.10150, 2020.

Yao-Hung Hubert Tsai, Yue Wu, Ruslan Salakhutdinov, and Louis-Philippe Morency. Demystifying self-supervised learning: An information-theoretical framework. arXiv preprint arXiv:2006.05576, 2020a.

Yao-Hung Hubert Tsai, Han Zhao, Makoto Yamada, Louis-Philippe Morency, and Ruslan Salakhutdinov. Neural methods for point-wise dependency estimation. arXiv preprint arXiv:2006.05553, 2020b.

Michael Tschannen, Josip Djolonga, Paul K Rubenstein, Sylvain Gelly, and Mario Lucic. On mutual information maximization for representation learning. arXiv preprint arXiv:1907.13625, 2019.

Aad W Van der Vaart. Asymptotic Statistics, volume 3. Cambridge University Press, 2000.

Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. Attention is all you need. In Advances in Neural Information Processing Systems, pp. 5998–6008, 2017.

Petar Veličković, William Fedus, William L Hamilton, Pietro Liò, Yoshua Bengio, and R Devon Hjelm. Deep graph infomax. In ICLR (Poster), 2019.

Makoto Yamada, Taiji Suzuki, Takafumi Kanamori, Hirotaka Hachiya, and Masashi Sugiyama. Relative density-ratio estimation for robust distribution comparison. Neural Computation, 25(5):1324–1370, 2013.

Yang You, Igor Gitman, and Boris Ginsburg. Large batch training of convolutional networks. arXiv preprint arXiv:1708.03888, 2017.
A.1 PROOF OF LEMMA 1 IN THE MAIN TEXT

Lemma 2 (Optimal Solution for $J_{RPC}$, restating Lemma 1 in the main text) Let
$$J_{RPC}(X, Y) := \sup_{f \in \mathcal{F}} \mathbb{E}_{P_{XY}}[f(x,y)] - \alpha\,\mathbb{E}_{P_XP_Y}[f(x,y)] - \frac{\beta}{2}\,\mathbb{E}_{P_{XY}}[f^2(x,y)] - \frac{\gamma}{2}\,\mathbb{E}_{P_XP_Y}[f^2(x,y)]$$
and $r(x, y) = \frac{p(x,y)}{p(x)p(y)}$ be the density ratio. $J_{RPC}$ has the optimal solution
$$f^*(x, y) = \frac{r(x,y) - \alpha}{\beta\, r(x,y) + \gamma} := r_{\alpha,\beta,\gamma}(x, y), \quad \text{with } -\frac{\alpha}{\gamma} \le r_{\alpha,\beta,\gamma}(x, y) \le \frac{1}{\beta}.$$

Proof: The second-order functional derivative of the objective is $-\beta\, dP_{XY} - \gamma\, dP_XP_Y$, which is always negative. The negative second-order functional derivative implies the objective has a supremum. Then, take the first-order functional derivative of $J_{RPC}$ and set it to zero:
$$dP_{XY} - \alpha\, dP_XP_Y - \beta f(x,y)\, dP_{XY} - \gamma f(x,y)\, dP_XP_Y = 0.$$
We then get
$$f^*(x, y) = \frac{dP_{XY} - \alpha\, dP_XP_Y}{\beta\, dP_{XY} + \gamma\, dP_XP_Y} = \frac{p(x,y) - \alpha\, p(x)p(y)}{\beta\, p(x,y) + \gamma\, p(x)p(y)} = \frac{r(x,y) - \alpha}{\beta\, r(x,y) + \gamma}.$$
Since $0 \le r(x,y) \le \infty$, we have $-\frac{\alpha}{\gamma} \le \frac{r(x,y) - \alpha}{\beta\, r(x,y) + \gamma} \le \frac{1}{\beta}$. Hence $f^*(x, y) := r_{\alpha,\beta,\gamma}(x, y)$ with $-\frac{\alpha}{\gamma} \le r_{\alpha,\beta,\gamma}(x, y) \le \frac{1}{\beta}$. ∎

A.2 RELATION BETWEEN $J_{RPC}$ AND $D_{\chi^2}$

In this subsection, we aim to show the following: 1) $D_{\chi^2}(P_{XY} \| P_XP_Y) = \mathbb{E}_{P_XP_Y}[r^2(x,y)] - 1$; and 2) $J_{RPC}(X, Y) = \frac{\beta+\gamma}{2}\,\mathbb{E}_{P'}[r^2_{\alpha,\beta,\gamma}(x,y)]$ by having $P' = \frac{\beta}{\beta+\gamma} P_{XY} + \frac{\gamma}{\beta+\gamma} P_XP_Y$ as the mixture distribution of $P_{XY}$ and $P_XP_Y$.

Lemma 3 $D_{\chi^2}(P_{XY} \| P_XP_Y) = \mathbb{E}_{P_XP_Y}[r^2(x,y)] - 1$

Proof: By definition (Nielsen & Nock, 2013),
$$D_{\chi^2}(P_{XY} \| P_XP_Y) = \int \frac{(dP_{XY})^2}{dP_XP_Y} - 1 = \int \left(\frac{p(x,y)}{p(x)p(y)}\right)^2 dP_XP_Y - 1 = \int r^2(x,y)\, dP_XP_Y - 1 = \mathbb{E}_{P_XP_Y}[r^2(x,y)] - 1. \; ∎$$

Lemma 4 Defining $P' = \frac{\beta}{\beta+\gamma} P_{XY} + \frac{\gamma}{\beta+\gamma} P_XP_Y$ as a mixture distribution of $P_{XY}$ and $P_XP_Y$, $J_{RPC}(X, Y) = \frac{\beta+\gamma}{2}\,\mathbb{E}_{P'}[r^2_{\alpha,\beta,\gamma}(x,y)]$.

Proof: Plug in the optimal solution $f^*(x, y) = \frac{dP_{XY} - \alpha\, dP_XP_Y}{\beta\, dP_{XY} + \gamma\, dP_XP_Y}$ (see Lemma 2) into $J_{RPC}$:
$$J_{RPC} = \mathbb{E}_{P_{XY}}[f^*(x,y)] - \alpha\,\mathbb{E}_{P_XP_Y}[f^*(x,y)] - \frac{\beta}{2}\,\mathbb{E}_{P_{XY}}\big[f^{*2}(x,y)\big] - \frac{\gamma}{2}\,\mathbb{E}_{P_XP_Y}\big[f^{*2}(x,y)\big]$$
$$= \int f^*(x,y)\,\big(dP_{XY} - \alpha\, dP_XP_Y\big) - \frac{1}{2} f^{*2}(x,y)\,\big(\beta\, dP_{XY} + \gamma\, dP_XP_Y\big).$$
Using $dP_{XY} - \alpha\, dP_XP_Y = f^*(x,y)\,\big(\beta\, dP_{XY} + \gamma\, dP_XP_Y\big)$,
$$J_{RPC} = \int f^{*2}(x,y)\,\big(\beta\, dP_{XY} + \gamma\, dP_XP_Y\big) - \frac{1}{2}\int f^{*2}(x,y)\,\big(\beta\, dP_{XY} + \gamma\, dP_XP_Y\big)$$
$$= \frac{1}{2}\int f^{*2}(x,y)\,\big(\beta\, dP_{XY} + \gamma\, dP_XP_Y\big) = \frac{\beta+\gamma}{2}\int f^{*2}(x,y)\,\left(\frac{\beta}{\beta+\gamma}\, dP_{XY} + \frac{\gamma}{\beta+\gamma}\, dP_XP_Y\right).$$
Since we define $r_{\alpha,\beta,\gamma} = \frac{dP_{XY} - \alpha\, dP_XP_Y}{\beta\, dP_{XY} + \gamma\, dP_XP_Y}$ and $P' = \frac{\beta}{\beta+\gamma} P_{XY} + \frac{\gamma}{\beta+\gamma} P_XP_Y$,
$$J_{RPC} = \frac{\beta+\gamma}{2}\,\mathbb{E}_{P'}\big[r^2_{\alpha,\beta,\gamma}(x,y)\big]. \; ∎$$
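As a worked sanity check of the special-case claim in Section 2.2 (our addition, following directly from Lemmas 3 and 4):

```latex
% Recovering the chi-square divergence as a special case of J_RPC.
% With \alpha = \beta = 0 and \gamma = 1, the mixture P' reduces to
% P_X P_Y and r_{0,0,1}(x,y) = r(x,y), so Lemma 4 gives
\[
J_{\mathrm{RPC}}\big|_{\alpha=\beta=0,\,\gamma=1}
  = \tfrac{1}{2}\,\mathbb{E}_{P_X P_Y}\!\left[r^2(x,y)\right]
  = \tfrac{1}{2}\left(D_{\chi^2}(P_{XY}\,\|\,P_X P_Y) + 1\right),
\]
% where the second equality applies Lemma 3; rearranging yields
% D_{\chi^2} = 2 J_{\mathrm{RPC}}|_{\alpha=\beta=0,\gamma=1} - 1.
```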
A.3 PROOF OF PROPOSITION 1 IN THE MAIN TEXT

The proof contains two parts: showing $0 \le J_{RPC} \le \frac{1}{2\beta} + \frac{\alpha^2}{2\gamma}$ (see Section A.3.1) and showing that $\hat{J}^{m,n}_{RPC}$ is a consistent estimator for $J_{RPC}$ (see Section A.3.2).

A.3.1 BOUNDEDNESS OF $J_{RPC}$

Lemma 5 (Boundedness of $J_{RPC}$) $0 \le J_{RPC} \le \frac{1}{2\beta} + \frac{\alpha^2}{2\gamma}$.

Proof: Lemma 4 suggests $J_{RPC}(X, Y) = \frac{\beta+\gamma}{2}\,\mathbb{E}_{P'}[r^2_{\alpha,\beta,\gamma}(x,y)]$ with $P' = \frac{\beta}{\beta+\gamma} P_{XY} + \frac{\gamma}{\beta+\gamma} P_XP_Y$ the mixture distribution of $P_{XY}$ and $P_XP_Y$. Hence, it is obvious that $J_{RPC}(X, Y) \ge 0$. We leverage the intermediate results in the proof of Lemma 4:
$$J_{RPC}(X, Y) = \frac{1}{2}\int \frac{\big(dP_{XY} - \alpha\, dP_XP_Y\big)^2}{\beta\, dP_{XY} + \gamma\, dP_XP_Y} = \frac{1}{2}\int dP_{XY}\,\frac{dP_{XY} - \alpha\, dP_XP_Y}{\beta\, dP_{XY} + \gamma\, dP_XP_Y} - \frac{\alpha}{2}\int dP_XP_Y\,\frac{dP_{XY} - \alpha\, dP_XP_Y}{\beta\, dP_{XY} + \gamma\, dP_XP_Y}$$
$$= \frac{1}{2}\,\mathbb{E}_{P_{XY}}[r_{\alpha,\beta,\gamma}(x,y)] - \frac{\alpha}{2}\,\mathbb{E}_{P_XP_Y}[r_{\alpha,\beta,\gamma}(x,y)].$$
Since $r_{\alpha,\beta,\gamma} \le \frac{1}{\beta}$ and $r_{\alpha,\beta,\gamma} \ge -\frac{\alpha}{\gamma}$, we conclude $J_{RPC}(X, Y) \le \frac{1}{2\beta} + \frac{\alpha^2}{2\gamma}$. ∎

A.3.2 CONSISTENCY

We first recall the definition of the estimation of $J_{RPC}$:

Definition 2 ($\hat{J}^{m,n}_{RPC}$, empirical estimation of $J_{RPC}$, restating Definition 1 in the main text) We parametrize $f$ via a family of neural networks $\mathcal{F}_\Theta := \{f_\theta : \theta \in \Theta \subseteq \mathbb{R}^d\}$ where $d \in \mathbb{N}$ and $\Theta$ is compact. Let $\{x_i, y_i\}_{i=1}^{n}$ be $n$ samples drawn uniformly at random from $P_{XY}$ and $\{x'_j, y'_j\}_{j=1}^{m}$ be $m$ samples drawn uniformly at random from $P_XP_Y$. Then,
$$\hat{J}^{m,n}_{RPC} = \sup_{f_\theta \in \mathcal{F}_\Theta} \frac{1}{n}\sum_{i=1}^{n} f_\theta(x_i, y_i) - \frac{\alpha}{m}\sum_{j=1}^{m} f_\theta(x'_j, y'_j) - \frac{\beta}{2n}\sum_{i=1}^{n} f^2_\theta(x_i, y_i) - \frac{\gamma}{2m}\sum_{j=1}^{m} f^2_\theta(x'_j, y'_j).$$

Our goal is to show that $\hat{J}^{m,n}_{RPC}$ is a consistent estimator for $J_{RPC}$. We begin with the following definitions:
$$\hat{J}^{m,n}_{RPC,\theta} := \frac{1}{n}\sum_{i=1}^{n} f_\theta(x_i, y_i) - \frac{\alpha}{m}\sum_{j=1}^{m} f_\theta(x'_j, y'_j) - \frac{\beta}{2n}\sum_{i=1}^{n} f^2_\theta(x_i, y_i) - \frac{\gamma}{2m}\sum_{j=1}^{m} f^2_\theta(x'_j, y'_j) \quad (3)$$
$$\mathbb{E}\big[\hat{J}_{RPC,\theta}\big] := \mathbb{E}_{P_{XY}}[f_\theta(x,y)] - \alpha\,\mathbb{E}_{P_XP_Y}[f_\theta(x,y)] - \frac{\beta}{2}\,\mathbb{E}_{P_{XY}}[f^2_\theta(x,y)] - \frac{\gamma}{2}\,\mathbb{E}_{P_XP_Y}[f^2_\theta(x,y)]. \quad (4)$$

Then, we follow two steps. The first part is about estimation: we show that, with high probability, $\hat{J}^{m,n}_{RPC,\theta}$ is close to $\mathbb{E}[\hat{J}_{RPC,\theta}]$, for any given $\theta$. The second part is about approximation: we apply the universal approximation lemma of neural networks (Hornik et al., 1989) to show that there exists a network $\theta$ such that $\mathbb{E}[\hat{J}_{RPC,\theta}]$ is close to $J_{RPC}$.

Part I - Estimation: With high probability, $\hat{J}^{m,n}_{RPC,\theta}$ is close to $\mathbb{E}[\hat{J}_{RPC,\theta}]$, for any given $\theta$.

Throughout the analysis of the uniform convergence, we need assumptions on the boundedness and smoothness of the function $f_\theta$. Since we showed that the optimal function $f^*$ is bounded in $J_{RPC}$, we can use the same bounded values for $f_\theta$ without losing too much precision. The smoothness of the function suggests that the output of the network should change only slightly when the parameters are slightly perturbed. Specifically, the two assumptions are as follows:

Assumption 1 (boundedness of $f_\theta$) There exist universal constants such that for all $f_\theta \in \mathcal{F}_\Theta$, $C_L \le f_\theta \le C_U$. For notational simplicity, we let $M = C_U - C_L$ be the range of $f_\theta$ and $U = \max\{|C_U|, |C_L|\}$ be the maximal absolute value of $f_\theta$. In the paper, we can choose to constrain $C_L = -\frac{\alpha}{\gamma}$ and $C_U = \frac{1}{\beta}$, since the optimal function $f^*$ satisfies $-\frac{\alpha}{\gamma} \le f^* \le \frac{1}{\beta}$.

Assumption 2 (smoothness of $f_\theta$) There exists a constant $\rho > 0$ such that for all $(x, y) \in (\mathcal{X} \times \mathcal{Y})$ and $\theta_1, \theta_2 \in \Theta$, $|f_{\theta_1}(x, y) - f_{\theta_2}(x, y)| \le \rho\,\|\theta_1 - \theta_2\|$.

Now, we can bound the rate of uniform convergence of a function class in terms of its covering number (Bartlett, 1998):

Lemma 6 (Estimation) Let $\epsilon > 0$ and $N(\Theta, \epsilon)$ be the covering number of $\Theta$ with radius $\epsilon$. Then,
$$\Pr\left(\sup_{f_\theta \in \mathcal{F}_\Theta}\Big|\hat{J}^{m,n}_{RPC,\theta} - \mathbb{E}\big[\hat{J}_{RPC,\theta}\big]\Big| \ge \epsilon\right) \le 2\,N\!\left(\Theta, \frac{\epsilon}{4\rho\,\big(1 + \alpha + 2(\beta+\gamma)U\big)}\right)\left(e^{-\frac{n\epsilon^2}{32M^2}} + e^{-\frac{m\epsilon^2}{32M^2\alpha^2}} + e^{-\frac{n\epsilon^2}{32U^2\beta^2}} + e^{-\frac{m\epsilon^2}{32U^2\gamma^2}}\right).$$

Proof: For notational simplicity, we define the operators
$$P(f) = \mathbb{E}_{P_{XY}}[f(x,y)] \quad \text{and} \quad P_n(f) = \frac{1}{n}\sum_{i=1}^{n} f(x_i, y_i),$$
$$Q(f) = \mathbb{E}_{P_XP_Y}[f(x,y)] \quad \text{and} \quad Q_m(f) = \frac{1}{m}\sum_{j=1}^{m} f(x'_j, y'_j).$$
Hence, by the triangle inequality,
$$\Big|\hat{J}^{m,n}_{RPC,\theta} - \mathbb{E}[\hat{J}_{RPC,\theta}]\Big| \le |P_n(f_\theta) - P(f_\theta)| + \alpha\,|Q_m(f_\theta) - Q(f_\theta)| + \beta\,\big|P_n(f^2_\theta) - P(f^2_\theta)\big| + \gamma\,\big|Q_m(f^2_\theta) - Q(f^2_\theta)\big|.$$
Let $\epsilon' := \frac{\epsilon}{4\rho(1 + \alpha + 2(\beta+\gamma)U)}$ and $T := N(\Theta, \epsilon')$. Let $\mathcal{C} = \{f_{\theta_1}, f_{\theta_2}, \cdots, f_{\theta_T}\}$ with $\{\theta_1, \theta_2, \cdots, \theta_T\}$ be such that $B(\theta_1, \epsilon'), \cdots, B(\theta_T, \epsilon')$ form an $\epsilon'$-cover of $\Theta$. Hence, for any $f_\theta \in \mathcal{F}_\Theta$, there is an $f_{\theta_k} \in \mathcal{C}$ such that $\|\theta - \theta_k\| \le \epsilon'$. Then, for any $f_{\theta_k} \in \mathcal{C}$:
$$\Big|\hat{J}^{m,n}_{RPC,\theta} - \mathbb{E}[\hat{J}_{RPC,\theta}]\Big| \le |P_n(f_{\theta_k}) - P(f_{\theta_k})| + \alpha\,|Q_m(f_{\theta_k}) - Q(f_{\theta_k})| + \beta\,\big|P_n(f^2_{\theta_k}) - P(f^2_{\theta_k})\big| + \gamma\,\big|Q_m(f^2_{\theta_k}) - Q(f^2_{\theta_k})\big| + 2\rho\,\big(1 + \alpha + 2(\beta+\gamma)U\big)\,\|\theta - \theta_k\|$$
$$\le |P_n(f_{\theta_k}) - P(f_{\theta_k})| + \alpha\,|Q_m(f_{\theta_k}) - Q(f_{\theta_k})| + \beta\,\big|P_n(f^2_{\theta_k}) - P(f^2_{\theta_k})\big| + \gamma\,\big|Q_m(f^2_{\theta_k}) - Q(f^2_{\theta_k})\big| + \frac{\epsilon}{2},$$
where we use the fact that $|P_n(f_\theta) - P_n(f_{\theta_k})| \le \rho\,\|\theta - \theta_k\|$ due to Assumption 2, and the result also applies to $|P(f_\theta) - P(f_{\theta_k})|$, $|Q_m(f_\theta) - Q_m(f_{\theta_k})|$, and $|Q(f_\theta) - Q(f_{\theta_k})|$;
and the fact that $\big|P_n(f^2_\theta) - P_n(f^2_{\theta_k})\big| \le 2\,\|f_\theta\|_\infty\,\rho\,\|\theta - \theta_k\| \le 2\rho U\,\|\theta - \theta_k\|$ due to Assumptions 1 and 2, with the result also applying to $\big|P(f^2_\theta) - P(f^2_{\theta_k})\big|$, $\big|Q_m(f^2_\theta) - Q_m(f^2_{\theta_k})\big|$, and $\big|Q(f^2_\theta) - Q(f^2_{\theta_k})\big|$. Hence, by a union bound over the cover,
$$\Pr\left(\sup_{f_\theta}\Big|\hat{J}^{m,n}_{RPC,\theta} - \mathbb{E}[\hat{J}_{RPC,\theta}]\Big| \ge \epsilon\right) \le \Pr\left(\max_{f_{\theta_k} \in \mathcal{C}} |P_n(f_{\theta_k}) - P(f_{\theta_k})| + \alpha\,|Q_m(f_{\theta_k}) - Q(f_{\theta_k})| + \beta\,\big|P_n(f^2_{\theta_k}) - P(f^2_{\theta_k})\big| + \gamma\,\big|Q_m(f^2_{\theta_k}) - Q(f^2_{\theta_k})\big| \ge \frac{\epsilon}{2}\right)$$
$$\le \sum_{k=1}^{T}\left[\Pr\Big(|P_n(f_{\theta_k}) - P(f_{\theta_k})| \ge \frac{\epsilon}{8}\Big) + \Pr\Big(\alpha\,|Q_m(f_{\theta_k}) - Q(f_{\theta_k})| \ge \frac{\epsilon}{8}\Big) + \Pr\Big(\beta\,\big|P_n(f^2_{\theta_k}) - P(f^2_{\theta_k})\big| \ge \frac{\epsilon}{8}\Big) + \Pr\Big(\gamma\,\big|Q_m(f^2_{\theta_k}) - Q(f^2_{\theta_k})\big| \ge \frac{\epsilon}{8}\Big)\right].$$
With Hoeffding's inequality,
$$\Pr\Big(|P_n(f_{\theta_k}) - P(f_{\theta_k})| \ge \frac{\epsilon}{8}\Big) \le 2\exp\Big(-\frac{n\epsilon^2}{32M^2}\Big), \qquad \Pr\Big(\alpha\,|Q_m(f_{\theta_k}) - Q(f_{\theta_k})| \ge \frac{\epsilon}{8}\Big) \le 2\exp\Big(-\frac{m\epsilon^2}{32M^2\alpha^2}\Big),$$
$$\Pr\Big(\beta\,\big|P_n(f^2_{\theta_k}) - P(f^2_{\theta_k})\big| \ge \frac{\epsilon}{8}\Big) \le 2\exp\Big(-\frac{n\epsilon^2}{32U^2\beta^2}\Big), \qquad \Pr\Big(\gamma\,\big|Q_m(f^2_{\theta_k}) - Q(f^2_{\theta_k})\big| \ge \frac{\epsilon}{8}\Big) \le 2\exp\Big(-\frac{m\epsilon^2}{32U^2\gamma^2}\Big).$$
To conclude,
$$\Pr\left(\sup_{f_\theta}\Big|\hat{J}^{m,n}_{RPC,\theta} - \mathbb{E}[\hat{J}_{RPC,\theta}]\Big| \ge \epsilon\right) \le 2\,N\!\left(\Theta, \frac{\epsilon}{4\rho\,(1 + \alpha + 2(\beta+\gamma)U)}\right)\left(e^{-\frac{n\epsilon^2}{32M^2}} + e^{-\frac{m\epsilon^2}{32M^2\alpha^2}} + e^{-\frac{n\epsilon^2}{32U^2\beta^2}} + e^{-\frac{m\epsilon^2}{32U^2\gamma^2}}\right). \; ∎$$

Part II - Approximation: Neural Network Universal Approximation. We leverage the universal function approximation lemma of neural networks:

Lemma 7 (Approximation (Hornik et al., 1989)) Let $\epsilon > 0$. There exist $d \in \mathbb{N}$ and a family of neural networks $\mathcal{F}_\Theta := \{f_\theta : \theta \in \Theta \subseteq \mathbb{R}^d\}$ where $\Theta$ is compact, such that $\big|\mathbb{E}[\hat{J}_{RPC,\theta}] - J_{RPC}\big| \le \epsilon$ for some $\theta \in \Theta$.

Part III - Bringing everything together. Now, we are ready to bring the estimation and approximation together to show that there exists a neural network $\theta$ such that, with high probability, $\hat{J}^{m,n}_{RPC,\theta}$ can approximate $J_{RPC}$ at a rate of $O(1/\sqrt{n'})$ with $n' = \min\{n, m\}$:

Proposition 3 With probability at least $1 - \delta$, there exists $\theta \in \Theta$ such that $|J_{RPC} - \hat{J}^{m,n}_{RPC,\theta}| = O\Big(\sqrt{\frac{d + \log(1/\delta)}{n'}}\Big)$, where $n' = \min\{n, m\}$.

Proof: The proof follows by combining Lemmas 6 and 7. First, Lemma 7 suggests that there exists $\theta \in \Theta$ with
$$\big|\mathbb{E}[\hat{J}_{RPC,\theta}] - J_{RPC}\big| \le \frac{\epsilon}{2}.$$
Next, we perform the analysis on the estimation error, aiming to find $n$, $m$ and the corresponding probability such that
$$\big|\hat{J}^{m,n}_{RPC,\theta} - \mathbb{E}[\hat{J}_{RPC,\theta}]\big| \le \frac{\epsilon}{2}.$$
Applying Lemma 6 with the covering number of the neural network, $N(\Theta, \epsilon) = O\big(\exp(d\log(1/\epsilon))\big)$ (Anthony & Bartlett, 2009), and letting $n' = \min\{n, m\}$:
$$\Pr\left(\big|\hat{J}^{m,n}_{RPC,\theta} - \mathbb{E}[\hat{J}_{RPC,\theta}]\big| \ge \frac{\epsilon}{2}\right) \le 2\,N\!\left(\Theta, \frac{\epsilon}{8\rho\,(1 + \alpha + 2(\beta+\gamma)U)}\right)\left(e^{-\frac{n\epsilon^2}{128M^2}} + e^{-\frac{m\epsilon^2}{128M^2\alpha^2}} + e^{-\frac{n\epsilon^2}{128U^2\beta^2}} + e^{-\frac{m\epsilon^2}{128U^2\gamma^2}}\right) = O\big(\exp(d\log(1/\epsilon) - n'\epsilon^2)\big),$$
where the big-O notation absorbs all the constants that are not needed in the following derivation. Since we want to bound the probability by $\delta$, we solve for $\epsilon$ such that $\exp(d\log(1/\epsilon) - n'\epsilon^2) \le \delta$. With $\log x \le x - 1$,
$$-n'\epsilon^2 + d(\epsilon - 1) \ge -n'\epsilon^2 + d\log\epsilon \ge \log\delta,$$
where this inequality holds when
$$\epsilon = O\left(\sqrt{\frac{d + \log(1/\delta)}{n'}}\right). \; ∎$$

A.4 PROOF OF PROPOSITION 2 IN THE MAIN TEXT - FROM AN ASYMPTOTIC VIEWPOINT

Here, we provide the variance analysis of $\hat{J}^{m,n}_{RPC}$ from an asymptotic viewpoint. First, assume the network is correctly specified, so that there exists a network parameter $\theta^*$ satisfying $f^*(x, y) = f_{\theta^*}(x, y) = r_{\alpha,\beta,\gamma}(x, y)$. Then, we recall that $\hat{J}^{m,n}_{RPC}$ is a consistent estimator of $J_{RPC}$ (see Proposition 3), and under regularity conditions, the estimated network parameter $\hat{\theta}$ in $\hat{J}^{m,n}_{RPC}$ satisfies asymptotic normality in the large-sample limit (see Theorem 5.23 in Van der Vaart (2000)). We recall the definition of $\hat{J}^{m,n}_{RPC,\theta}$ in equation 3 and let $n' = \min\{n, m\}$; the asymptotic expansion of $\hat{J}^{m,n}_{RPC}$ gives
$$\hat{J}^{m,n}_{RPC,\theta^*} = \hat{J}^{m,n}_{RPC,\hat{\theta}} + \nabla\hat{J}^{m,n}_{RPC,\hat{\theta}}\,(\theta^* - \hat{\theta}) + o(\|\theta^* - \hat{\theta}\|) = \hat{J}^{m,n}_{RPC,\hat{\theta}} + o_p\Big(\frac{1}{\sqrt{n'}}\Big), \quad (5)$$
where $\nabla\hat{J}^{m,n}_{RPC,\hat{\theta}} = 0$ since $\hat{\theta}$ is the maximizer in $\hat{J}^{m,n}_{RPC} = \sup_{f_\theta \in \mathcal{F}_\Theta} \hat{J}^{m,n}_{RPC,\theta}$.
A.4 PROOF OF PROPOSITION 2 IN THE MAIN TEXT - FROM AN ASYMPTOTIC VIEWPOINT

Here, we provide a variance analysis of $\hat J^{m,n}_{\mathrm{RPC}}$ from an asymptotic viewpoint. First, assume the network is correctly specified, so that there exists a network parameter $\theta^*$ satisfying $f^*(x, y) = f_{\theta^*}(x, y) = r_{\alpha,\beta,\gamma}(x, y)$. Recall that $\hat J^{m,n}_{\mathrm{RPC}}$ is a consistent estimator of $J_{\mathrm{RPC}}$ (see Proposition 3), and under regularity conditions the estimated network parameter $\hat\theta$ in $\hat J^{m,n}_{\mathrm{RPC}}$ satisfies asymptotic normality in the large-sample limit (see Theorem 5.23 in Van der Vaart (2000)). Recalling the definition of $\hat J^{m,n}_{\mathrm{RPC},\theta}$ in equation 3 and letting $n' = \min\{n, m\}$, the asymptotic expansion of $\hat J^{m,n}_{\mathrm{RPC},\theta^*}$ gives
$$\hat J^{m,n}_{\mathrm{RPC},\theta^*} = \hat J^{m,n}_{\mathrm{RPC},\hat\theta} + \nabla_\theta \hat J^{m,n}_{\mathrm{RPC},\hat\theta}\,(\theta^* - \hat\theta) + o(\|\theta^* - \hat\theta\|) = \hat J^{m,n}_{\mathrm{RPC},\hat\theta} + \nabla_\theta \hat J^{m,n}_{\mathrm{RPC},\hat\theta}\,(\theta^* - \hat\theta) + o_p(1/\sqrt{n'}) = \hat J^{m,n}_{\mathrm{RPC},\hat\theta} + o_p(1/\sqrt{n'}), \quad (5)$$
where $\nabla_\theta \hat J^{m,n}_{\mathrm{RPC},\hat\theta} = 0$ since $\hat\theta$ is the estimate from $\hat J^{m,n}_{\mathrm{RPC}} = \sup_{f_\theta \in \mathcal{F}_\Theta} \hat J^{m,n}_{\mathrm{RPC},\theta}$.

Next, we recall the definition in equation 4:
$$\mathbb{E}[\hat J_{\mathrm{RPC},\hat\theta}] = \mathbb{E}_{P_{XY}}[f_{\hat\theta}(x, y)] - \alpha\,\mathbb{E}_{P_XP_Y}[f_{\hat\theta}(x, y)] - \frac{\beta}{2}\mathbb{E}_{P_{XY}}[f^2_{\hat\theta}(x, y)] - \frac{\gamma}{2}\mathbb{E}_{P_XP_Y}[f^2_{\hat\theta}(x, y)].$$
Likewise, the asymptotic expansion of $\mathbb{E}[\hat J_{\mathrm{RPC},\hat\theta}]$ gives
$$\mathbb{E}[\hat J_{\mathrm{RPC},\hat\theta}] = \mathbb{E}[\hat J_{\mathrm{RPC},\theta^*}] + \nabla_\theta \mathbb{E}[\hat J_{\mathrm{RPC},\theta^*}]\,(\hat\theta - \theta^*) + o(\|\hat\theta - \theta^*\|) = \mathbb{E}[\hat J_{\mathrm{RPC},\theta^*}] + o_p(1/\sqrt{n'}), \quad (6)$$
where $\nabla_\theta \mathbb{E}[\hat J_{\mathrm{RPC},\theta^*}] = 0$ since $\mathbb{E}[\hat J_{\mathrm{RPC},\theta^*}] = J_{\mathrm{RPC}}$ and $\theta^*$ satisfies $f^*(x, y) = f_{\theta^*}(x, y)$.

Combining equations 5 and 6, and writing $r$ for $r_{\alpha,\beta,\gamma}$:
$$\hat J^{m,n}_{\mathrm{RPC},\hat\theta} - \mathbb{E}[\hat J_{\mathrm{RPC},\hat\theta}] = \hat J^{m,n}_{\mathrm{RPC},\theta^*} - J_{\mathrm{RPC}} + o_p(1/\sqrt{n'})$$
$$= \frac{1}{n}\sum_{i=1}^{n} r(x_i, y_i) - \frac{\alpha}{m}\sum_{j=1}^{m} r(x'_j, y'_j) - \frac{\beta}{2n}\sum_{i=1}^{n} r^2(x_i, y_i) - \frac{\gamma}{2m}\sum_{j=1}^{m} r^2(x'_j, y'_j)$$
$$\quad - \mathbb{E}_{P_{XY}}[r(x, y)] + \alpha\,\mathbb{E}_{P_XP_Y}[r(x, y)] + \frac{\beta}{2}\mathbb{E}_{P_{XY}}[r^2(x, y)] + \frac{\gamma}{2}\mathbb{E}_{P_XP_Y}[r^2(x, y)] + o_p(1/\sqrt{n'})$$
$$= \Big(\frac{1}{n}\sum_{i=1}^{n}\Big[r(x_i, y_i) - \frac{\beta}{2}r^2(x_i, y_i)\Big] - \mathbb{E}_{P_{XY}}\Big[r(x, y) - \frac{\beta}{2}r^2(x, y)\Big]\Big) - \Big(\frac{1}{m}\sum_{j=1}^{m}\Big[\alpha r(x'_j, y'_j) + \frac{\gamma}{2}r^2(x'_j, y'_j)\Big] - \mathbb{E}_{P_XP_Y}\Big[\alpha r(x, y) + \frac{\gamma}{2}r^2(x, y)\Big]\Big) + o_p(1/\sqrt{n'}).$$
Therefore, the asymptotic variance of $\hat J^{m,n}_{\mathrm{RPC}}$ is
$$\mathrm{Var}[\hat J^{m,n}_{\mathrm{RPC}}] = \frac{1}{n}\mathrm{Var}_{P_{XY}}\Big[r - \frac{\beta}{2}r^2\Big] + \frac{1}{m}\mathrm{Var}_{P_XP_Y}\Big[\alpha r + \frac{\gamma}{2}r^2\Big] + o(1/n').$$
First, we look at $\mathrm{Var}_{P_{XY}}[r - \frac{\beta}{2}r^2]$. Since $\beta > 0$ and $-\alpha/\gamma \le r \le 1/\beta$, a simple calculation gives $-\frac{2\alpha\gamma + \beta\alpha^2}{2\gamma^2} \le r - \frac{\beta}{2}r^2 \le \frac{1}{2\beta}$. Hence,
$$\mathrm{Var}_{P_{XY}}\Big[r - \frac{\beta}{2}r^2\Big] \le \max\Big\{\frac{2\alpha\gamma + \beta\alpha^2}{2\gamma^2},\ \frac{1}{2\beta}\Big\}^2.$$
Next, we look at $\mathrm{Var}_{P_XP_Y}[\alpha r + \frac{\gamma}{2}r^2]$. Since $\alpha \ge 0$, $\gamma > 0$, and $-\alpha/\gamma \le r \le 1/\beta$, a simple calculation gives $-\frac{\alpha^2}{2\gamma} \le \alpha r + \frac{\gamma}{2}r^2 \le \frac{2\alpha\beta + \gamma}{2\beta^2}$. Hence,
$$\mathrm{Var}_{P_XP_Y}\Big[\alpha r + \frac{\gamma}{2}r^2\Big] \le \max\Big\{\frac{\alpha^2}{2\gamma},\ \frac{2\alpha\beta + \gamma}{2\beta^2}\Big\}^2.$$
Combining everything, we restate Proposition 2 of the main text:

Proposition 4 (Asymptotic variance of $\hat J^{m,n}_{\mathrm{RPC}}$)
$$\mathrm{Var}[\hat J^{m,n}_{\mathrm{RPC}}] = \frac{1}{n}\mathrm{Var}_{P_{XY}}\Big[r - \frac{\beta}{2}r^2\Big] + \frac{1}{m}\mathrm{Var}_{P_XP_Y}\Big[\alpha r + \frac{\gamma}{2}r^2\Big] + o(1/n') \le \frac{1}{n'}\Big(\max\Big\{\frac{2\alpha\gamma + \beta\alpha^2}{2\gamma^2},\ \frac{1}{2\beta}\Big\}^2 + \max\Big\{\frac{\alpha^2}{2\gamma},\ \frac{2\alpha\beta + \gamma}{2\beta^2}\Big\}^2\Big) + o(1/n').$$

A.5 PROOF OF PROPOSITION 2 IN THE MAIN TEXT - FROM BOUNDEDNESS OF $f_\theta$

As discussed in Assumption 1, for the estimator $\hat J^{m,n}_{\mathrm{RPC}}$ we can bound the function $f_\theta \in \mathcal{F}_\Theta$ within $[-\alpha/\gamma,\, 1/\beta]$ without losing precision. Re-arranging $\hat J^{m,n}_{\mathrm{RPC},\theta}$:
$$\hat J^{m,n}_{\mathrm{RPC},\theta} = \frac{1}{n}\sum_{i=1}^{n}\Big[f_\theta(x_i, y_i) - \frac{\beta}{2}f^2_\theta(x_i, y_i)\Big] - \frac{1}{m}\sum_{j=1}^{m}\Big[\alpha f_\theta(x'_j, y'_j) + \frac{\gamma}{2}f^2_\theta(x'_j, y'_j)\Big].$$
Then, since $-\alpha/\gamma \le f_\theta(\cdot,\cdot) \le 1/\beta$, basic calculations give
$$-\frac{2\alpha\gamma + \beta\alpha^2}{2\gamma^2} \le f_\theta(x_i, y_i) - \frac{\beta}{2}f^2_\theta(x_i, y_i) \le \frac{1}{2\beta}, \qquad -\frac{\alpha^2}{2\gamma} \le \alpha f_\theta(x'_j, y'_j) + \frac{\gamma}{2}f^2_\theta(x'_j, y'_j) \le \frac{2\alpha\beta + \gamma}{2\beta^2}.$$
The resulting variances satisfy
$$\mathrm{Var}\Big[f_\theta(x_i, y_i) - \frac{\beta}{2}f^2_\theta(x_i, y_i)\Big] \le \max\Big\{\frac{2\alpha\gamma + \beta\alpha^2}{2\gamma^2},\ \frac{1}{2\beta}\Big\}^2, \qquad \mathrm{Var}\Big[\alpha f_\theta(x'_j, y'_j) + \frac{\gamma}{2}f^2_\theta(x'_j, y'_j)\Big] \le \max\Big\{\frac{\alpha^2}{2\gamma},\ \frac{2\alpha\beta + \gamma}{2\beta^2}\Big\}^2.$$
Taking the mean over the $n$ and $m$ independent samples gives the result:

Proposition 5 (Variance of $\hat J^{m,n}_{\mathrm{RPC}}$)
$$\mathrm{Var}[\hat J^{m,n}_{\mathrm{RPC}}] \le \frac{1}{n'}\Big(\max\Big\{\frac{2\alpha\gamma + \beta\alpha^2}{2\gamma^2},\ \frac{1}{2\beta}\Big\}^2 + \max\Big\{\frac{\alpha^2}{2\gamma},\ \frac{2\alpha\beta + \gamma}{2\beta^2}\Big\}^2\Big), \qquad n' = \min\{n, m\}.$$
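As a quick numerical sanity check of Proposition 5 (the check itself is ours, not from the paper), the sketch below draws critic scores supported on the admissible range $[-\alpha/\gamma, 1/\beta]$, recomputes the estimator of equation 3 many times, and compares its empirical variance against the bound:

```python
import numpy as np

rng = np.random.default_rng(0)
alpha, beta, gamma = 1.0, 0.005, 1.0        # the CIFAR-10 relative parameters
n = m = 512
n_prime = min(n, m)
lo, hi = -alpha / gamma, 1 / beta           # admissible range of f_theta

estimates = []
for _ in range(2000):
    f_pos = rng.uniform(lo, hi, size=n)     # any scores satisfying Assumption 1
    f_neg = rng.uniform(lo, hi, size=m)
    estimates.append(f_pos.mean() - alpha * f_neg.mean()
                     - beta / 2 * (f_pos ** 2).mean()
                     - gamma / 2 * (f_neg ** 2).mean())

bound = (max((2 * alpha * gamma + beta * alpha ** 2) / (2 * gamma ** 2),
             1 / (2 * beta)) ** 2
         + max(alpha ** 2 / (2 * gamma),
               (2 * alpha * beta + gamma) / (2 * beta ** 2)) ** 2) / n_prime
print(f"empirical Var: {np.var(estimates):.1f} <= bound: {bound:.1f}")
```

The bound is loose for benign score distributions, but it is finite for any admissible critic, which is the property Propositions 4 and 5 establish.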
A.6 IMPLEMENTATION OF EXPERIMENTS

For visual representation learning, we follow the implementation in https://github.com/google-research/simclr. For speech representation learning, we follow the implementation in https://github.com/facebookresearch/CPC_audio. For MI estimation, we follow the implementation in https://github.com/yaohungt/Pointwise_Dependency_Neural_Estimation/tree/master/MI_Est_and_CrossModal.

A.7 RELATIVE PREDICTIVE CODING ON VISION

The whole pretraining pipeline contains the following steps. First, a stochastic data augmentation transforms one image sample $x_k$ into two different but correlated augmented views, $\tilde x_{2k-1}$ and $\tilde x_{2k}$. Then a base encoder $f(\cdot)$, implemented with a ResNet (He et al., 2016), extracts representations from the augmented views, creating representations $h_{2k-1}$ and $h_{2k}$. A small neural network $g(\cdot)$, called the projection head, then maps $h_{2k-1}$ and $h_{2k}$ to $z_{2k-1}$ and $z_{2k}$ in a different latent space. For each minibatch of $N$ samples, $2N$ views are generated. For each image $x_k$ there is one positive pair ($\tilde x_{2k-1}$, $\tilde x_{2k}$) and $2(N-1)$ negative samples. The RPC loss between a pair of positive views $\tilde x_i$ and $\tilde x_j$ (augmented from the same image) is obtained by substituting $f_\theta(\tilde x_i, \tilde x_j) = z_i^\top z_j / \tau = s_{i,j}$ ($\tau$ is a temperature hyperparameter) into the definition of RPC:
$$\ell^{\mathrm{RPC}}_{i,j} = -\Big(s_{i,j} - \frac{\alpha}{2(N-1)}\sum_{k=1}^{2N} \mathbb{1}_{[k \ne i]}\, s_{i,k} - \frac{\beta}{2}s^2_{i,j} - \frac{\gamma}{2 \cdot 2(N-1)}\sum_{k=1}^{2N} \mathbb{1}_{[k \ne i]}\, s^2_{i,k}\Big). \quad (7)$$
For losses other than RPC, a hidden normalization of $s_{i,j}$ is often required, replacing $z_i^\top z_j$ with $(z_i^\top z_j)/(\|z_i\|\|z_j\|)$. CPC and WPC adopt this normalization, and the other objectives need it to stabilize the training variance. RPC does not need this normalization.
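A minimal PyTorch sketch of the loss in equation 7 follows. This is our illustration rather than the authors' released TensorFlow implementation; the tensor layout (rows $2k-1$ and $2k$ holding the two views of image $k$) and all names are our assumptions:

```python
import torch

def rpc_image_loss(z, alpha=0.3, beta=0.001, gamma=0.1, temperature=32.0):
    """Sketch of the RPC loss (equation 7) over a batch of 2N projected views.

    z: [2N, d] tensor of projection-head outputs; rows 2k and 2k + 1
    hold the two augmented views of image k.
    """
    two_n = z.shape[0]
    # Un-normalized scores s_{i,j} = z_i^T z_j / tau; RPC skips the hidden
    # normalization required by the other objectives.
    s = z @ z.t() / temperature                        # [2N, 2N]
    idx = torch.arange(two_n)
    s_pos = s[idx, idx ^ 1]                            # s_{i,j} for the positive j
    # Sum over k != i, normalized by 2(N - 1) as in equation 7.
    off_diag = ~torch.eye(two_n, dtype=torch.bool)
    denom = two_n - 2                                  # 2(N - 1)
    s_neg_mean = (s * off_diag).sum(dim=1) / denom
    s2_neg_mean = (s.pow(2) * off_diag).sum(dim=1) / denom
    loss = -(s_pos - alpha * s_neg_mean
             - 0.5 * beta * s_pos.pow(2)
             - 0.5 * gamma * s2_neg_mean)
    return loss.mean()
```

Adding the hidden normalization, i.e., replacing $z_i^\top z_j$ with $(z_i^\top z_j)/(\|z_i\|\|z_j\|)$ before the temperature scaling, would recover the score used by the other objectives.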
Table 4: Confidence intervals of the performance of $J_{\mathrm{RPC}}$ and $J_{\mathrm{CPC}}$ on CIFAR-10/-100 and ImageNet.

Objective      CIFAR-10            CIFAR-100           ImageNet
$J_{\mathrm{CPC}}$   (91.09%, 91.13%)   (77.11%, 77.36%)   (73.39%, 73.48%)
$J_{\mathrm{RPC}}$   (91.16%, 91.47%)   (77.41%, 77.98%)   (73.92%, 74.43%)

A.8 CIFAR-10/-100 AND IMAGENET EXPERIMENT DETAILS

ImageNet. Following the settings in Chen et al. (2020b;c), we train the model on a Cloud TPU with 128 cores, with a batch size of 4,096 and global batch normalization³ (Ioffe & Szegedy, 2015). Here the term batch size refers to the number of images (or utterances in the speech experiments) used per GPU, while the term minibatch size refers to the number of negative samples used to calculate the objective, such as CPC or our proposed RPC. The largest model we train is a 152-layer ResNet with selective kernels (SK) (Li et al., 2019) and 2× wider channels. We use the LARS optimizer (You et al., 2017) with momentum 0.9. The learning rate increases linearly for the first 20 epochs, reaching a maximum of 6.4, and is then decayed with a cosine schedule. The weight decay is 10⁻⁴. An MLP projection head $g(\cdot)$ with three layers is used on top of the ResNet encoder. Unlike Chen et al. (2020c), we do not use a memory buffer, and we train the model for only 100 epochs rather than 800 due to computational constraints. These two choices reduce the CPC performance benchmark by about 2% under an otherwise identical setting. The unsupervised pre-training is followed by supervised fine-tuning. Following SimCLRv2 (Chen et al., 2020b;c), we fine-tune the 3-layer $g(\cdot)$ for the downstream tasks. We use learning rates 0.16 and 0.064 for the standard 50-layer ResNet and the larger 152-layer ResNet respectively, and remove weight decay and learning-rate warmup. Different from Chen et al. (2020c), we use a batch size of 4,096 and do not use global batch normalization during fine-tuning. For $J_{\mathrm{RPC}}$ we disable hidden normalization and use a temperature $\tau = 32$. For all other objectives, we use hidden normalization and $\tau = 0.1$, following previous work (Chen et al., 2020c). For the relative parameters, we use $\alpha = 0.3$, $\beta = 0.001$, $\gamma = 0.1$ for ResNet-50 and $\alpha = 0.3$, $\beta = 0.001$, $\gamma = 0.005$ for ResNet-152.

³For WPC (Ozair et al., 2019), global batch normalization during pretraining is disabled, since we enforce the 1-Lipschitz constraint with a gradient penalty (Gulrajani et al., 2017).

CIFAR-10/-100. Following the settings in Chen et al. (2020b), we train the model on a single GPU with a batch size of 512 and global batch normalization (Ioffe & Szegedy, 2015). We use ResNets (He et al., 2016) of depth 18 and depth 50, without selective kernels (Li et al., 2019) or widened channels. We use the LARS optimizer (You et al., 2017) with momentum 0.9. The learning rate increases linearly for the first 20 epochs, reaching a maximum of 6.4, and is then decayed with a cosine schedule. The weight decay is 10⁻⁴. An MLP projection head $g(\cdot)$ with three layers is used on top of the ResNet encoder. Unlike Chen et al. (2020c), we do not use a memory buffer. We train the model for 1,000 epochs. The unsupervised pre-training is followed by supervised fine-tuning. Following SimCLRv2 (Chen et al., 2020b;c), we fine-tune the 3-layer $g(\cdot)$ for the downstream tasks. We use a learning rate of 0.16 for the standard 50-layer ResNet, and remove weight decay and learning-rate warmup. For $J_{\mathrm{RPC}}$ we disable hidden normalization and use a temperature $\tau = 128$. For all other objectives, we use hidden normalization and $\tau = 0.5$, following previous work (Chen et al., 2020c). For the relative parameters, we use $\alpha = 1.0$, $\beta = 0.005$, and $\gamma = 1.0$.

STL-10. We also perform pre-training and fine-tuning on STL-10 (Coates et al., 2011) using the model proposed by Chuang et al. (2020), which indirectly approximates the distribution of negative samples so that the objective is debiased; its implementation of contrastive learning is otherwise consistent with Chen et al. (2020b). We use a ResNet of depth 50 as the encoder for pre-training, with the Adam optimizer, learning rate 0.001, and weight decay 10⁻⁶. The temperature $\tau$ is set to 0.5 for all objectives other than $J_{\mathrm{RPC}}$, which disables hidden normalization and uses $\tau = 128$. The downstream task performance increases from 83.4% with $J_{\mathrm{CPC}}$ to 84.1% with $J_{\mathrm{RPC}}$.

Confidence intervals. We also provide the confidence intervals of $J_{\mathrm{RPC}}$ and $J_{\mathrm{CPC}}$ on CIFAR-10, CIFAR-100, and ImageNet, using ResNet-18, ResNet-18, and ResNet-50 respectively (a 95% confidence level is chosen), in Table 4. CPC and RPC use the same experimental settings throughout this paper. Here we use the relative parameters ($\alpha = 1.0$, $\beta = 0.005$, $\gamma = 1.0$) for $J_{\mathrm{RPC}}$, which give the best performance on CIFAR-10. The confidence intervals of CPC do not overlap with those of RPC, so the difference in downstream task performance between RPC and CPC is statistically significant.
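For convenience, the pretraining settings described in this section can be collected in one place. The following dictionary is a hypothetical summary we added for readability; the field names are ours and the values are copied from the text above (it is not a configuration file from the released code):

```python
# Hypothetical summary of the A.8 pretraining settings (field names ours).
PRETRAIN_CONFIGS = {
    "imagenet": {
        "batch_size": 4096, "optimizer": "LARS", "momentum": 0.9,
        "warmup_epochs": 20, "peak_lr": 6.4, "lr_schedule": "cosine",
        "weight_decay": 1e-4, "epochs": 100, "memory_buffer": False,
        # ResNet-50 values; for ResNet-152 use gamma = 0.005.
        "rpc": {"tau": 32, "hidden_norm": False,
                "alpha": 0.3, "beta": 0.001, "gamma": 0.1},
    },
    "cifar": {
        "batch_size": 512, "optimizer": "LARS", "momentum": 0.9,
        "warmup_epochs": 20, "peak_lr": 6.4, "lr_schedule": "cosine",
        "weight_decay": 1e-4, "epochs": 1000, "memory_buffer": False,
        "rpc": {"tau": 128, "hidden_norm": False,
                "alpha": 1.0, "beta": 0.005, "gamma": 1.0},
    },
    "stl10": {
        "batch_size": None,  # not specified in the text
        "optimizer": "Adam", "lr": 0.001, "weight_decay": 1e-6,
        "rpc": {"tau": 128, "hidden_norm": False},
    },
}
```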
A.9 RELATIVE PREDICTIVE CODING ON SPEECH

For speech representation learning, we adopt the general architecture from Oord et al. (2018). Given an input signal $x_{1:T}$ with $T$ time steps, we first pass it through an encoder $\phi_\theta$ parametrized by $\theta$ to produce a sequence of hidden representations $\{h_{1:T}\}$, where $h_t = \phi_\theta(x_t)$. After that, we obtain the contextual representation $c_t$ at time step $t$ with a sequential model $\psi_\rho$ parametrized by $\rho$: $c_t = \psi_\rho(h_1, \dots, h_t)$, where $c_t$ contains the context information before time step $t$. For unsupervised pre-training, we use a multi-layer convolutional network as the encoder $\phi_\theta$ and an LSTM with hidden dimension 256 as the sequential model $\psi_\rho$. Here, the contrast is between the positive pair $(h_{t+k}, c_t)$, where $k$ is the number of time steps ahead, and the negative pairs $(h_i, c_t)$, where $h_i$ is randomly sampled from $\mathcal{N}$, a batch of hidden representations of signals assumed to be unrelated to $c_t$. The scoring function $f$ based on Equation 2, at step $t$ and look-ahead $k$, is $f_k(h, c_t) = \exp(h^\top W_k c_t)$, where $W_k$ is a learnable linear transformation defined separately for each $k \in \{1, \dots, K\}$ and $K$ is predetermined as 12 time steps. The loss in Equation 2 is then formulated as
$$\ell^{\mathrm{RPC}}_{t,k} = -\Big(f_k(h_{t+k}, c_t) - \frac{\alpha}{|\mathcal{N}|}\sum_{h_i \in \mathcal{N}} f_k(h_i, c_t) - \frac{\beta}{2}f^2_k(h_{t+k}, c_t) - \frac{\gamma}{2|\mathcal{N}|}\sum_{h_i \in \mathcal{N}} f^2_k(h_i, c_t)\Big). \quad (8)$$
We use the relative parameters $\alpha = 1$, $\beta = 0.25$, and $\gamma = 1$, and the temperature $\tau = 16$ for $J_{\mathrm{RPC}}$. For $J_{\mathrm{CPC}}$ we follow the original implementation, which sets $\tau = 1$. We fix all other experimental setups, including the architecture, learning rate, and optimizer. As shown in Table 3, $J_{\mathrm{RPC}}$ yields better downstream task performance and is closer to the performance of a fully supervised model.
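A matching PyTorch sketch of the per-step speech loss in equation 8 is given below; it is our illustration rather than the CPC_audio implementation (the batched shapes, the `nn.Linear` parametrization of $W_k$, and the place where the temperature $\tau$ rescales the logit are assumptions on our part):

```python
import torch
import torch.nn as nn

class RPCSpeechLoss(nn.Module):
    """Sketch of the RPC loss (equation 8) for one look-ahead step k."""

    def __init__(self, dim, alpha=1.0, beta=0.25, gamma=1.0, tau=16.0):
        super().__init__()
        self.w_k = nn.Linear(dim, dim, bias=False)   # learnable W_k
        self.alpha, self.beta, self.gamma, self.tau = alpha, beta, gamma, tau

    def score(self, h, c):
        # f_k(h, c_t) = exp(h^T W_k c_t); we assume tau rescales the logit.
        return torch.exp((h * self.w_k(c)).sum(dim=-1) / self.tau)

    def forward(self, h_pos, h_neg, c_t):
        """h_pos: [B, d] future step h_{t+k}; h_neg: [B, |N|, d] negatives
        drawn from unrelated signals; c_t: [B, d] context at step t."""
        f_pos = self.score(h_pos, c_t)                # [B]
        f_neg = self.score(h_neg, c_t.unsqueeze(1))   # [B, |N|]
        loss = -(f_pos
                 - self.alpha * f_neg.mean(dim=1)
                 - 0.5 * self.beta * f_pos.pow(2)
                 - 0.5 * self.gamma * f_neg.pow(2).mean(dim=1))
        return loss.mean()
```

In the full model, one such loss would typically be computed for each look-ahead $k \in \{1, \dots, K\}$ with $K = 12$ and the results aggregated.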
A.10 EMPIRICAL OBSERVATIONS ON VARIANCE AND MINIBATCH SIZE

Variance experiment setup. We compare the variance of $J_{\mathrm{DV}}$, $J_{\mathrm{NWJ}}$, and the proposed $J_{\mathrm{RPC}}$. The experiments are performed using SimCLRv2 (Chen et al., 2020c) on the CIFAR-10 dataset. We use a ResNet of depth 18 with a batch size of 512. We train each objective for 30K training steps and record its value. In Figure 1, we use a temperature $\tau = 128$ for all objectives. Unlike the other experiments, where hidden normalization is applied to the other objectives, here we remove hidden normalization for all objectives, since the normalized objectives no longer reflect their original values. From Figure 1, $J_{\mathrm{RPC}}$ enjoys lower variance and more stable training than $J_{\mathrm{DV}}$ and $J_{\mathrm{NWJ}}$.

Minibatch size experiment setup. We study the effect of minibatch size on the downstream performance of different objectives. The experiments are performed using SimCLRv2 (Chen et al., 2020c) on the CIFAR-10 dataset, as well as the model from Rivière et al. (2020) on the LibriSpeech-100h dataset (Panayotov et al., 2015). For the vision task, we use the default temperature $\tau = 0.5$ from Chen et al. (2020c) and the hidden normalization mentioned in Section 3 for $J_{\mathrm{CPC}}$. For $J_{\mathrm{RPC}}$, we use temperatures $\tau = 128$ and $\tau = 16$ for the vision and speech tasks respectively, both without hidden normalization.

A.11 MUTUAL INFORMATION ESTIMATION

Our method is compared with the baselines CPC (Oord et al., 2018), NWJ (Nguyen et al., 2010), JSD (Nowozin et al., 2016), and SMILE (Song & Ermon, 2019). All approaches use the same design of $f(x, y)$: a 3-layer neural network taking the concatenated $(x, y)$ as input. We also fix the learning rate, the optimizer, and the minibatch size across all estimators for a fair comparison. We present the results of mutual information estimation by Relative Predictive Coding with different sets of relative parameters in Figure 4. In the first row, we set $\beta = 10^{-3}$, $\gamma = 1$, and experiment with different $\alpha$ values; in the second row, we set $\alpha = 1$, $\gamma = 1$ and vary $\beta$; in the last row, we set $\alpha = 1$, $\beta = 10^{-3}$ and vary $\gamma$. From the figure, a small $\beta$ around $10^{-3}$ and a large $\gamma$ around 1.0 are crucial for an estimate with relatively low bias and low variance. This conclusion is consistent with Section 3 in the main text.

Figure 4: Mutual information estimation by RPC performed on a 20-d correlated Gaussian distribution, with different sets of relative parameters.

Figure 5: Mutual information estimation by DoE performed on a 20-d correlated Gaussian distribution. The figure on the left shows the parametrization under a Gaussian (correctly specified), and the figure on the right under a Logistic (mis-specified).

We also compare $J_{\mathrm{RPC}}$ with Difference of Entropies (DoE) (McAllester & Stratos, 2020) in two sets of experiments: in the first, we compare $J_{\mathrm{RPC}}$ and DoE when the MI is large (> 100 nats); in the second, we compare them using the setup in this section (MI < 12 nats, increasing by 2 nats every 4K training steps). On the one hand, when the MI is large (> 100 nats), we acknowledge that DoE performs well on MI estimation, whereas $J_{\mathrm{RPC}}$ only estimates the MI at around 20 nats; this analysis is based on the code from https://github.com/karlstratos/doe. On the other hand, when the true MI is small, the DoE method is more unstable than $J_{\mathrm{RPC}}$, as shown in Figure 5, which illustrates the results of the DoE method when the parametrized distribution is an isotropic Gaussian (correctly specified) or a Logistic (mis-specified). Figure 3 only shows the results under the Gaussian parametrization.
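To make the toy setup concrete, below is a small self-contained sketch (ours, not the released code; the critic width, optimizer settings, number of steps, and the final density-ratio inversion are our assumptions). It trains a 3-layer critic with the $J_{\mathrm{RPC}}$ objective on a 20-d correlated Gaussian and converts the learned scores into an MI estimate, assuming the main text's definition $r_{\alpha,\beta,\gamma}(x, y) = (r(x, y) - \alpha)/(\beta r(x, y) + \gamma)$ with $r = p(x, y)/(p(x)p(y))$, which is consistent with the range $[-\alpha/\gamma, 1/\beta]$ used in Assumption 1:

```python
import torch
import torch.nn as nn

# Toy MI estimation with the RPC objective on a 20-d correlated Gaussian.
# True MI for per-dimension correlation rho is -d/2 * log(1 - rho^2).
d, rho = 20, 0.8
alpha, beta, gamma = 1.0, 1e-3, 1.0
true_mi = -d / 2 * torch.log(torch.tensor(1 - rho ** 2))

critic = nn.Sequential(              # 3-layer critic on concatenated (x, y)
    nn.Linear(2 * d, 256), nn.ReLU(),
    nn.Linear(256, 256), nn.ReLU(),
    nn.Linear(256, 1),
)
opt = torch.optim.Adam(critic.parameters(), lr=1e-4)

def sample_batch(n=128):
    x = torch.randn(n, d)
    y = rho * x + (1 - rho ** 2) ** 0.5 * torch.randn(n, d)
    return x, y

for step in range(5000):
    x, y = sample_batch()
    f_pos = critic(torch.cat([x, y], dim=-1)).squeeze(-1)
    y_shuffled = y[torch.randperm(y.shape[0])]       # negatives: mismatched pairs
    f_neg = critic(torch.cat([x, y_shuffled], dim=-1)).squeeze(-1)
    j_rpc = (f_pos.mean() - alpha * f_neg.mean()
             - 0.5 * beta * f_pos.pow(2).mean()
             - 0.5 * gamma * f_neg.pow(2).mean())
    opt.zero_grad()
    (-j_rpc).backward()                              # maximize J_RPC
    opt.step()

# Invert f* = (r - alpha) / (beta * r + gamma) to recover the density ratio
# r, then estimate MI as the average of log r over positive pairs.
with torch.no_grad():
    x, y = sample_batch(4096)
    f = critic(torch.cat([x, y], dim=-1)).squeeze(-1)
    f = f.clamp(-alpha / gamma + 1e-6, 1 / beta - 1e-6)  # admissible range
    r = (alpha + gamma * f) / (1 - beta * f)
    mi_est = torch.log(r.clamp_min(1e-8)).mean()
print(f"true MI: {true_mi:.2f} nats, RPC estimate: {mi_est:.2f} nats")
```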