# efficient_conditionally_invariant_representation_learning__8d601154.pdf Published as a conference paper at ICLR 2023 EFFICIENT CONDITIONALLY INVARIANT REPRESENTATION LEARNING Roman Pogodin Gatsby Unit, UCL rmn.pogodin@gmail.com Namrata Deka UBC dnamrata@cs.ubc.ca Yazhe Li Deep Mind & Gatsby Unit, UCL yazhe@google.com Danica J. Sutherland UBC & Amii dsuth@cs.ubc.ca Victor Veitch UChicago & Google Brain victorveitch@google.com Arthur Gretton Gatsby Unit, UCL arthur.gretton@gmail.com We introduce the Conditional Independence Regression Covarianc E (CIRCE), a measure of conditional independence for multivariate continuous-valued variables. CIRCE applies as a regularizer in settings where we wish to learn neural features φ(X) of data X to estimate a target Y , while being conditionally independent of a distractor Z given Y . Both Z and Y are assumed to be continuous-valued but relatively low dimensional, whereas X and its features may be complex and high dimensional. Relevant settings include domain-invariant learning, fairness, and causal learning. The procedure requires just a single ridge regression from Y to kernelized features of Z, which can be done in advance. It is then only necessary to enforce independence of φ(X) from residuals of this regression, which is possible with attractive estimation properties and consistency guarantees. By contrast, earlier measures of conditional feature dependence require multiple regressions for each step of feature learning, resulting in more severe bias and variance, and greater computational cost. When sufficiently rich features are used, we establish that CIRCE is zero if and only if φ(X) Z | Y . In experiments, we show superior performance to previous methods on challenging benchmarks, including learning conditionally invariant image features. 1 INTRODUCTION We consider a learning setting where we have labels Y that we would like to predict from features X, and we additionally observe some metadata Z that we would like our prediction to be invariant to. In particular, our aim is to learn a representation function φ for the features such that φ(X) Z | Y . There are at least three motivating settings where this task arises. 1. Fairness. In this context, Z is some protected attribute (e.g., race or sex) and the condition φ(X) Z | Y is the equalized odds condition (Mehrabi et al., 2021). 2. Domain invariant learning. In this case, Z is a label for the environment in which the data was collected (e.g., if we collect data from multiple hospitals, Zi labels the hospital that the ith datapoint is from). The condition φ(X) Z | Y is sometimes used as a target for invariant learning (e.g., Long et al., 2018; Tachet des Combes et al., 2020; Goel et al., 2021; Jiang & Veitch, 2022). Wang & Veitch (2022) argue that this condition is well-motivated in cases where Y causes X. 3. Causal representation learning. Neural networks may learn undesirable shortcuts for their tasks e.g., classifying images based on the texture of the background. To mitigate this issue, various schemes have been proposed to force the network to use causally relevant factors in its decision (e.g., Veitch et al., 2021; Makar et al., 2022; Puli et al., 2022). The structural causal assumptions used in such approaches imply conditional independence relationships between the features we would like the network to use, and observed metadata Equal contribution. 
Code for image data experiments is available at github.com/namratadeka/circe Published as a conference paper at ICLR 2023 that we may wish to be invariant to. These approaches then try to learn causally structured representations by enforcing this conditional independence in a learned representation. In this paper, we will be largely agnostic to the motivating application, instead concerning ourselves with how to learn a representation φ that satisfies the target condition. Our interest is in the (common) case where X is some high-dimensional structured data e.g., text, images, or video and we would like to model the relationship between X and (the relatively low-dimensional) Y, Z using a neural network representation φ(X). There are a number of existing techniques for learning conditionally invariant representations using neural networks (e.g., in all the motivating applications mentioned above). Usually, however, they rely on the labels Y being categorical with a small number of categories. We develop a method for conditionally invariant representation learning that is effective even when the labels Y and attributes Z are continuous or moderately high-dimensional. To understand the challenge, it is helpful to contrast with the task of learning a representation φ satisfying the marginal independence φ(X) Z. To accomplish this, we might define a neural network to predict Y in the usual manner, interpret the penultimate layer as the representation φ, and then add a regularization term that penalizes some measure of dependence between φ(X) and Z. As φ changes at each step, we d typically compute an estimate based on the samples in each mini-batch (e.g., Beutel et al., 2019; Veitch et al., 2021). The challenge for extending this procedure to conditional invariance is simply that it s considerably harder to measure. More precisely, as conditioning on Y splits the available data,1 we require large samples to assess conditional independence. When regularizing neural network training, however, we only have the samples available in each mini-batch: often not enough for a reliable estimate. The main contribution of this paper is a technique that reduces the problem of learning a conditionally independent representation to the problem of learning a marginally independent representation, following a characterization of conditional independence due to Daudin (1980). We first construct a particular statistic ζ(Y, Z) such that enforcing the marginal independence φ(X) ζ(Y, Z) is (approximately) equivalent to enforcing φ(X) Z | Y . The construction is straightforward: given a fixed feature map ψ(Y, Z) on Y Z (which may be a kernel or random Fourier feature map), we define ζ(Y, Z) as the conditionally centered features, ζ(Y, Z) = ψ(Y, Z) E[ψ(Y, Z) | Y ]. We obtain a measure of conditional independence, the Conditional Independence Regression Covarianc E (CIRCE), as the Hilbert-Schmidt Norm of the kernel covariance between φ(X) and ζ(Y, Z). A key point is that the conditional feature mean E[ψ(Y, Z) | Y ] can be estimated offline, in advance of any neural network training, using standard methods (Song et al., 2009; Grunewalder et al., 2012; Park & Muandet, 2020; Li et al., 2022). This makes CIRCE a suitable regularizer for any setting where the conditional independence relation φ(X) Z | Y should be enforced when learning φ(X). 
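To make the two-stage construction concrete, here is a toy NumPy sketch (our illustration, not the authors' code): it uses small explicit feature maps in place of kernels and drops the ψ(Y) factor of ψ(Z, Y) for brevity. Stage one fits the conditional feature mean offline by ridge regression on a holdout set; stage two penalizes the covariance between features of X and the conditionally centred features of Z on a mini-batch.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: Y causes Z; X below will depend on Y only, so X _||_ Z | Y.
M, B = 2000, 256                       # holdout size and mini-batch size
y_h = rng.normal(size=M)
z_h = y_h**2 + 0.5 * rng.normal(size=M)

def psi_z(z):                          # small feature map standing in for a kernel feature map
    return np.stack([z, z**2], axis=1)

def feat_y(y):                         # features of Y used by the ridge regression
    return np.stack([np.ones_like(y), y, y**2, y**4], axis=1)

# Stage 1 (offline): ridge regression from features of Y to psi(Z) on the holdout set.
lam = 1e-2
F = feat_y(y_h)
W = np.linalg.solve(F.T @ F + lam * np.eye(F.shape[1]), F.T @ psi_z(z_h))

# Stage 2 (per mini-batch): conditionally centre psi(Z), penalize covariance with phi(X).
y_b = rng.normal(size=B)
z_b = y_b**2 + 0.5 * rng.normal(size=B)
x_b = np.sin(y_b) + 0.1 * rng.normal(size=B)            # X depends on Y only

zeta = psi_z(z_b) - feat_y(y_b) @ W                     # conditionally centred features of Z

def penalty(phi_x):                                     # squared Frobenius norm of the covariance
    C = (phi_x - phi_x.mean(0)).T @ zeta / B
    return np.sum(C**2)

# The first penalty is close to zero; the second (a "shortcut" feature of Z) is much larger.
print("phi(X) using Y only    :", penalty(np.stack([x_b, x_b**2], axis=1)))
print("phi(X) using shortcut Z:", penalty(np.stack([z_b, z_b**2], axis=1)))
```

When φ(X) uses only information that flows through Y, the penalty is near zero; when it exploits Z directly, the penalty is large, which is what makes it usable as a regularizer.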
In particular, the learned relationship between Z and Y does not depend on the mini-batch size, sidestepping the tension between small mini-batches and the need for large samples to estimate conditional dependence. Moreover, when sufficiently expressive features (those corresponding to a characteristic kernel) are employed, CIRCE is zero if and only if $\varphi(X) \perp\!\!\!\perp Z \mid Y$: this result may be of broader interest, for instance in causal structure learning (Zhang et al., 2011) and hypothesis testing (Fukumizu et al., 2008; Shah & Peters, 2020; Huang et al., 2022).

Our paper proceeds as follows: in Section 2, we introduce the relevant characterization of conditional independence from Daudin (1980), followed by our CIRCE criterion; we establish that CIRCE is indeed a measure of conditional independence, and provide a consistent empirical estimate with finite-sample guarantees. Next, in Section 3, we review alternative measures of conditional dependence. Finally, in Section 4, we demonstrate CIRCE in two practical settings: a series of counterfactual-invariance benchmarks due to Quinzan et al. (2022), and image data tasks on which a cheat variable is observed during training.

2 EFFICIENT CONDITIONAL INDEPENDENCE REGULARIZER

We begin by providing a general-purpose characterization of conditional independence. We then introduce CIRCE, a conditional independence criterion based on this characterization, which is zero if and only if conditional independence holds (under certain required conditions). We provide a finite-sample estimate with convergence guarantees, and strategies for efficient estimation from data.

¹If Y is categorical, naively we would measure a marginal independence for each level of Y.

2.1 CONDITIONAL INDEPENDENCE

We begin with a natural definition of conditional independence for real random variables:

Definition 2.1 (Daudin, 1980). X and Z are Y-conditionally independent, $X \perp\!\!\!\perp Z \mid Y$, if for all test functions $g \in L^2_{XY}$ and $h \in L^2_{ZY}$, i.e. for all square-integrable functions of (X, Y) and (Z, Y) respectively, we have almost surely in Y that
$$\mathbb{E}_{XZ}\!\left[g(X, Y)\, h(Z, Y) \mid Y\right] = \mathbb{E}_{X}\!\left[g(X, Y) \mid Y\right]\, \mathbb{E}_{Z}\!\left[h(Z, Y) \mid Y\right]. \tag{1}$$

The following classic result provides an equivalent formulation:

Proposition 2.2 (Daudin, 1980). X and Z are Y-conditionally independent if and only if, for all test functions $g \in E_1 = \{ g \in L^2_{XY} \mid \mathbb{E}_X[g(X, Y) \mid Y] = 0 \}$ and $h \in E_2 = \{ h \in L^2_{ZY} \mid \mathbb{E}_Z[h(Z, Y) \mid Y] = 0 \}$,
$$\mathbb{E}[g(X, Y)\, h(Z, Y)] = 0. \tag{2}$$

Daudin (1980) notes that this condition can be further simplified (see Corollary A.3 for a proof):

Proposition 2.3 (Equation 3.9 of Daudin 1980). X and Z are Y-conditionally independent if and only if, for all $g \in L^2_X$ and $h \in E_2 = \{ h \in L^2_{ZY} \mid \mathbb{E}_Z[h(Z, Y) \mid Y] = 0 \}$,
$$\mathbb{E}[g(X)\, h(Z, Y)] = 0. \tag{3}$$

An equivalent way of writing this last condition (see Lemma B.1 for a formal proof) is: for all $g \in L^2_X$ and $h \in L^2_{ZY}$,
$$\mathbb{E}\!\left[ g(X)\left( h(Z, Y) - \mathbb{E}_{Z'}\!\left[h(Z', Y) \mid Y\right] \right) \right] = 0. \tag{4}$$

The reduction to g not depending on Y is crucial for our method: when we are learning the representation $\varphi(X)$, evaluating the conditional expectations $\mathbb{E}_X[g(\varphi(X), Y) \mid Y]$ from Proposition 2.2 on every mini-batch of gradient descent would require impractically many samples, but $\mathbb{E}_Z[h(Z, Y) \mid Y]$ does not depend on X and so can be pre-computed before training the network.
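This computational point can be seen directly in a small simulation (ours, on toy data where $\mathbb{E}[Z \mid Y = y] = y^2$ exactly): a conditional expectation estimated by kernel ridge regression from a mini-batch of 64 points is much noisier than one estimated once from a large holdout set.

```python
import numpy as np

rng = np.random.default_rng(1)

def krr_conditional_mean(y_train, z_train, y_eval, lengthscale=1.0, lam=1e-2):
    """Kernel ridge regression estimate of E[Z | Y] with a Gaussian kernel."""
    def k(a, b):
        return np.exp(-(a[:, None] - b[None, :]) ** 2 / (2 * lengthscale ** 2))
    K = k(y_train, y_train)
    alpha = np.linalg.solve(K + lam * np.eye(len(y_train)), z_train)
    return k(y_eval, y_train) @ alpha

y_eval = np.linspace(-2, 2, 200)
truth = y_eval ** 2                      # the true conditional mean E[Z | Y = y]

for n, label in [(64, "mini-batch (B=64)"), (5000, "holdout (M=5000)")]:
    y = rng.normal(size=n)
    z = y ** 2 + 0.5 * rng.normal(size=n)
    err = np.mean((krr_conditional_mean(y, z, y_eval) - truth) ** 2)
    print(f"{label:>18}: MSE of estimated E[Z|Y] = {err:.4f}")
```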
2.2 CONDITIONAL INDEPENDENCE REGRESSION COVARIANCE (CIRCE) The characterization (4) of conditional independence is still impractical, as it requires checking all pairs of square-integrable functions g and h. We will now transform this condition into an easy-toestimate measure that characterizes conditional independence, using kernel methods. A kernel k(x, x ) is a symmetric positive-definite function k : X X R. A kernel can be represented as an inner product k(x, x ) = ϕ(x), ϕ(x ) H for a feature vector ϕ(x) H, where H is a reproducing kernel Hilbert space (RKHS). These are spaces H of functions f : X R, with the key reproducing property ϕ(x), f H = f(x) for any f H. For M points we denote KX a row vector of ϕ(xi), such that KXx is an M 1 matrix with k(xi, x) entries and KXX is an M M matrix with k(xi, xj) entries. For two separable Hilbert spaces G, F, a Hilbert-Schmidt operator A : G F is a linear operator with a finite Hilbert-Schmidt norm A 2 HS(G,F) = X j J Agj 2 F , (5) where {gj}j J is an orthonormal basis of G (for finite-dimensional Euclidean spaces, obtained from a linear kernel, A is just a matrix and A HS its Frobenius norm). The Hilbert space HS(G, F) includes in particular the rank-one operators ψ ϕ for ψ F, ϕ G, representing outer products, [ψ ϕ]g = ψ ϕ, g G , A, ψ ϕ HS(G,F) = ψ, A ϕ F . (6) See Gretton (2022, Lecture 5) for further details. We next introduce a kernelized operator which (for RKHS functions g and h) reproduces the condition in (4), which we call the Conditional Independence Regression Covarianc E (CIRCE). Definition 2.4 (CIRCE operator). Let G be an RKHS with feature map ϕ : X G, and F an RKHS with feature map ψ : (Z Y) F, with both kernels bounded: supx ϕ(x) < , supz,y ψ(z, y) < . Let X, Y , and Z be random variables taking values in X, Y, and Z respectively. The CIRCE operator is Cc XZ|Y = E ϕ(X) ψ(Z, Y ) EZ [ψ(Z , Y ) | Y ] HS(G, F). (7) Published as a conference paper at ICLR 2023 For any two functions g G and h F, Definition 2.4 gives rise to the same expression as in (4), D Cc XZ|Y , g h E HS = E [g(X) (h(Z, Y ) EZ [h(Z , Y ) | Y ])] . (8) The assumption that the kernels are bounded in Definition 2.4 guarantees Bochner integrability (Steinwart & Christmann, 2008, Def. A.5.20), which allows us to exchange expectations with inner products as above: the argument is identical to that of Gretton (2022, Lecture 5) for the case of the unconditional feature covariance. For unbounded kernels, Bochner integrability can still hold under appropriate conditions on the distributions over which we take expectations, e.g. a linear kernel works if the mean exists, and energy distance kernels may have well-defined feature (conditional) covariances when relevant moments exist (Sejdinovic et al., 2013). Our goal now is to define a kernel statistic which is zero iff the CIRCE operator Cc XZ|Y is zero. One option would be to seek the functions, subject to a bound such as g G 1 and f F 1, that maximize (8); this would correspond to computing the largest singular value of Cc XZ|Y . For unconditional covariances, the equivalent statistic corresponds to the Constrained Covariance, whose computation requires solving an eigenvalue problem (e.g. Gretton et al., 2005a, Lemma 3). 
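In the finite-dimensional (linear-kernel) case, both options reduce to matrix norms of an empirical cross-covariance; the following small sketch (ours) makes the contrast concrete before we state our choice.

```python
# Spectral norm vs Hilbert-Schmidt (Frobenius) norm of a finite-dimensional
# cross-covariance (our illustration; the feature matrices are toy stand-ins).
import numpy as np

rng = np.random.default_rng(2)
n, dx, dq = 1000, 5, 7
U = rng.normal(size=(n, dx))                 # stand-in for features of X
V = rng.normal(size=(n, dq))                 # stand-in for conditionally centred features of (Z, Y)
V[:, 0] += 0.5 * U[:, 0]                     # introduce some dependence

C = (U - U.mean(0)).T @ (V - V.mean(0)) / n  # empirical cross-covariance matrix

spectral = np.linalg.norm(C, 2)              # largest singular value: needs an SVD/eigen-solve
hs = np.linalg.norm(C, "fro")                # Hilbert-Schmidt norm: a closed-form sum of squares
print(f"spectral norm = {spectral:.3f}, HS norm = {hs:.3f}")
# Both vanish together with C; only the spectral norm requires an optimization/eigen-problem.
```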
We instead follow the same procedure as for unconditional kernel dependence measures, and replace the spectral norm with the Hilbert-Schmidt norm (Gretton et al., 2005b): both are zero when Cc XZ|Y is zero, but as we will see in Section 2.3 below, the Hilbert-Schmidt norm has a simple closed-form empirical expression, requiring no optimization. Next, we show that for rich enough RKHSes G, F (including, for instance, those with a Gaussian kernel), the Hilbert-Schmidt norm of Cc XZ|Y characterizes conditional independence. Theorem 2.5. For G and F with L2-universal kernels (see, e.g., Sriperumbudur et al., 2011), Cc XZ|Y HS = 0 if and only if X Z | Y. (9) The if direction is immediate from the definition of Cc XZ|Y . The only if direction uses the fact that the RKHS is dense in L2, and therefore if (8) is zero for all RKHS elements, it must be zero for all L2 functions. See Appendix B for the proof. Therefore, minimizing an empirical estimate of Cc XZ|Y HS will approximately enforce the conditional independence we need. Definition 2.6. For convenience, we define CIRCE(X, Z, Y ) = Cc XZ|Y 2 HS. In the next two sections, we construct a differentiable estimator of this quantity from samples. 2.3 EMPIRICAL CIRCE ESTIMATE AND ITS USE AS A CONDITIONAL INDEPENDENCE REGULARIZER To estimate CIRCE, we first need to estimate the conditional expectation µZY | Y (y) = EZ [ψ(Z, y) | Y = y ]. We define2 ψ(Z, Y ) = ψ(Z) ψ(Y ), which for radial basis kernels (e.g. Gaussian, Laplace) is L2-universal for (Z, Y ).3 Therefore, µZY | Y (y) = EZ [ψ(Z) | Y = y ] ψ(y) = µZ| Y (y) ψ(y). The CIRCE operator can be written as Cc XZ|Y = E ϕ(X) ψ(Y ) ψ(Z) µZ|Y (Y ) (10) We need two datasets to compute the estimator: a holdout set of size M used to estimate conditional expectations, and the main set of size B (e.g., a mini-batch). The holdout dataset is used to estimate conditional expectation µZY | Y with kernel ridge regression. This requires choosing the ridge parameter λ and the kernel parameters for Y . We obtain both of these using leave-one-out crossvalidation; we derive a closed form expression for the error by generalizing the result of Bachmann et al. (2022) to RKHS-valued labels for regression (see Theorem C.1). 2We abuse notation in using ψ to denote feature maps of (Y, Z), Y, and Z; in other words, we use the argument of the feature map to specify the feature space, to simplify notation. 3Fukumizu et al. (2008, Section 2.2) show this kernel is characteristic, and Sriperumbudur et al. (2011, Figure 1 (3)) that being characteristic implies L2 universality in this case. Published as a conference paper at ICLR 2023 The following theorem defines an empirical estimator of the Hilbert-Schmidt norm of the empirical CIRCE operator, and establishes the consistency of this statistic as the number of training samples B, M increases. The proof and a formal description of the conditions may be found in Appendix C.2 Theorem 2.7. The following estimator of CIRCE for B points and M holdout points (for the conditional expectation): \ CIRCE = 1 B(B 1)Tr KXX KY Y ˆKc ZZ . (11) converges as Op(1/ B + 1/M (β 1)/(2(β+p))), when the regression in Equation (30) is wellspecified. KXX and KY Y are kernel matrices of X and Y ; elements of Kc ZZ are defined as Kc zz = ψ(z) µZ|Y (y), ψ(z ) µZ|Y (y ) ; β (1, 2] characterizes how well-specified the solution is and p (0, 1] describes the eigenvalue decay rate of the covariance operator over Y . The notation Op(A) roughly states that with any constant probability, the estimator is O(A). 
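For concreteness, here is a minimal NumPy sketch (ours, not the authors' released implementation; the kernel bandwidths and ridge parameter are placeholders) of the estimator in (11). The quantities that involve the holdout set are computed once, offline; the per-batch cost is a few kernel products, matching Algorithm 1 below.

```python
import numpy as np

def gauss_kernel(a, b, sigma2):
    d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma2))

def circe_estimate(x, y, z, y_hold, z_hold, lam=0.1, s2x=1.0, s2y=1.0, s2z=1.0):
    """Empirical CIRCE, eq. (11): Tr(K_xx (K_yy o Kc_zz)) / (B (B-1)).

    x, y, z: mini-batch arrays of shape (B, d_*); y_hold, z_hold: holdout arrays of shape (M, d_*).
    """
    B, M = len(x), len(y_hold)
    # Offline part (depends only on the holdout set).
    K_YY = gauss_kernel(y_hold, y_hold, s2y)
    W1 = np.linalg.inv(K_YY + lam * np.eye(M))
    W2 = W1 @ gauss_kernel(z_hold, z_hold, s2z) @ W1
    # Mini-batch part.
    K_xx = gauss_kernel(x, x, s2x)
    K_yy = gauss_kernel(y, y, s2y)
    K_zz = gauss_kernel(z, z, s2z)
    K_yY = gauss_kernel(y, y_hold, s2y)          # batch Y vs holdout Y
    K_Zz = gauss_kernel(z_hold, z, s2z)          # holdout Z vs batch Z
    cross = K_yY @ W1 @ K_Zz                     # <mu_{Z|Y}(y_i), psi(z_j)> entries
    Kc = K_yy * (K_zz - cross - cross.T + K_yY @ W2 @ K_yY.T)
    return np.trace(K_xx @ Kc) / (B * (B - 1))
```

The Hadamard product with K_yy implements the tensor-product feature map ψ(Z, Y) = ψ(Z) ⊗ ψ(Y); Corollaries C.5 and C.6 in the appendix give variants that drop diagonal terms or centre K_xx to reduce the small-sample bias.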
Remark. For the smoothly well-specified case we have β = 2, and for a Gaussian kernel p is arbitrarily close to zero, giving a rate Op(1/ B + 1/M 1/4). The 1/M 1/4 rate comes from conditional expectation estimation, where it is minimax-optimal for the well-specified case (Li et al., 2022). Using kernels whose eigenvalues decay slower than the Gaussian s would slow the convergence rate (see Li et al., 2022, Theorem 2). The algorithm is summarized in Algorithm 2. We can further improve the computational complexity for large training sets with random Fourier features (Rahimi & Recht, 2007); see Appendix D. Algorithm 1 Estimation of CIRCE Holdout data {(zi, yi)}M i=1, mini-batch {(xi, zi, yi)}B i=1 Holdout data Leave-one-out (Theorem C.1) for λ (ridge parameter) and σy (parameters of Y kernel): λ, σy = arg min PM i=1 ψ(zi) Kyi Y (KY Y +λI) 1KZ 2 Hz (1 (KY Y (KY Y +λ I) 1)ii) 2 W1 = (KY Y + λI) 1 , W2 = W1KZZW1 Mini-batch Compute kernel matrices Kxx, Kyy, Ky Y , Ky Z (x, y, z: mini-batch, Y, Z: holdout) ˆKc = Kyy Kzz Ky Y W1KZz (Ky Y W1KZz) + Ky Y W2KY y CIRCE = 1 B(B 1)Tr Kxx ˆKc We can use of our empirical CIRCE as a regularizer for conditionally independent regularization learning, where the goal is to learn representations that are conditionally independent of a known distractor Z. We switch from X to an encoder φθ(X). If the task is to predict Y using some loss L(φθ(X), Y ), the CIRCE regularized loss with the regularization weight γ > 0 is as follows: min θ L(φθ(X), Y ) + γ CIRCE(φθ(X), Z, Y ) . (12) 3 RELATED WORK We review prior work on kernel-based measures of conditional independence to determine or enforce X Z| Y, including those measures we compare against in our experiments in Section 4. We begin with procedures based on kernel conditional feature covariances. The conditional kernel cross-covariance was first introduced as a measure of conditional dependence by Sun et al. (2007). Following this work, a kernel-based conditional independence test (KCI) was proposed by Zhang et al. (2011). The latter test relies on satisfying Proposition 2.2 leading to a statistic4 that requires regression of φ(X) on Y in every minibatch (as well as of Z on Y , as in our setting). More 4The conditional-independence test statistic used by KCI is 1 B Tr K X|Y KZ|Y , where X = (X, Y ) and K is a centered kernel matrix. Unlike CIRCE, K X|Y requires regressing X on Y using kernel ridge regression. Published as a conference paper at ICLR 2023 recently, Quinzan et al. (2022) introduced a variant of the Hilbert-Schmidt Conditional Independence Criterion (HSCIC; Park & Muandet, 2020) as a regularizer to learn a generalized notion of counterfactually-invariant representations (Veitch et al., 2021). Estimating HSCIC(X, Z|Y ) from finite samples requires estimating the conditional mean-embeddings µX,Z|Y , µX|Y and µZ|Y via regressions (Grunewalder et al., 2012). HSCIC requires three times as many regressions as CIRCE, of which two must be done online in minibatches to account for the conditional cross-covariance terms involving X. We will compare against HSCIC in experiements, being representative of this class of methods, and having been employed successfully in a setting similar to ours. Alternative measures of conditional independence make use of additional normalization over the measures described above. The Hilbert-Schmidt norm of the normalized cross-covariance was introduced as a test statistic for conditional independence by Fukumizu et al. 
(2008), and was used for structure identification in directed graphical models. Huang et al. (2022) proposed using the ratio of the maximum mean discrepancy (MMD) between PX|ZY and PX|Y , and the MMD between the Dirac measure at X and PX|Y , as a measure of the conditional dependence between X and Z given Y . The additional normalization terms in these statistics can result in favourable asymptotic properties when used in statistical testing. This comes at the cost of increased computational complexity, and reduced numerical stability when used as regularizers on minibatches. Another approach, due to Shah & Peters (2020), is the Generalized Covariance Measure (GCM). This is a normalized version of the covariance between residuals from kernel-ridge regressions of X on Y and Z on Y (in the multivariate case, a maximum over covariances between univariate regressions is taken). As with the approaches discussed above, the GCM also involves multiple regressions one of which (regressing X on Y ) cannot be done offline. Since the regressions are univariate, and since GCM simply regresses Z and X on Y (instead of ψ(Z, Y ) and ϕ(X) on Y ), we anticipate that GCM might provide better regularization than HSCIC on minibatches. This comes at a cost, however, since by using regression residuals rather than conditionally centered features, there will be instances of conditional dependence that will not be detectable. We will investigate this further in our experiments. 4 EXPERIMENTS We conduct experiments addressing two settings: (1) synthetic data of moderate dimension, to study effectiveness of CIRCE at enforcing conditional independence under established settings (as envisaged for instance in econometrics or epidemiology); and (2) high dimensional image data, with the goal of learning image representations that are robust to domain shifts. We compare performance over all experiments with HSCIC (Quinzan et al., 2022) and GCM (Shah & Peters, 2020). 4.1 SYNTHETIC DATA Figure 1: Causal structure for synthetic datasets. We first evaluate performance on the synthetic datasets proposed by Quinzan et al. (2022): these use the structural causal model (SCM) shown in Figure 1, and comprise 2 univariate and 2 multivariate cases (see Appendix E for details). Given samples of A, Y and Z, the goal is to learn a predictor ˆB = φ(A, Y, Z) that is counterfactually invariate to Z. Achieving this requires enforcing conditional independence φ(A, Y, Z) Z|Y . For all experiments on synthetic data, we used a fully connected network with 9 hidden layers. The inputs of the network were A, Y and Z. The task is to predict B and the network is learned with the MSE loss. For each test case, we generated 10k examples, where 8k were used for training and 2k for evaluation. Data were normalized with zero mean and unit standard deviation. The rest of experimental details is provided in Appendix E. We report in-domain MSE loss, and measure the level of counterfactual invariance of the predictor using the VCF (Quinzan et al., 2022, eq. 4; lower is better). Given X = (A, Y, Z), VCF := Ex X h Vz Z h E ˆ B z |X h ˆB|X = x iii . (13) P ˆ B z |X is the counterfactual distribution of ˆB given X = x and an intervention of setting z to z . Published as a conference paper at ICLR 2023 Univariate Cases Table 1 summarizes the in-domain MSE loss and VCF comparing CIRCE to baselines. Without regularization, MSE loss is low in-domain but the representation is not invariant to changes of Z. 
With regularization, all three methods successfully achieve counterfactual invariance in these simple settings, and exhibit similar in-domain performance. Case No Reg GCM HSCIC CIRCE MSE VCF MSE VCF MSE VCF MSE VCF 1 2.03e-4 0.180 0.198 2.59e-06 0.197 2.08e-11 0.197 8.77e-08 2 0.027 0.258 1.169 9.07e-07 1.168 3.08e-11 1.168 7.37e-11 Table 1: MSE loss and VCF for univariate synthetic datasets. Comparison of representation without conditional independence regularization against regularization with GCM, HSCIC and CIRCE. Multivariate Cases We present results on 2 multivariate cases: case 1 has high dimensional Z and case 2 has high dimensional Y . For each multivariate case, we vary the number of dimensions d = {2, 5, 10, 20}. To visualize the trade-offs between in-domain performance and invariant representation, we plot the Pareto front of MSE loss and VCF. With high dimensional Z (Figure 2A), CIRCE and HSCIC have a similar trade-off profile, however it is notable that GCM needs to sacrifice more in-domain performance to achieve the same level of invariance. This may be because the GCM statistic is a maximum over normalized covariances of univariate residuals, which can be less effective in a multivariate setting. For high dimensional Y (Figure 2B), the regression from Y to ψ(Z) is much harder. We observe that HSCIC becomes less efficient with increasing d until at d = 20 it fails completely, while GCM still sacrifices more in-domain performance than CIRCE. Figure 2: Pareto front of MSE and VCF for multivariate synthetic dataset. A: case 1; B: case 2. 4.2 IMAGE DATA Figure 3: Causal structure for d Sprites and Yale-B. Dashed line denotes a non-causal association between nodes. We next evaluate our method on two high-dimensional image datasets: d-Sprites (Matthey et al. (2017)) which contains images of 2D shapes generated from six independent latent factors; and the Extended Yale-B Face dataset 5(Georghiades et al. (2001)) of faces of 28 individuals under varying camera poses and illumination. We use both datasets with the causal graph in Figure 3 where the image X is directly caused by the target variable Y and a distractor Z. There also exists a strong non-causal association between Y and Z in the training set (denoted by the dashed edge). The basic setting is as follows: for the in-domain (train) samples, the observed Y and Z are correlated through the true Y as Y PY , ξz N(0, σz) , Z = β(Y ) + ξz , (14) Y = Y + ξy , ξy N(0, σy) , Z = fz(Y, Z, ξz) , X = fx(Y , Z ) . (15) 5Google and Deep Mind do not have access or handle the Yale-B Face dataset. Published as a conference paper at ICLR 2023 Y and Z are observed; fz is the structural equation for Z (in the simplest case Z = Z); fx is the generative process of X. Y and Z represent noise added during generation and are unobserved. A regular predictor would take advantage of the association β between Z and Y during training, since this is a less noisy source of information on Y . For unseen out-of-distribution (OOD) regime, where Y and Z are uncorrelated, such solution would be incorrect. Therefore, our task is to learn a predictor ˆY =φ(X) that is conditionally independent of Z: φ(X) Z| Y , so that during the OOD/testing phase when the association between Y and Z ceases to exist, the model performance is not harmed as it would be if φ(X) relied on the shortcut Z to predict Y . For all image experiments we use the Adam W (Loshchilov & Hutter (2019)) optimizer and anneal the learning rate with a cosine scheduler (details in Appendix F). 
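As a sketch of how the regularized objective (12) is optimized in these experiments (our illustration: the encoder, data, and `circe_penalty` stand-in below are placeholders for the real network, image batches, and the kernelized estimator of Section 2.3):

```python
import torch
import torch.nn as nn

def circe_penalty(feats, z, y):
    # Placeholder: squared covariance between features and Z.  Substitute the
    # kernelized CIRCE estimate (offline holdout regression + batch kernels);
    # y is unused by this stand-in but is required by the real estimator.
    f = feats - feats.mean(0, keepdim=True)
    zc = z - z.mean(0, keepdim=True)
    return ((f.T @ zc) / len(z)).pow(2).sum()

encoder = nn.Sequential(nn.Linear(10, 64), nn.ReLU(), nn.Linear(64, 16))
head = nn.Linear(16, 1)
params = list(encoder.parameters()) + list(head.parameters())

opt = torch.optim.AdamW(params, lr=1e-3, weight_decay=1e-4)
sched = torch.optim.lr_scheduler.CosineAnnealingLR(opt, T_max=200)
gamma = 10.0                                   # regularization strength, swept in Figures 4-6

for step in range(200):
    x = torch.randn(256, 10)                   # toy batch; in the paper, images
    y = x[:, :1] + 0.1 * torch.randn(256, 1)
    z = y + 0.1 * torch.randn(256, 1)          # correlated distractor

    feats = encoder(x)
    loss = nn.functional.mse_loss(head(feats), y) + gamma * circe_penalty(feats, z, y)
    opt.zero_grad()
    loss.backward()
    opt.step()
    sched.step()
```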
We select the hyper-parameters of the optimizer and scheduler via a grid search to minimize the in-domain validation set loss. 4.2.1 DSPRITES Of the six independent generative factors in d-Sprites, we choose the y-coordinate of the object as our target Y and the x-coordinate of the object in the image as our distractor variable Z. Our neural network consists of three convolutional layers interleaved with max pooling and leaky Re LU activations, followed by three fully-connected layers with 128, 64, 1 unit(s) respectively. Linear dependence We sample images from the dataset as per the linear relation Z = Z = Y +ξz. We then translate all sampled images (both in-domain and OOD) vertically by ξy, resulting in an observed object coordinate of (Z, Y + ξy). In this case, linear residual methods, such as GCM, are able to sufficiently handle the dependence as the residual Z E [Z | Y ] = ξz is correlated with Z which is the observed x-coordinate. As a result, penalizing the cross-covariance between φ(X) E [φ(X) | Y ] and Z E [Z | Y ] will also penalize the network s dependence on the observed x-coordinate to predict Y . 0 100 101 102 103 104 regularization strength in-domain OOD trained on OOD 0 101 102 103 regularization strength in-domain OOD trained on OOD 0 10 2 10 1 regularization strength in-domain OOD trained on OOD Figure 4: d Sprites (linear). Blue: in-domain test loss; orange: out-of-domain loss (OOD); red: loss for OOD-trained encoder. Solid lines: median over 10 seeds; shaded areas: min/max values. In Figure 4 we plot the in-domain and OOD losses over a range of regularization strengths and demonstrate that indeed GCM is able to perform quite well with a linear function relating Z to Y . CIRCE is comparable to GCM with strong regularization and outperforms HSCIC. To get the optimal OOD baseline we train our network on an OOD training set where Y and Z are uncorrelated. Non-linear dependence To demonstrate the limitation of GCM, which simply regresses Z on Y instead of ψ(Z, Y ) on Y , we next address a more complex nonlinear dependence β(Y ) = 0 and Z = Y + α Z2. The observed coordinate of the object in the image is (Y + αξ2 z, Y + ξy) . For a small α, the unregularized network will again exploit the shortcut, i.e. the observed x-coordinate, in order to predict Y . The linear residual, if we don t use features of Z, is Z E [Z | Y ] = ξz, which is uncorrelated with Y + αξ2 z, because E [ξ3 z] = 0 due to the symmetric and zero-mean distribution of ξz. As a result, penalizing cross-covariance with the linear residual (as done by GCM) will not penalize solutions that use the observed x-coordinate to predict Y . Whereas CIRCE which uses a feature map ψ(Z) can capture higher order features. Results are shown in Figure 5: we see again that CIRCE performs best, followed by HSCIC, with GCM doing poorly. Curiously, GCM performance does still improve slightly on OOD data as regularization increases - we conjecture that the encoder φ(X) may extract non-linear features of the coordinates. However, GCM is numerical unstable for large regularization weights, which might arise from combining a ratio normalization and a max operation in the statistic. Published as a conference paper at ICLR 2023 0 100 101 102 103 regularization strength in-domain OOD trained on OOD 0 101 102 103 regularization strength in-domain OOD trained on OOD 0 10 2 10 1 100 regularization strength in-domain OOD trained on OOD Figure 5: d Sprites (non-linear). 
Blue: in-domain test loss; orange: out-of-domain loss (OOD); red: loss for OOD-trained encoder. Solid lines: median over 10 seeds; shaded areas: min/max values. 4.2.2 EXTENDED YALE-B Finally, we evaluate CIRCE as a regressor for supervised tasks on the natural image dataset of Extended Yale-B Faces. The task here is to estimate the camera pose Y from image X while being conditionally independent of the illumination Z which is represented as the azimuth angle of the light source with respect to the subject. Since, these are natural images, we use the Res Net-18 (He et al., 2016) model pre-trained on Image Net (Deng et al., 2009) to extract image features, followed by three fully-connected layers containing 128, 64 and 1 unit(s) respectively. Here we sample the training data according to the non-linear relation Z = Z = 0.5(Y + εY 2), where ε is either +1 or 1 with equal probability. In this case E [Z | Y ] = 0.5Y + 0.5Y 2 E [ε | Y ] = 0.5Y, and thus the linear residuals depend on Y . (In experiments, Y and ε are re-scaled to be in the same range. We avoid it here for simplicity.) Note that GCM can in principle find the correct solution using a linear decoder. Results are shown in Figure 6. CIRCE shows a small advantage over HSCIC in OOD performance for the best regularizer choice. GCM suffers from numerical instability in this example, which leads to poor performance. 0 101 102 103 regularization strength in-domain OOD trained on OOD 0 101 102 103 regularization strength in-domain OOD trained on OOD 0 10 2 10 1 regularization strength in-domain OOD trained on OOD Figure 6: Yale-B. Blue: in-domain test loss; orange: out-of-domain loss (OOD); red: loss for OODtrained encoder. Solid lines: median over 10 seeds; shaded areas: min/max values. 5 DISCUSSION We have introduced CIRCE: a kernel-based measure of conditional independence, which can be used as a regularizer to enforce conditional independence between a network s predictions and a pre-specified variable with respect to which invariance is desired. The technique can be used in many applications, including fairness, domain invariant learning, and causal representation learning. Following an initial regression step (which can be done offline), CIRCE enforces conditional independence via a marginal independence requirement during representation learning, which makes it well suited to minibatch training. By contrast, alternative conditional independence regularizers require an additional regression step on each minibatch, resulting in a higher variance criterion which can be less effective in complex learning tasks. As future work, it will be of interest to determine whether or not CIRCE is statistically significant on a given dataset, so as to employ it as a statistic for a test of conditional dependence. Published as a conference paper at ICLR 2023 ACKNOWLEDGMENTS This work was supported by Deep Mind, the Gatsby Charitable Foundation, the Wellcome Trust, the Canada CIFAR AI Chairs program, the Natural Sciences and Engineering Resource Council of Canada, SHARCNET, Calcul Qu ebec, the Digital Resource Alliance of Canada, and Open Philanthropy. Finally, we thank Alexandre Drouin and Denis Therien for the Bellairs Causality workshop which sparked the project. Gregor Bachmann, Thomas Hofmann, and Aur elien Lucchi. Generalization through the lens of leave-one-out error. In ICLR, 2022. Alex Beutel, Jilin Chen, Tulsee Doshi, Hai Qian, Allison Woodruff, Christine Luu, Pierre Kreitmann, Jonathan Bischof, and Ed H. Chi. 
Putting fairness principles into practice: Challenges, metrics, and improvements. In AIES, 2019. JJ Daudin. Partial association measures and an application to qualitative regression. Biometrika, 67 (3):581 590, 1980. Jia Deng, Wei Dong, Richard Socher, Li-Jia Li, Kai Li, and Li Fei-Fei. Image Net: A large-scale hierarchical image database. In CVPR, pp. 248 255, 2009. Simon Fischer and Ingo Steinwart. Sobolev norm learning rates for regularized least-squares algorithms. JMLR, 21:205 1, 2020. K. Fukumizu, A. Gretton, X. Sun, and B. Sch olkopf. Kernel measures of conditional dependence. In Neur IPS, 2008. A.S. Georghiades, P.N. Belhumeur, and D.J. Kriegman. From few to many: Illumination cone models for face recognition under variable lighting and pose. IEEE T-PAMI, 23(6):643 660, 2001. Karan Goel, Albert Gu, Yixuan Li, and Christopher R e. Model patching: Closing the subgroup performance gap with data augmentation. In ICLR, 2021. A. Gretton. Introduction to RKHS, and some simple kernel algorithms. Lecture Notes, Gatsby Computational Neuroscience Unit, 2022. URL http://www.gatsby.ucl.ac.uk/ gretton/coursefiles/rkhscourse.html. A. Gretton, R. Herbrich, A. J. Smola, O. Bousquet, and B. Sch olkopf. Kernel methods for measuring independence. JMLR, 6:2075 2129, 2005a. Arthur Gretton, Olivier Bousquet, Alex Smola, and Bernhard Sch olkopf. Measuring statistical dependence with Hilbert-Schmidt norms. In ALT, pp. 63 77, 2005b. S. Grunewalder, G. Lever, L. Baldassarre, S. Patterson, A. Gretton, and M. Pontil. Conditional mean embeddings as regressors. In ICML, 2012. Kaiming He, X. Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. CVPR, pp. 770 778, 2016. Zhen Huang, Nabarun Deb, and Bodhisattva Sen. Kernel partial correlation coefficient a measure of conditional dependence. JMLR, 23(216):1 58, 2022. Yibo Jiang and Victor Veitch. Invariant and transportable representations for anti-causal domain shifts, 2022. Diederik P Kingma and Jimmy Ba. Adam: A method for stochastic optimization. In ICLR, 2015. I. Klebanov, I. Schuster, and T.J. Sullivan. A rigorous theory of conditional mean embeddings. SIAM Journal on Mathematics of Data Science, 2(3):583 606, 2020. Published as a conference paper at ICLR 2023 Zhu Li, Dimitri Meunier, Mattes Mollenhauer, and Arthur Gretton. Optimal rates for regularized conditional mean embedding learning. ar Xiv preprint ar Xiv:2208.01711, 2022. Mingsheng Long, Zhangjie Cao, Jianmin Wang, and Michael I Jordan. Conditional adversarial domain adaptation. In Neur IPS, volume 31, 2018. Ilya Loshchilov and Frank Hutter. Decoupled weight decay regularization. In ICLR, 2019. Maggie Makar, Ben Packer, Dan Moldovan, Davis Blalock, Yoni Halpern, and Alexander D Amour. Causally motivated shortcut removal using auxiliary labels. In AISTATS, 2022. Loic Matthey, Irina Higgins, Demis Hassabis, and Alexander Lerchner. dsprites: Disentanglement testing sprites dataset, 2017. URL https://github.com/deepmind/ dsprites-dataset/. Colin Mc Diarmid. On the method of bounded differences. Surveys in combinatorics, 141(1):148 188, 1989. Ninareh Mehrabi, Fred Morstatter, Nripsuta Saxena, Kristina Lerman, and Aram Galstyan. A survey on bias and fairness in machine learning. ACM Comput. Surv., 54(6), 2021. M. Mollenhauer and P. Koltai. Nonparametric approximation of conditional expectation operators. ar Xiv preprint ar Xiv:2012.12917, 2020. Junhyung Park and Krikamol Muandet. A measure-theoretic approach to kernel conditional mean embeddings. In Neur IPS, 2020. 
Aahlad Manas Puli, Lily H Zhang, Eric Karl Oermann, and Rajesh Ranganath. Out-of-distribution generalization in the presence of nuisance-induced spurious correlations. In ICLR, 2022. Francesco Quinzan, Cecilia Casolo, Krikamol Muandet, Niki Kilbertus, and Yucen Luo. Learning counterfactually invariant predictors. ar Xiv preprint ar Xiv:2207.09768, 2022. Ali Rahimi and Benjamin Recht. Random features for large-scale kernel machines. In Neur IPS, 2007. D. Sejdinovic, B. Sriperumbudur, A. Gretton, and K. Fukumizu. Equivalence of distance-based and rkhs-based statistics in hypothesis testing. Annals of Statistics, 41(5):2263 2702, 2013. Rajen D Shah and Jonas Peters. The hardness of conditional independence testing and the generalised covariance measure. The Annals of Statistics, 48(3):1514 1538, 2020. L. Song, J. Huang, A. J. Smola, and K. Fukumizu. Hilbert space embeddings of conditional distributions. In ICML, 2009. B. Sriperumbudur, K. Fukumizu, and G. Lanckriet. Universality, characteristic kernels and RKHS embedding of measures. JMLR, 12:2389 2410, 2011. Ingo Steinwart and Andreas Christmann. Support Vector Machines. Information Science and Statistics. Springer, 2008. X. Sun, D. Janzing, B. Sch olkopf, and K. Fukumizu. A kernel-based causal learning algorithm. In ICML, pp. 855 862, 2007. Remi Tachet des Combes, Han Zhao, Yu-Xiang Wang, and Geoffrey J Gordon. Domain adaptation with conditional distribution matching and generalized label shift. Neur IPS, 2020. Victor Veitch, Alexander D Amour, Steve Yadlowsky, and Jacob Eisenstein. Counterfactual invariance to spurious correlations in text classification. In Neur IPS, 2021. Zihao Wang and Victor Veitch. A unified causal view of domain invariant representation learning. In ICML Workshop on Spurious Correlations, Invariance and Stability, 2022. Kun Zhang, Jonas Peters, Dominik Janzing, and Bernhard Sch olkopf. Kernel-based conditional independence test and application in causal discovery. In UAI, 2011. Published as a conference paper at ICLR 2023 A CONDITIONAL INDEPENDENCE DEFINITIONS We first repeat the proof of the main theorem in Daudin 1980, as the missing proofs we need for the alternative definitions of independence rely on the main one. Theorem A.1 (Theorem 1 of Daudin 1980). Define E1 = {g : g L2 XY , E [g | Y ] = 0}, E2 = {h : h L2 Y Z, E [h | Y ] = 0}. Then, the following two conditions are equivalent: E [g1h1] = 0 g1 E1, h1 E2 , E [gh | Y ] = E [g | Y ] E [h | Y ] g L2 XY , h L2 Y Z . Proof. Necessary condition: E [gh | Y ] = E [g | Y ] E [h | Y ] = E[g1h1] = 0 Because E1 L2 XY and E2 L2 Y Z, for g1 E1 and h1 E2 we have E [g1h1 | Y ] = E [g1 | Y ] E [h1 | Y ] = 0 = E[g1h1] = EY [E [g1h1 | Y ]] = 0 . Sufficient condition: E[g1h1] = 0 = E [gh | Y ] = E [g | Y ] E [h | Y ] Let g = g E [g | Y ] where g L2 XY and h = h E [h | Y ] where h L2 XY . Then, g E1 and h E2 E[g h ] = E [(g E [g | Y ])(h E [h | Y ])] = E [gh h E [g | Y ] g E [h | Y ] + E [g | Y ] E [h | Y ]] = EY [E [(gh h E [g | Y ] g E [h | Y ] + E [g | Y ] E [h | Y ]) | Y ]] = EY [E [gh | Y ] E [g | Y ] E [h | Y ]] = 0 . (16) Let B be a Borel set of the image space of Y , g = g IB where IB is an indicator function of B. We have R g 2d P = R g2IBd P = R B g2d P R g2d P < , therefore g L2 XY . Using Equation (16), EY [E [g h | Y ] E [g | Y ] E [h | Y ]] = EY [E [gh IB | Y ] E [g IB | Y ] E [h | Y ]] B E [gh | Y ] d P Z B E [g | Y ] E [h | Y ] d P = 0 So E [gh | Y ] = E [g | Y ] E [h | Y ] almost surely. Corollary A.2 (Equation 3.8 of Daudin 1980). 
The following two conditions are equivalent: E [gh1] = 0 g L2 XY , h1 E2 , E [gh | Y ] = E [g | Y ] E [h | Y ] g L2 XY , h L2 Y Z . Proof. Necessary condition is identical to the previous proof. Sufficient condition: E[gh1] = 0 = E [gh | Y ] = E [g | Y ] E [h | Y ] Let h = h E [h | Y ] where h L2 Y Z, then h E2 E[gh ] = E[g(h E [h | Y ])] = E[gh g E [h | Y ]] = EY [E [(gh g E [h | Y ]) | Y ]] = EY [E [gh | Y ] E [g E [h | Y ] | Y ]] = EY [E [gh | Y ] E [g | Y ] E [h | Y ]] = 0 . Using the same argument as for Theorem A.1, E [gh | Y ] = E [g | Y ] E [h | Y ] almost surely. Published as a conference paper at ICLR 2023 Corollary A.3 (Equation 3.9 of Daudin 1980). The following two conditions are equivalent: E [g h1] = 0 g L2 X, h1 E2 , E [gh | Y ] = E [g | Y ] E [h | Y ] g L2 XY , h L2 Y Z . Proof. Necessary condition: As E2 L2 Y Z and L2 X L2 XY , E [g h1 | Y ] = E [g | Y ] E [h1 | Y ] = 0 . Sufficient condition: E[g h1] = 0 = E [gh | Y ] = E [g | Y ] E [h | Y ] Take a simple function ga = Pn i=1 ai IAi for an integrable Borel set Ai in XY . As integrable simple functions are dense in L2 XY , we only need to prove the condition for all ga. In our case, the indicator function decomposes as IAi = IAX i IAY i , and therefore for gi = ai IAX i i gi IAY i . E[gah1] = E i=1 IAY i E [gih1 | Y ] i=1 IAY i 0 As simple functions are dense in L2 XY , we immediately have E[gh1] = 0 g L2 XY , h1 E2. Applying Corollary A.2 concludes the proof. B CIRCE DEFINITION First, we need a more convenient function class: Lemma B.1. The function class E2 = h L2 ZY , E [h | Y ] = 0 coincides with the function class E 2 = h = h E [h | Y ] , h L2 ZY . Proof. E2 E 2: any h E2 is in L2 ZY and has the form h = h E [h | Y ] by construction because the last term is zero. E 2 E2: first, any h E 2 satisfies E [h | Y ] = 0 by construction. Second, Z (h )2 dµ(Z, Y ) = Z (h E [h | Y ])2 dµ(Z, Y ) (17) = Z h2 2 h E [h | Y ] + (E [h | Y ])2 dµ(Z, Y ) (18) = Z h2 (E [h | Y ])2 dµ(Z, Y ) < + , (19) as h L2 ZY and the second term is non-positive. Proof of Theorem 2.5. For the if direction, we simply pull out the Y expectation in the definition of the CIRCE operator and apply conditional independence: Cc XZ|Y = EY h EX[ϕ(X) | Y ] EZ[ψ(Z, Y ) | Y ] EZ [ψ(Z , Y ) | Y ] For the other direction, first, Cc XQ HS = 0 implies that for any g G and h F, E [g (h E [h | Y ])] = 0 (20) Published as a conference paper at ICLR 2023 by Cauchy-Schwarz. Now, we use that an L2-universal kernel is dense in L2 by definition (see Sriperumbudur et al. (2011)). Therefore, for any g L2 X and h L2 ZY , for any ϵ > 0 we can find gϵ G and hϵ F such that g gϵ 2 ϵ, h hϵ 2 ϵ . (21) For the L2 function, we can now write the conditional independence condition as E [g (h E [h | Y ])] = E [(g gϵ) (h hϵ E [h hϵ | Y ])] (22) = 0 + E [(g gϵ) (h hϵ E [h hϵ | Y ])] (23) + E [gϵ (h hϵ E [h hϵ | Y ])] E [(g gϵ) (hϵ E [hϵ | Y ])] . (24) The first term is zero because Cc XQ HS = 0. For the rest, we need to apply Cauchy-Schwarz: E [(g gϵ) (h hϵ)] g gϵ 2 h hϵ 2 ϵ2 (25) E [(g gϵ) (E [h hϵ | Y ])] g gϵ 2 h hϵ 2 ϵ2 , (26) where in the last inequality we used that E h (E [X | H ])2i E X2 for conditional expectations. Similarly, also using the reverse triangle inequality, E [gϵ (h hϵ)] ϵ gϵ 2 ϵ ( g 2 + ϵ) . (27) Repeating this calculation for the rest of the terms, we can finally apply the triangle inequality to show that |E [g (h E [h | Y ])]| 2 ϵ2 + 2 ϵ ( g 2 + ϵ) + 2 ϵ ( h 2 + ϵ) (28) = 2 ϵ (3 ϵ + g 2 + h 2) . 
(29) As g 2 and h 2 are fixed and finite, we can make the bound arbitrary small, and hence E [g (h E [h | Y ])] = 0. C PROOFS FOR ESTIMATORS C.1 ESTIMATING THE CONDITIONAL MEAN EMBEDDING We will construct an estimate of the term EZ [ψ(Z, Y ) | Y ] that appears inside CIRCE, as a function of Y . We summarize the established results on conditional feature mean estimation: see (Grunewalder et al., 2012; Park & Muandet, 2020; Mollenhauer & Koltai, 2020; Klebanov et al., 2020; Li et al., 2022) for further details. To learn E [ψ(Q) | Y ] for some feature map ψ(q) HQ and random variable Q (both to be specified shortly), we can minimize the following loss: ˆµQ|Y,λ(y) = arg min F GQY i=1 ψ(qi) F(yi) 2 HQ + λ F 2 GQY , (30) where GQY is the space of functions from Y to HQ. The above solution is said to be well-specified when there exists a Hilbert-Schmidt operator A HS(HY , HQ) such that F (y) = A ψ(y) for all y Y, where HY is the RKHS on Y with feature map ψ(y) (Li et al., 2022). We now consider the case relevant to our setting, where Q := (Z, Y ). We define6 ψ(Z, Y ) = ψ(Z) ψ(Y ), which for radial basis kernels (e.g. Gaussian, Laplace) is L2-universal for (Z, Y ).7 We then write EZ [ψ(Z, y) | Y = y ] = EZ [ψ(Z) | Y = y ] ψ(y). The conditional feature mean E [ψ(Z) | Y ] can be found with kernel ridge regression (Grunewalder et al., 2012; Li et al., 2022): µZ| Y (y) E [ψ(Z) | Y ] (y) Ky Y (KY Y + λI) 1 KZ (31) 6We abuse notation in using ψ to denote feature maps of (Y, Z), Y, and Z; in other words, we use the argument of the feature map to specify the feature space, to simplify notation. 7Fukumizu et al. (2008, Section 2.2) show this kernel is characteristic, and Sriperumbudur et al. (2011, Figure 1 (3)) that being characteristic implies L2 universality in this case. Published as a conference paper at ICLR 2023 where KZ indicates a matrix with rows ψ(zi), (KY Y )i,j = k(yi, yj), and (Ky Y )i = k(y, yi). Note that we have used the argument of k to identify which feature space it pertains to i.e., the kernel on Z need not be the same as that on Y . We can find good choices for the Y kernel and the ridge parameter λ by minimizing the leave-oneout cross-validation error. In kernel ridge regression, this is almost computationally free, based on the following version of a classic result for scalar-valued ridge regression. The proof generalizes the proof of Theorem 3.2 of Bachmann et al. (2022) to RKHS-valued outputs. Theorem C.1 (Leave-one-out for kernel mean embeddings). Denote the predictor trained on the full dataset as FS, and the one trained without the i-th point as F i. For λ > 0 and A KY Y (KY Y + λ I) 1, the leave-one-out (LOO) error for Equation (30) is i=1 ψ(zi) F i(yi) 2 HZ = 1 ψ(zi) FS(yi) 2 HZ (1 Aii)2 . (32) Proof. Denote the full dataset S = {(yi, zi)}M i=1; the dataset missing the i-th point is denoted S i. Prediction on the full dataset takes the form F(Y ) = AKZ . Consider the prediction obtained without the M-th point (w.l.o.g.) but evaluated on y M: F M(y M). Define a new dataset Z = S M {(y M, F M(y M))} and compute the loss for it: i=1 ψ(zi) F M(yi) 2 HZ + F M(y M) F M(y M) 2 HZ + λ F M 2 GZY (33) = LS M (F M) LS M (F) LS M (F) + F M(y M) F(y M) 2 HZ LZ(F) , (34) where the first inequality is due to F M minimizing LS M . Therefore, F M also minimizes LZ. As A in the prediction expression FS(Y ) = AKZ depends only on Y , and not on Z, F M has to have the same form as the full prediction: F M(Y ) = AK Z , K zi, = ψ(zi), i < M , F M(y M), i = M . 
(35) This allows us to solve for F M(y M): F M(y M) = Ky MY (KY Y + λ I) 1 K Z = i=1 AMiψ(zi) (36) i=1 AMiψ(zi) + AMMψ(zi) AMMψ(z M) (37) i=1 AMiψ(zi) AMMψ(z M) + AMMψ(zi) (38) = FS(y M) AMMψ(z M) + AMMψ(zi) (39) = FS(y M) AMMψ(z M) + AMMF M(y M) . (40) As AMM is a scalar, we can solve for F M(y M): F M(y M) = FS(y M) AMMψ(z M) ψ(z M) F M(y M) = (1 AMM)ψ(z M) FS(y M) + AMMψ(z M) = ψ(z M) FS(y M) 1 AMM . (43) Taking the norm and summing this result over all points (not just M) gives the LOO error. Published as a conference paper at ICLR 2023 C.2 CIRCE ESTIMATORS Lemma C.2. For B points and Kc zz = ψ(z) E [Z | Y ] (y), ψ(z ) E [Z | Y ] (y ) , the CIRCE estimator \ Cc XZ|Y 2 HS = 1 B(B 1)Tr (KXX(KY Y Kc ZZ)) (44) has O(1/B) bias and Op(1/ B) deviation from the mean for any fixed probability of the deviation. Proof. The bias is straightforward: 1 B(B 1) E [Tr (KXX(KY Y Kc ZZ))] = 1 B(B 1) E i,j =i Kxixj Kyiyj Kc zizj + 1 B(B 1) E i Kxixi Kyiyi Kc zizi i,j =i Exx yy zz [Kxx Kyy Kc zz ] + O 1 = Cc XQ 2 HS + O 1 For the variance, first note that our estimator has bounded differences. Denote KT T = KY Y Kc ZZ and t = (y, z), if we switch one datapoint (xi, ti) to (x i, t i) and denote the vectors with switch coordinates as Xi, T i |Tr (KXXKT T ) Tr (KXi Xi KT i T i)| Kxixi Ktiti Kx ix i Kt it i + 2 X Kxjxi Ktjti Kxjx i Ktjt i (2 + 4(B 1))Kx max Kt max (4B 2)Kx max Ky max Kc z max . Therefore, for any index i 1 B(B 1) |Tr (KXX (KY Y Kc ZZ)) Tr (KXi Xi (KY i Y i Kc Zi Zi))| B(B 1)Kx max Ky max Kc z max. We can now use Mc Diarmid s inequality (Mc Diarmid, 1989) with c = ci = 4B 2 B(B 1)Kx max Ky max Kc z max , meaning that for any ϵ > 0 P Tr (KXXKT T ) B(B 1) E Tr (KXXKT T ) ϵ 2 exp 2ϵ2 = 2 exp 2ϵ2B(B 1)2 (4B 2)2K2x max K2y max K2c z max Therefore, for any fixed probability the deviation ϵ from the mean decays as O(1/ Definition C.3. A (β, p)-kernel for a given data distribution satisfies the following conditions (see Fischer & Steinwart (2020); Li et al. (2022) for precise definition using interpolation spaces): (EVD) Eigenvalues µi of the covariance operator CY Y decay as µi c i 1/p. (EMB) For α (p, 1], the inclusion map [Hα Y , L (π)] is continuous and bounded by A. Published as a conference paper at ICLR 2023 (SRC) F [G]β for β [1, 2] (note that β < 1 would include the misspecified setting). Lemma C.4. Consider the well-specified case of conditional expectation estimation (see Li et al., 2022). For bounded kernels over X, Z, Y and a (β, p)-kernel over Y , F(y) = E [ψ(Z) | Y ] (y), bounded F CF , and M points used to estimate F, define the conditional expectation estimate as ˆF(y) = Ky Y (KY Y + λMI) 1 KZ , (45) where λM = Θ(1/M β+p). Then, the estimator Tr KXX ˆKc ZZ /(B(B 1)) of the true CIRCE estimator (i.e., with the actual conditional expectation) deviates from the true value as Op(1/M (β 1)/(2(β+p))). Proof. First, decompose the difference: Tr (KXXKc ZZ) Tr KXX ˆKc ZZ = Tr (KXX (Kc ZZ Kc ZZ)) (46) = Tr KXX h Kc ZZ ˆKc ZZ KY Y i = Tr [KXX KY Y ] Kc ZZ ˆKc ZZ , (47) where in the last line we used that all matrices are symmetric. Let s concentrate on the difference: Kc ZZ ˆKc ZZ ij = D ˆF(yi) F(yi), ψ(zj) E + D ˆF(yj) F(yj), ψ(zi) E (48) + F(yi), F(yj) D ˆF(yi), ˆF(yj) F(yj) E (49) = D ˆF(yi) F(yi), ψ(zj) E + D ˆF(yj) F(yj), ψ(zi) E (50) + D F(yi) ˆF(yi), F(yj) E D ˆF(yi), ˆF(yj) F(yj) E (51) = D F(yi) ˆF(yi), F(yj) ψ(zj) E + D F(yj) ˆF(yj), ˆF(yi) ψ(zj) E . 
(52) As we re working in the well-specified case, by definition the operator F G, where G is a vectorvalued RKHS (Li et al., 2022, Definition 1). This implies that for the function [Kxh]( ) = K( , x)h (where h Hy), F(x), h = F, Kxh G . (53) We can now re-write the difference as Kc ZZ ˆKc ZZ ij = D F ˆF, Kyi (F(yj) ψ(zj)) + Kyj ˆF(yi) ψ(zj) E We can use the triangle inequality and then Cauchy-Schwarz to obtain Kc ZZ ˆKc ZZ Kyi (F(yj) ψ(zj)) G + Kyj ˆF(yi) ψ(zj) G = F ˆF G k(yi, yi) F(yj) ψ(zj) HZ + k(yj, yj) ˆF(yi) ψ(zj) HZ (56) C1 F ˆF G C2 + C3 F ˆF G , (57) for some positive constants C1,2,3 (since the kernels over both z and y are bounded, F is bounded too and hence ˆF ˆF F + F . As all kernels are bounded, Tr [KXX KY Y ] Kc ZZ ˆKc ZZ B(B 1) C1C4 F ˆF G C2 + C3 F ˆF G (58) for positive constants C1 to C4. Published as a conference paper at ICLR 2023 Now we can use Theorem 2 of Li et al. (2022) with γ = 1 and λ = Θ(1/M β+p), which shows that KM β 1 2(β+p) 1 4e τ , (59) for some positive constant K, which gives us the Op(1/M β 1 2(β+p) ) deviation. Now we can combine the two lemmas to prove Theorem 2.7: Proof of Theorem 2.7. Combining Lemma C.2 and Lemma C.4 and using a union bound, we obtain the Op(1/ β 2(β+p) ) rate. Corollary C.5. For B points and M holdout points, the CIRCE estimator \ CIRCE = 1 B(B 1)Tr KXX KY Y ˆ Kc ZZ , A = A diag(A) , (60) converges as Op(1/ β 1 2(β+p) ). Proof. This follows from the previous two proofs. Corollary C.6. For B points and M holdout points, the CIRCE estimator \ CIRCE = 1 B(B 1)Tr HKXXH KY Y ˆKc ZZ , H = I 1 B 1B1 B (61) has bias of O(1/B) and converges as Op(1/ β 1 2(β+p) ). Proof. This follows from the previous two proofs and the fact that Kc is a centered matrix, meaning that in expectation HKc H = Kc. This estimator can be less biased in practice, as ˆKc ZZ is typically biased due to conditional expectation estimation, and H ˆKc H re-centers it. D RANDOM FOURIER FEATURES Random Fourier features (RFF) Rahimi & Recht (2007) allow to approximate a kernel k(x1, x2) 1 D PD i=1 ri(x1) ri(x2), and therefore K = RR . The algorithm to estimate CIRCE with RFF is provided in Algorithm 2. We sample D0 points every L iterations, but in every batch only use D of them to reduce computational costs. It takes O(D0M 2 + D2 0M) to compute W r 1 and W r 2 every L iterations. At each iteration, it takes O(BD2 + B2D) to compute CIRCE. Therefore, average (per iteration) cost of RFF estimation becomes O( D0 L M 2 + D2 0 L M + BD2 + B2D). E SYNTHETIC DATA AND ADDITIONAL RESULTS We used Adam (Kingma & Ba, 2015) for optimization with batch size 256, and trained the network for 100 epochs. For experiments on univariate datasets, the learning rate was 1e-4 and weight decay was 0.3; for experiments on multivariate datasets, the learning rate was 3e-4 and weight decay was 0.1. We implemented CIRCE with random Fourier features (Rahimi & Recht, 2007) (see Appendix D) of dimension 512 for Gaussian kernels. We swept over the hyperparameters, including RBF scale, regularization weight for ridge regression, and regularization weight for the conditional independence regularization strength. All synthetic datasets are using the same causal structure as shown in Figure 1. Hyperparameters sweep is listed in Table 2 and it is the same for all test cases. 
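Before Algorithm 2, a short sketch (ours) makes the random Fourier feature approximation of Appendix D concrete for a Gaussian kernel, for which K ≈ R Rᵀ.

```python
import numpy as np

rng = np.random.default_rng(3)

def gauss_kernel(a, b, sigma2):
    d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma2))

def rff(x, omega, bias):
    """Features r(x) with E[r(x) r(x')^T] = k(x, x') for the Gaussian kernel."""
    D = omega.shape[1]
    return np.sqrt(2.0 / D) * np.cos(x @ omega + bias)

d, D, sigma2 = 3, 2048, 1.0
omega = rng.normal(scale=1.0 / np.sqrt(sigma2), size=(d, D))   # samples from the kernel's spectral density
bias = rng.uniform(0, 2 * np.pi, size=D)

x = rng.normal(size=(200, d))
K_exact = gauss_kernel(x, x, sigma2)
K_rff = rff(x, omega, bias) @ rff(x, omega, bias).T
print("max |K - R R^T| =", np.abs(K_exact - K_rff).max())      # shrinks as D grows
```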
Published as a conference paper at ICLR 2023 Algorithm 2 Estimation of CIRCE with random Fourier features Holdout data {(zi, yi)}M i=1, mini-batch {(xi, zi, yi)}B i=1 Holdout data Leave-one-out (Theorem C.1) for λ (ridge parameter) and σy (parameters of Y kernel): λ, σy = arg min PM i=1 ψ(zi) Kyi Y (KY Y +λI) 1KZ 2 Hz (1 (KY Y (KY Y +λ I) 1)ii) 2 W1 = (KY Y + λI) 1 , W2 = W1KZZW1 Every L mini-batches Sample D0 RFF R( ) W r 1 = R(Y ) W1R(Z), W r 2 = R(Z) W2R(Z) Mini-batch Use D random RFF out of D0 Compute R(y), R(z) (mini-batch) ˆKc = Kyy Kzz R(y)W r 1 R(z) R(y)W r 1 R(z) + R(y)W r 2 R(y) CIRCE = 1 B(B 1)Tr HKxx H ˆKc , H = I 1 Parameter Values CIRCE and HSCIC GCM conditional independence γ log space between [1, 104]; log space between [10 2, 10 0.5] ridge regression λ { 0.001, 0.01, 0.1, 1 } RBF scale { 0.001, 0.01, 0.1, 1 } Table 2: Hyperparameters for CIRCE, HSCIC and GCM on synthetic datasets. E.1 UNIVARIATE CASES Structural causal model for univariate case 1: Y, ϵZ N(0, 1) ϵA, ϵB N(0, 0.1) Z = Y 2 + ϵZ A = 0.5ZϵA + 2Y B = 0.5 exp ( AY ) sin(2AY ) + 5Z + 0.2ϵB Structural causal model for univariate case 2: Y, ϵZ N(0, 1) ϵA, ϵB N(0, 0.1) Z = Y 2 + ϵZ A = exp( 0.5Z2) sin 2Z + 2Y + 0.2ϵA B = sin(2AY ) exp( 0.5AY ) + 5Z + 0.2ϵB Published as a conference paper at ICLR 2023 E.2 MULTIVARIATE CASES Structural causal model for multivariate case 1: Y, ϵZi N(0, 1) ϵA, ϵB N(0, 0.1) Zi = Y 2 + ϵZi A = exp( 0.5Z1) + X i Zi sin(Y ) + 0.1ϵA B = exp( 0.5Z2)( X i Zi) + AY + 0.1ϵB Structural causal model for multivariate case 2: Yi, ϵZ N(0, 1) ϵA, ϵB N(0, 0.1) Z = Y T Y + ϵZ A = exp( 0.5Z) + sin X i Yi Z + 0.1ϵA B = exp( 0.5Z)Z + X i Yi + Z + AY1 + 0.1ϵB F IMAGE DATA DETAILS 0 100 101 102 103 regularization strength in-domain OOD trained on OOD 0 101 102 103 regularization strength in-domain OOD trained on OOD 0 10 2 10 1 100 regularization strength in-domain OOD trained on OOD Figure 7: d Sprites with nonlinear dependence. CIRCE used holdout data in training. Blue: indomain test loss; orange: out-of-domain loss (OOD); red: loss for OOD-trained encoder. Solid lines: median over 10 seeds; shaded areas: min/max values. For both d Spritres and Yale-B, we choose the following training hyperparameters over the validation set and without regularization: weight decay (1e-4, 1e-2), learning rate (1e-4, 1e-3, 1e-2) and length of training (200 or 500 epochs). These parameters are used for all runs (including the regularized ones). For d Sprites the batch size was 1024. For Yale-B the batch size was 256. The results for both standard (Corollary C.5) and centered (Corollary C.6) CIRCE estimators were similar for d Sprites (the reported one is standard), but the centered version was more stable for Yale-B (the reported one is centered). This is likely due to the bias arising from conditional expectation estimation. For d Sprites, the training set contained 589824 points, and the holdout set size was 5898 points. For Yale-B, the training set contained 11405 points, and the holdout set size was 1267 points. All kernels were Gaussian: k(x, x ) = exp( x x 2/(2σ2)). For Y , σ2 from [1.0, 0.1, 0.01, 0.001] and ridge regression parameter λ from [0.01, 0.1, 1.0, 10.0, 100.0]. The other two kernels had σ2 = 0.01 for linear and y-cone dependencies; for the nonlinear case, the kernel over Z had σ2 = 1 due to a different scaling of the distractor in that case. We additionally tested a setting in which the M holdout points used for conditional expectation estimation are not removed from the training data for CIRCE. 
As shown in Figure 7 for dSprites with non-linear dependence, this has little effect on performance.
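Finally, the leave-one-out criterion of Theorem C.1 that drives the grid search over λ and the Y-kernel bandwidth described above can be computed entirely from kernel matrices; a short sketch (ours, on toy data, with RKHS norms of the residuals evaluated via the kernel trick) follows.

```python
import numpy as np

rng = np.random.default_rng(4)

def gauss_kernel(a, b, sigma2):
    d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma2))

def loo_error(K_YY, K_ZZ, lam):
    """(1/M) sum_i ||psi(z_i) - F_S(y_i)||^2 / (1 - A_ii)^2, with A = K_YY (K_YY + lam I)^{-1}."""
    M = K_YY.shape[0]
    A = K_YY @ np.linalg.inv(K_YY + lam * np.eye(M))
    AK = A @ K_ZZ
    resid_sq = np.diag(K_ZZ) - 2 * np.diag(AK) + np.diag(AK @ A.T)
    return np.mean(resid_sq / (1 - np.diag(A)) ** 2)

# Toy holdout set; the grids below mirror those listed in Appendix F.
y = rng.normal(size=(500, 1))
z = y ** 2 + 0.3 * rng.normal(size=(500, 1))
K_ZZ = gauss_kernel(z, z, 0.01)

best = min(
    (loo_error(gauss_kernel(y, y, s2y), K_ZZ, lam), s2y, lam)
    for s2y in [1.0, 0.1, 0.01, 0.001]
    for lam in [0.01, 0.1, 1.0, 10.0, 100.0]
)
print("best (LOO error, sigma_y^2, lambda):", best)
```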