# partial_disentanglement_for_domain_adaptation__0f61290b.pdf Partial Identifiability for Domain Adaptation Lingjing Kong 1 Shaoan Xie 1 Weiran Yao 1 Yujia Zheng 1 Guangyi Chen 2 1 Petar Stojanov 3 Victor Akinwande 1 Kun Zhang 2 1 Unsupervised domain adaptation is critical to many real-world applications where label information is unavailable in the target domain. In general, without further assumptions, the joint distribution of the features and the label is not identifiable in the target domain. To address this issue, we rely on a property of minimal changes of causal mechanisms across domains to minimize unnecessary influences of domain shift. To encode this property, we first formulate the data generating process using a latent variable model with two partitioned latent subspaces: invariant components whose distributions stay the same across domains, and sparse changing components that vary across domains. We further constrain the domain shift to have a restrictive influence on the changing components. Under mild conditions, we show that the latent variables are partially identifiable, from which it follows that the joint distribution of data and labels in the target domain is also identifiable. Given the theoretical insights, we propose a practical domain adaptation framework, called i MSDA. Extensive experimental results reveal that i MSDA outperforms state-of-the-art domain adaptation algorithms on benchmark datasets, demonstrating the effectiveness of our framework. 1. Introduction Unsupervised domain adaptation (UDA) is a form of prediction setting in which the labeled training data, and the unlabeled test data follow different distributions. UDA can frequently be done in settings where multiple labeled training datasets are available, and this is termed as multiple-source UDA. Formally, given features x, target variables y, and 1Carnegie Mellon University, USA 2Mohamed bin Zayed University of Artificial Intelligence, UAE 3Broad Institute of MIT and Harvard, USA. Correspondence to: Kun Zhang . Proceedings of the 39 th International Conference on Machine Learning, Baltimore, Maryland, USA, PMLR 162, 2022. Copyright 2022 by the author(s). domain indices u, the training (source domain) data follows multiple joint distributions px,y|u1, px,y|u2, ..., px,y|u M ,1 and the test (target domain) data follows the joint distribution px,y|u T , where px,y|u may vary across u1, u2, ..., u M. During training, for each i-th source domain, we are given labeled observations (x(i) k , y(i) k )mi k=1 from px,y|ui, and target domain unlabeled instances (x T k )m T k=1 from px,y|u T . The main goal of domain adaptation is to make use of the available observed information, to construct a predictor that will have optimal performance in the target domain. It is apparent that without further assumptions, this objective is ill-posed. Namely, since the only available observations in the target domain are from the marginal distribution px|u T , the data may correspond to infinitely many joint distributions px,y|u T . This mandates making additional assumptions on the relationship between the source and the target domain distributions, with the hope to be able to reconstruct (identify) the joint distribution in the target domain px,y|u T . Typically, these assumptions entail some measure of similarity across all of the joint distributions. 
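For reference, the quantities just introduced can be summarized in display form; this is only a restatement of the setup above, not an additional assumption:

```latex
% Multi-source UDA: observed data and goal (restating the setup above).
\begin{align*}
  &\text{Source domain } u_i,\ i = 1, \dots, M:
      && \{(x^{(i)}_k, y^{(i)}_k)\}_{k=1}^{m_i} \sim p_{x,y|u_i} \quad \text{(labeled)},\\
  &\text{Target domain } u_T:
      && \{x^{T}_k\}_{k=1}^{m_T} \sim p_{x|u_T} \quad \text{(unlabeled)},\\
  &\text{Goal:}
      && \text{recover } p_{x,y|u_T}\ \text{and an optimal predictor of } y \text{ in domain } u_T.
\end{align*}
```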
One classical assumption is that each joint distribution shares the same conditional distribution py|x, whereas px|u changes, more widely known as the covariate shift setting (Shimodaira, 2000; Sugiyama et al., 2008; Zadrozny, 2004; Huang et al., 2007; Cortes et al., 2010). In addition, assumptions can be made that the source and the target domain distributions are related via the changes in py|u or one or more linear transformations of the features x (Zhang et al., 2013; 2015; Gong et al., 2016). In order to drop any parametric assumptions regarding the relationship between the domains, one may want to consider a more abstract assumption of minimal change. To make reasoning about this notion easier, it is often useful to consider the data-generating process (Sch olkopf et al., 2012; Zhang et al., 2013). For example, if the data generating process is Y X, then py|u and px|y,u change independently across domains, and factoring the joint distribution in terms of these factors makes it possible to represent its changes in the most parsimonious way. Furthermore the changes 1 With a slight abuse of notation, we use px,y|u to represent the conditional distribution for a specific domain u , i.e. px,y|u(x, y|u ). Partial Identifiability for Domain Adaptation of the distribution px|y,u can be represented as lying on a low-dimensional manifold (Stojanov et al., 2019). The generating process of the observed features and its changing parts across domains can also be automatically discovered from multiple-domain data (Zhang et al., 2017b; Huang et al., 2020), and the changes can be captured using conditional GANs (Zhang et al., 2020). With the advent of deep learning capabilities and their widespread use, another approach is to harness these deep architectures to learn a feature map to some latent variable z, such that pz|u1 = pz|u2 = ... = pz|u M = pz|u T (i.e. they are marginally invariant), while at the same time training a classifier fcls : Z Y on the labeled source domain data, which ensures that z has predictive information about y (Ganin & Lempitsky, 2014; Ben-David et al., 2010; Zhao et al., 2018). This could still mean that the joint distributions pz,y|u may be different across domains u1, ..., u M, leading to potentially poor prediction performance in the target domain. Solving the problem has also been attempted using deep generative models and enforcing disentanglement of the latent representation (Lu et al., 2020; Cai et al., 2019). However, none of these efforts establishes identifiability of px,y|u T . Since deep architectures are a requirement for strong performance on high-dimensional datasets, it has become essential to combine their representational power with the notion of minimal change of the joint distribution across domain u s. In fact, one can make reasonable assumptions about the role of the latent representation z in the data-generating process, and represent the minimal change of px,y|u across domains such that both z and px,y|u are identifiable in the target domain. In this paper, we represent the variables x, y and z in terms of a data-generating process assumption. To enforce minimal change of px,y,z|u across domains, we allow only a partition of the latent space to vary over domains and constrain the flexibility of the influence from domain changes. (1) Under this generating process, we show that both the changing and the invariant parts in the generating process are (partially) identifiable, which gives rise to a principled way of aligning source and target domains. 
(2) Leveraging the identifiability results, we propose an algorithm that identifies the latent representation and utilizes the representation for optimal prediction for the target domain. (3) We show that our algorithm surpasses state-of-the-art UDA algorithms on benchmark datasets. 2. Related Work Domain adaptation (Cai et al., 2019; Courty et al., 2017; Deng et al., 2019; Tzeng et al., 2017; Wang et al., 2019; Xu et al., 2019; Zhang et al., 2017a; 2013; 2018; 2019; Mao et al., 2021; Stojanov et al., 2021; Wu et al., 2022; Eastwood et al., 2022; Tong et al., 2022; Zhu et al., 2022; Xu et al., 2022; Berthelot et al., 2022; Kirchmeyer et al., 2022) is an actively studied area of research in machine learning and computer vision. One popular direction is by invariant representation learning, which is introduced by (Ganin & Lempitsky, 2014). Specifically, it has been successively refined in several different ways to ensure that the invariant representation z contains meaningful information about the target domain. For instance, in (Bousmalis et al., 2016), z was divided into a changing and an invariant part, and a constraint was added that z has to be able to reconstruct the original data, in addition to predicting y (from the shared part). Furthermore, pseudo-labels have also been utilized in order to match the joint distribution pz,y across domains, using kernel methods (Long et al., 2017a;b). Another approach to ensure more structure in z is to assume that the class-conditional distribution px|y has a clustering structure, and regularize the neural network algorithm to prevent decision boundaries from crossing high-density regions (Shu et al., 2018; Xie et al., 2018). In these studies, py was typically assumed to stay the same across domains. Dropping this assumption has been investigated in (Wu et al., 2019; Guo et al., 2020). Furthermore, the setting in which the label sets do not exactly match has been also been studied (Saito et al., 2020). While the body of work studying representation learning for domain adaptation is large and extensive, there are still no guarantees that the learned representation will help learn the optimal predictor in the target domain, even in the infinite-sample case. Besides, invariant representation learning also sheds light on adaptation with multiple sources (Xu et al., 2018; Peng et al., 2019; Wang et al., 2020; Li et al., 2021; Park & Lee, 2021). Along the line of work of generative models, studies on generative models and unsupervised learning have made headway into better understanding the properties of identifiability of a latent representation which is meant to capture some underlying factors of variation, from which the data was generated. Namely, independent component analysis (ICA) is the classical approach for learning a latent representation for which there are identifiability results, where the generating process is assumed to consist of a linear mixing function (Comon, 1994; Bell & Sejnowski, 1995; Hyv arinen et al., 2001). A major problem in nonlinear ICA is that, without assumption on either source distribution or generating process, the model is seriously unidentifiable (Hyv arinen & Pajunen, 1999). Recent breakthroughs introduced auxiliary variables, e.g., domain indexes, to advance the identifiability results (Hyvarinen et al., 2019; Sorrenson et al., 2020; H alv a & Hyvarinen, 2020; Lachapelle et al., 2021; Khemakhem et al., 2020a; von K ugelgen et al., 2021; Lu et al., 2020). 
These works aim to identify and disentangle the components of the latent representation, while assuming that all of them are changing across domains. In the context of self-supervised learning with data augmentation, von Kügelgen et al. (2021) considered the identifiability of the shared part, which stays invariant between views, in a block-wise manner. However, this line of work assumes the availability of paired instances in two domains. In the context of out-of-distribution generalization, Lu et al. (2020) extend the identifiability result of iVAE (Khemakhem et al., 2020a) to a general exponential family that is not necessarily factorized. However, this study does not take advantage of the fact that the latent representation should contain invariant information that can be disentangled from the part that corresponds to changes across domains. Most importantly, this study resorts to finding a conditionally invariant subpart of z, even though there may be parts of z that are not conditionally invariant and yet still relevant for predicting y. In this paper, we make use of realistic assumptions regarding the data-generating process in order to provably identify the changing and the invariant aspects of the latent representation in both the source and target domains. In doing so, we drop any parametric assumptions about z, and we allow both the changing and invariant parts to have predictive information about y. We show that we can learn to align domains and make predictions in a principled manner, and we present an autoencoder algorithm to solve the problem in practice.

3. High-level Invariance for Domain Adaptation

In this section, we introduce our data generating process in Figure 1 and Equation 1 and discuss how we could exploit this latent variable model to handle UDA.

Figure 1. The generating process: the gray shade of nodes indicates that the variable is observable.

$$z_c \sim p_{z_c}, \qquad \tilde z_s \sim p_{\tilde z_s}, \qquad z_s = f_u(\tilde z_s), \qquad x = g(z_c, z_s). \tag{1}$$

In the generating process, we assume that data $x \in \mathcal{X}$ (e.g. images) are generated by latent variables $z \in \mathcal{Z} \subseteq \mathbb{R}^n$ through an invertible and smooth mixing function $g: \mathcal{Z} \to \mathcal{X}$. We denote by $u \in \mathcal{U}$ the domain embedding, which is a constant vector for a specific domain. We partition the latent variables $z$ into two parts $z = [z_c, z_s]$: the invariant part $z_c \in \mathcal{Z}_c \subseteq \mathbb{R}^{n_c}$ (i.e. content), whose distribution stays constant over domains $u$, and the changing part $z_s \in \mathcal{Z}_s \subseteq \mathbb{R}^{n_s}$ (i.e. style), whose distribution varies across domains. We parameterize the influence of domain $u$ on $z_s$ as a simple transformation of a generic form of the changing part, denoted $\tilde z_s$. Namely, given a component-wise monotonic function $f_u$, we let $z_s = f_u(\tilde z_s)$. For example, in image datasets, $z_s$ can correspond to various kinds of background in the images (sand, trees, sky, etc.), and in this case $\tilde z_s$ can be interpreted as a generic background pattern that can easily be transformed into a domain-specific image background, depending on which function $f_u$ is used. Further, we assume that $y$ is generated by the invariant latent variables $z_c$ and $\tilde z_s$. Thus, this generating process addresses the conditional-shift setting, in which $p_{x|y,u}$ changes across domains and $p_{y|u} = p_y$ stays the same. We describe below the distinguishing features of our generating process, and illustrate how these features are essential to tackling UDA.
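To make Equation 1 concrete, the following minimal NumPy sketch simulates the generating process. It is an illustration under simple assumptions rather than the paper's data-generating code: the per-domain influence f_u is a component-wise affine map with positive slopes (one easy instance of a component-wise monotonic function), the mixing g is a small leaky-ReLU MLP that is invertible with probability one, and all dimensions are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)
n_c, n_s, n_domains = 2, 2, 5          # hypothetical sizes
n = n_c + n_s

# Per-domain component-wise monotonic influence f_u: here an affine map
# z_s = exp(a_u) * z_tilde_s + b_u, whose positive slope makes it strictly increasing.
a = rng.normal(size=(n_domains, n_s))
b = rng.normal(size=(n_domains, n_s))

# Invertible, smooth mixing g: a 2-layer leaky-ReLU MLP; random Gaussian weight
# matrices are invertible with probability one.
W1 = rng.normal(size=(n, n)) + 2.0 * np.eye(n)
W2 = rng.normal(size=(n, n)) + 2.0 * np.eye(n)

def leaky_relu(h, slope=0.2):
    return np.where(h > 0.0, h, slope * h)

def generate(u, m):
    """Sample m observations from domain u following Equation (1)."""
    z_c = rng.normal(size=(m, n_c))          # invariant content  z_c ~ p_{z_c}
    z_tilde_s = rng.normal(size=(m, n_s))    # high-level invariant style
    z_s = np.exp(a[u]) * z_tilde_s + b[u]    # domain-specific style  z_s = f_u(z_tilde_s)
    z = np.concatenate([z_c, z_s], axis=1)
    x = leaky_relu(leaky_relu(z @ W1.T) @ W2.T)   # x = g(z_c, z_s)
    return x, z_c, z_s, z_tilde_s

x, z_c, z_s, z_tilde_s = generate(u=0, m=1000)
```

An affine map of a standard Gaussian z̃_s yields a Gaussian z_s with domain-specific mean and variance, which matches the style distributions used in the synthetic experiments of Section 6.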
Partitioned latent space As discussed in Section 1, the bulk of prior work (Ganin & Lempitsky, 2014; Ben-David et al., 2010; Zhao et al., 2018) focuses on learning an invariant representation over domains so that one classifier can be applied to novel domains in the latent space. Unlike these approaches, we will demonstrate that our parameterization of zs allows us to preserve the information of the changing part zs for prediction instead of discarding this part altogether by imposing invariance. Similarly, recent work (Lu et al., 2020) allows for the possibility that all latent variables could be influenced by domain changes in their framework to address domain shift, but discards a subset of them which may be relevant for predicting y. In contrast, our goal is to disentangle the changing and invariant parts zs and zc, capture the relationship between zs and y across domains by learning fu, and use this information to perform prediction in the target domain. Our parameterization also allows us to implement the minimal change principle by constraining the changing components to be as few as possible. Otherwise, unnecessarily large domain influences (e.g. all components being changing) may lead to loss of semantic information in the invariant (zc, zs) space. For instance, in a scenario where each domain comprises digits 0-9 with a specific rotation degree, unconstrained influence may result in a mapping between 1 and 9 across domains. Partial Identifiability for Domain Adaptation High-level Invariance zs We note that presence of zs is significant, as it allows us to provably learn an optimal classifier over domains without requiring that pz|u to be invariant over domains as in previous work (Ganin & Lempitsky, 2014; Ben-David et al., 2010; Zhao et al., 2018). With the component-wise monotonic function fu, we are able to identify zs through zs which is critical to our ability to learn how the changes across domains are related to each other. If we know fu and zs, we can always map all domains to this high-level zs, which will have the desired information to predict y (in addition to zc). Additionally, this restrictive function form also contributes to regulating unnecessary changes. The problem remains whether it is possible to identify the latent variables of interest (i.e. zc, zs) with only observational data (x, u). Because if this is the case, we can leverage labeled data in source domains to learn a classifier according to py| zs,zc that is universally applicable over given domains. In Section 4, we show that we can indeed achieve this identifiability for this generating process, and describe how such a model can be learned. In Section 5, we present the corresponding algorithmic implementation based on Variational Auto-Encoder (VAE) (Kingma & Welling, 2014). 4. Partial Identifiability of the Latent Variables In this section, we show our theoretical results that serve as the foundation of our algorithm. In addition to guiding algorithmic design, we highlight that our theory is a novel contribution to the deep generative modeling literature. In Section 4.1 and Section 5, we show that with unlabeled observational data (x, u) over domains we can partially identify the latent space. In Section 4.3, we discuss how these identifiability properties, together with labeled source domain data, give rise to a classifier useful to the target domain. 
We parameterize $z_s = f_u(\tilde z_s)$ with a component-wise monotonic function for each domain $u$, and learn a model $(\hat g, p_{\hat z_c}, p_{\hat z_s|u})$ that assumes the identical process as in Equation 1 and matches the true marginal data distribution in all domains, that is,

$$p_{x|u}(x'|u') = p_{\hat x|u}(x'|u'), \qquad \forall x' \in \mathcal{X},\; u' \in \mathcal{U}, \tag{2}$$

where the random variable $x$ is generated from the true process $(g, p_{z_c}, p_{z_s|u})$ and $\hat x$ is from the estimated model $(\hat g, p_{\hat z_c}, p_{\hat z_s|u})$. Throughout, we use $\hat{\cdot}$ to distinguish estimators from the ground truth. In the following, we show that our estimated model can recover the true changing components and the invariant subspace $z_c$, and then discuss the implications of our theory for UDA.

4.1. Identifiability of Changing Components

First, we show that the changing components can be identified up to permutation and invertible component-wise transformations. More formally, for each true changing component $z_{s,i}$, there exists a corresponding estimated changing component $\hat z_{s,j}$ and an invertible function $h_{s,i}: \mathbb{R} \to \mathbb{R}$ such that $\hat z_{s,j} = h_{s,i}(z_{s,i})$. For ease of exposition, we assume that $z_c$ and $z_s$ correspond to the components of $z$ with indices $\{1, \ldots, n_c\}$ and $\{n_c+1, \ldots, n\}$ respectively, that is, $z_c = (z_i)_{i=1}^{n_c}$ and $z_s = (z_i)_{i=n_c+1}^{n}$.

Theorem 4.1. We follow the data generation process in Equation 1 and make the following assumptions:

A1 (Smooth and Positive Density): The probability density function of the latent variables is smooth and positive, i.e. $p_{z|u}$ is smooth and $p_{z|u} > 0$ over $\mathcal{Z}$ and $\mathcal{U}$.

A2 (Conditional Independence): Conditioned on $u$, each $z_i$ is independent of any other $z_j$ for $i, j \in [n]$, $i \neq j$, i.e. $\log p_{z|u}(z|u) = \sum_{i=1}^{n} q_i(z_i, u)$, where $q_i$ is the log density of the conditional distribution, i.e. $q_i := \log p_{z_i|u}$.

A3 (Linear Independence): For any $z_s \in \mathcal{Z}_s \subseteq \mathbb{R}^{n_s}$, there exist $2n_s + 1$ values of $u$, i.e. $u_j$ with $j = 0, 1, \ldots, 2n_s$, such that the $2n_s$ vectors $w(z_s, u_j) - w(z_s, u_0)$ with $j = 1, \ldots, 2n_s$ are linearly independent, where the vector $w(z_s, u)$ is defined as follows:

$$w(z_s, u) = \left( \frac{\partial q_{n_c+1}(z_{n_c+1}, u)}{\partial z_{n_c+1}}, \ldots, \frac{\partial q_{n}(z_{n}, u)}{\partial z_{n}},\; \frac{\partial^2 q_{n_c+1}(z_{n_c+1}, u)}{\partial z_{n_c+1}^2}, \ldots, \frac{\partial^2 q_{n}(z_{n}, u)}{\partial z_{n}^2} \right).$$

By learning $(\hat g, p_{\hat z_c}, p_{\hat z_s|u})$ to achieve Equation 2, $z_s$ is component-wise identifiable.

A proof can be found in Appendix A.1. We note that all three assumptions are standard in the nonlinear ICA literature (Hyvarinen et al., 2019; Khemakhem et al., 2020a). In particular, the intuition behind Assumption A3 is that $p_{z_s|u}$ varies sufficiently over domains $u$. We demonstrate in Section 6 that this identifiability can be largely achieved with Gaussian distributions of distinct means and variances. It is noteworthy that Theorem 4.1 manifests the benefits of minimal changes: the smaller $n_s$, the easier it is for the linear independence condition to hold, as we need only $2n_s + 1$ domains with sufficient variability, whereas prior work often requires $2n + 1$ sufficiently variable domains. We note that in many DA problems the relations between domains are explicit, so the dimensionality of the intrinsic change is small. Our results provide insights into whether a fixed number of domains is sufficient for DA with different levels of change.

The identifiability of the changing part is crucial to our method. Only by recovering the changing components $z_s$ and the corresponding domain influence $f_u$ can we correctly align the various domains with $\tilde z_s$ and learn a classifier there.
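To build intuition for Assumption A3, the short NumPy check below (not taken from the paper) evaluates the condition at a point z_s when p_{z_s|u} is a factorized Gaussian with domain-specific means and variances, as in Section 6; for Gaussians, the first and second derivatives of the log density are available in closed form.

```python
import numpy as np

rng = np.random.default_rng(0)
n_s = 2                      # number of changing components (hypothetical)
z_s = rng.normal(size=n_s)   # the point at which A3 is checked

# Gaussian p_{z_s|u} with per-domain means and variances, as in Section 6.
mu = rng.uniform(-4, 4, size=(2 * n_s + 1, n_s))
var = rng.uniform(0.01, 1, size=(2 * n_s + 1, n_s))

def w(z, m, v):
    """w(z_s, u) for factorized Gaussians: first and second derivatives of
    log p_{z_i|u}(z_i) w.r.t. z_i, stacked into a 2*n_s vector."""
    d1 = -(z - m) / v          # d q_i / d z_i
    d2 = -1.0 / v              # d^2 q_i / d z_i^2
    return np.concatenate([d1, d2])

W = np.stack([w(z_s, mu[j], var[j]) - w(z_s, mu[0], var[0])
              for j in range(1, 2 * n_s + 1)])
print("rank:", np.linalg.matrix_rank(W), "needed:", 2 * n_s)  # full rank <=> A3 holds at z_s
```

For generic means and variances, the 2n_s difference vectors are full rank with probability one, consistent with the observation that Gaussian domain distributions with distinct means and variances largely satisfy the assumption.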
4.2. Identifiability of the Invariant Subspace

Building on the identifiability of the changing components, we show that the invariant subspace is identifiable in a block-wise fashion. Our goal is to show that the estimated invariant part preserves the information of the true invariant part, with no information mixed in from the changing components. Formally, we would like to show that there exists an invertible mapping $h_c: \mathcal{Z}_c \to \mathcal{Z}_c$ between the estimated invariant part $\hat z_c$ and the true invariant part $z_c$, s.t. $\hat z_c = h_c(z_c)$. The idea of block-wise identifiability emerges in independent subspace analysis (Casey & Westner, 2000; Hyvärinen & Hoyer, 2000; Le et al., 2011; Theis, 2006), and recent work (von Kügelgen et al., 2021) addresses it in the nonlinear ICA scenario. In the following theorem, we show that block-wise identifiability of the invariant subspace can be attained with the estimated data generating process. Note that block-wise identifiability is a weaker notion of identifiability than the component-wise identifiability achieved for the changing components. Thus, we only claim partial identifiability for the invariant latent variables.

Theorem 4.2. We follow the data generation process in Equation 1 and make Assumptions A1-A3. In addition, we make the following assumption:

A4 (Domain Variability): For any set $A_z \subseteq \mathcal{Z}$ with the following two properties: a) $A_z$ has nonzero probability measure, i.e. $P[\{z \in A_z\} \mid \{u = u'\}] > 0$ for any $u' \in \mathcal{U}$; b) $A_z$ cannot be expressed as $B_{z_c} \times \mathcal{Z}_s$ for any $B_{z_c} \subseteq \mathcal{Z}_c$; there exist $u_1, u_2 \in \mathcal{U}$ such that

$$\int_{z \in A_z} p_{z|u}(z|u_1)\, dz \;\neq\; \int_{z \in A_z} p_{z|u}(z|u_2)\, dz.$$

With the model $(\hat g, p_{\hat z_c}, p_{\hat z_s|u})$ estimated by observing Equation 2, $z_c$ is block-wise identifiable.

A proof is deferred to Appendix A.2. Like Assumption A3, Assumption A4 also requires the distribution $p_{z|u}$ to change sufficiently over domains. Intuitively, the chance of having one such subset $A_z$ on which all domain distributions have an equal probability measure is very slim when the number of domains is large and the domain distributions differ sufficiently. In Section 6, we verify Theorem 4.2 with Gaussian distributions with varying means and variances. We note that, in comparison to the identifiability theory in recent work (von Kügelgen et al., 2021), our setting is more challenging in the sense that the theory in (von Kügelgen et al., 2021) relies on the pairing of augmented data, whereas ours does not: we only assume that the marginal distribution $p_{z_c}$ of the invariant part does not vary over domains.

4.3. Identifiability of the Joint Distribution in the Shared Space

Equipped with the identifiability of the changing components and the invariant subspace, we now discuss how these results lead to the identifiability of the joint distribution $p_{x,y|u_T}$ for the target domain $u_T$ with a classifier trained on source domain labels. Since $g$ is invertible and partially identifiable (Theorem 4.1 and Theorem 4.2), we can recover $z_c, z_s$ from the observation $x$ in any domain; that is, we can partially identify the joint distribution $p_{x, z_c, z_s|u}$. As $z_s = f_u(\tilde z_s)$ and $f_u$ is constrained to be component-wise monotonic and invertible, we can further identify $p_{x, z_c, \tilde z_s|u}$. Also note that the label $y$ is conditionally independent of $x$, and in particular of $u$, given $z_c$ and $\tilde z_s$, as shown in Figure 1, which means that $z_c$ and $\tilde z_s$ capture all the information in $x$ that is relevant to $y$.
Therefore, given $z_c, \tilde z_s$, we can make predictions in the target domain with the predictor $p_{y|z_c, \tilde z_s}$, and this predictor can be learned on the source domains, as $p_{x, z_c, \tilde z_s|u}$ is identifiable as discussed above. This is exactly the goal of UDA. In fact, we can identify the joint distribution $p_{x,y|u_T}$: with the identified $p_{x, z_c, \tilde z_s|u_T}$ for the target domain, we can derive

$$p_{x,y|u_T} = \int_{z_c} \int_{\tilde z_s} p_{x, z_c, \tilde z_s|u_T}\; p_{y|z_c, \tilde z_s}\, dz_c\, d\tilde z_s.$$

The intuition is that, by resorting to the high-level invariant part $\tilde z_s$, we are able to account for the influence of the domains, and thus obtain a classifier useful to all domains participating in the estimator learning, which includes the target domain in the UDA setup. This reasoning motivates our algorithm design presented in Section 5.

5. Partially-Identifiable VAE for Domain Adaptation

In this section, we discuss how to turn the theoretical insights in Section 4 into a practical algorithm, iMSDA (identifiable Multi-Source Domain Adaptation), by leveraging expressive deep learning models. As instructed by the theory, we can estimate the underlying causal generating process (Figure 1) and recover the ground-truth latent variables up to certain subspaces. This in turn allows for optimal classification across domains. In the following, we describe how to estimate each component of the causal generating process $(g, p_{z_s|u}, p_{z_c}, p_{y|z_c, \tilde z_s})$ under the VAE framework. The architecture diagram is shown in Figure 2.

Figure 2. Diagram of our proposed method, iMSDA. We first apply the VAE encoder $(f_\mu, f_\Sigma)$ to encode $x$ into $(\hat z_c, \hat z_s)$, which is further fed into the decoder $\hat g$ for reconstruction. In parallel, the changing part $\hat z_s$ is passed through the flow model $\hat f_u$ to recover the high-level invariant variable $\hat{\tilde z}_s$. We use $(\hat z_c, \hat{\tilde z}_s)$ for classification with the classifier $f_{\mathrm{cls}}$ and for matching $N(0, I)$ with a KL loss.

We use the VAE encoder to obtain the posterior $q(\hat z|x)$, which essentially estimates the inverse of the mixing function, $g^{-1}$. Specifically, we parameterize the mean and the diagonal of the covariance matrix of the posterior $q_{f_\mu, f_\Sigma}(\hat z|x)$ with multilayer perceptrons (MLPs) $f_\mu$ and $f_\Sigma$ respectively (Equation 4). The VAE decoder $\hat g$, which is also an MLP, processes the estimated latent variable $\hat z$ and estimates the input data $x$ (Equation 5):

$$\hat z \sim q_{f_\mu, f_\Sigma}(\hat z|x) := N\big(f_\mu(x), f_\Sigma(x)\big), \tag{4}$$

$$\hat x = \hat g(\hat z). \tag{5}$$

As indicated in Figure 1, the changing part $z_s$ of the latent variable $z$ contains the information of a specific domain $u$, that is, $z_s = f_u(\tilde z_s)$. To enforce the minimal change property, we assume in Section 4 that the domain influence is component-wise monotonic. We estimate the domain influence function $f_u$ with flow-based architectures (Dinh et al., 2017; Huang et al., 2018; Durkan et al., 2019) for each domain $u$, i.e., $\hat z_s = \hat f_u(\hat{\tilde z}_s)$. Benefiting from the invertibility of $\hat f_u$, we can obtain $\hat{\tilde z}_s$ from the estimated $\hat z_s$ sampled from Equation 4 by inverting the estimated function:

$$\hat{\tilde z}_s = \hat f_u^{-1}(\hat z_s). \tag{6}$$

Overall, the VAE loss can be employed to enforce the above-described generating process:

$$\mathcal{L}_{\mathrm{VAE}}(f_\mu, f_\Sigma, \hat f_u, \hat g) = \mathbb{E}_{(x,u)}\Big[ \mathbb{E}_{\hat z \sim q_{f_\mu, f_\Sigma}}\big[ \tfrac{1}{2}\,\|x - \hat x\|^2 \big] + \beta\, D\big( q_{f_\mu, f_\Sigma, \hat f_u}(\hat{\tilde z}\,|\,x)\,\|\, p(\tilde z) \big) \Big], \tag{7}$$

where $\beta$ is a control factor introduced in β-VAE (Higgins et al., 2016). We use $q_{f_\mu, f_\Sigma, \hat f_u}(\hat{\tilde z}|x)$ to denote the posterior of $\tilde z$, which can be obtained by executing Equation 4 and Equation 6. We choose $p(\tilde z)$ to be a standard Gaussian distribution $N(0, I)$, following the independence assumption (Assumption A2). The KL divergence in Equation 7 is tractable, thanks to the Gaussian form of $q_{f_\mu, f_\Sigma}$ (Equation 4) and the tractability of the Jacobian determinant of the flow model $\hat f_u$.
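The sketch below illustrates how these components fit together in PyTorch. It is a schematic, not the authors' released implementation: the per-domain flow is a component-wise affine map (a simple stand-in for the RealNVP-style or spline flows cited above), the KL term is estimated with a single reparameterized sample via the change-of-variables correction, and all dimensions, the β value, and the network choices are hypothetical.

```python
import torch
import torch.nn as nn

class iMSDASketch(nn.Module):
    def __init__(self, x_dim, n_c, n_s, n_domains, n_classes, hidden=256):
        super().__init__()
        n = n_c + n_s
        self.n_c, self.n_s = n_c, n_s
        self.enc = nn.Sequential(nn.Linear(x_dim, hidden), nn.ReLU(),
                                 nn.Linear(hidden, 2 * n))          # f_mu, f_Sigma
        self.dec = nn.Sequential(nn.Linear(n, hidden), nn.ReLU(),
                                 nn.Linear(hidden, x_dim))          # g_hat
        # Per-domain component-wise affine flow: z_s = exp(s_u) * z~_s + t_u.
        self.log_scale = nn.Parameter(torch.zeros(n_domains, n_s))  # s_u
        self.shift = nn.Parameter(torch.zeros(n_domains, n_s))      # t_u
        self.cls = nn.Linear(n, n_classes)                          # f_cls on (z_c, z~_s)

    def forward(self, x, u):
        mu, logvar = self.enc(x).chunk(2, dim=-1)
        z_hat = mu + torch.randn_like(mu) * (0.5 * logvar).exp()    # Eq. 4 (reparameterized)
        x_hat = self.dec(z_hat)                                     # Eq. 5
        z_c, z_s = z_hat[:, :self.n_c], z_hat[:, self.n_c:]
        s_u, t_u = self.log_scale[u], self.shift[u]
        z_tilde_s = (z_s - t_u) * torch.exp(-s_u)                   # Eq. 6: f_u^{-1}(z_s)
        logits = self.cls(torch.cat([z_c, z_tilde_s], dim=-1))      # classification head
        # Single-sample estimate of KL( q(z~|x) || N(0, I) ) via change of variables:
        # log q(z~|x) = log q(z_hat|x) + sum(s_u)  (Jacobian of the affine flow);
        # the 2*pi constants cancel between log q and log p.
        log_q = (-0.5 * ((z_hat - mu) ** 2 * (-logvar).exp() + logvar)).sum(-1) + s_u.sum(-1)
        z_tilde = torch.cat([z_c, z_tilde_s], dim=-1)
        log_p = (-0.5 * z_tilde ** 2).sum(-1)
        kl = log_q - log_p
        recon = 0.5 * ((x - x_hat) ** 2).sum(-1)
        return logits, recon, kl

# Hypothetical sizes: 512-d backbone features, n = 64 with n_s = 4, 4 domains, 7 classes.
model = iMSDASketch(x_dim=512, n_c=60, n_s=4, n_domains=4, n_classes=7)
x, u = torch.randn(8, 512), torch.randint(0, 4, (8,))
logits, recon, kl = model(x, u)
loss_vae = (recon + 0.5 * kl).mean()   # Eq. 7 with an illustrative beta = 0.5
```

With a component-wise affine flow, the Jacobian log-determinant reduces to the sum of the per-component log-scales, which keeps the change-of-variables correction in the KL term a one-line expression; more expressive monotonic flows change only that line.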
Lastly, we enforce the causal relation between $y$ and $(z_c, \tilde z_s)$ in our estimated generating process (Figure 1). This can be implemented by a classification branch:

$$\hat y = f_{\mathrm{cls}}(\hat z_c, \hat{\tilde z}_s), \tag{8}$$

where $f_{\mathrm{cls}}$ is parameterized by an MLP. In the source domains, we directly optimize a cross-entropy loss:

$$\mathcal{L}_{\mathrm{cls}}(f_{\mathrm{cls}}, f_\mu, f_\Sigma, \hat f_u, \hat g) = \mathbb{E}_{\hat z_c, \hat{\tilde z}_s, y}\big[ -y^{\top} \log \hat y \big], \tag{9}$$

where $y$ are the one-hot ground-truth label vectors available in the source domains. In the target domain, we enforce this relation by maximizing the mutual information $I((\hat z_c, \hat{\tilde z}_s); \hat y)$, which amounts to minimizing the conditional entropy $H(\hat y \mid \hat z_c, \hat{\tilde z}_s)$, as in (Wang et al., 2020; Li et al., 2021). Doing so facilitates learning a latent representation space adhering to the generating process:

$$\mathcal{L}_{\mathrm{ent}}(f_{\mathrm{cls}}, f_\mu, f_\Sigma, \hat f_u, \hat g) = \mathbb{E}_{\hat z_c, \hat{\tilde z}_s}\big[ -\hat y^{\top} \log \hat y \big]. \tag{10}$$

Overall, our loss function is

$$\mathcal{L}(f_{\mathrm{cls}}, f_\mu, f_\Sigma, \hat f_u, \hat g) = \mathcal{L}_{\mathrm{cls}} + \alpha_1 \mathcal{L}_{\mathrm{ent}} + \alpha_2 \mathcal{L}_{\mathrm{VAE}}, \tag{11}$$

where $\alpha_1, \alpha_2$ are hyper-parameters that balance the loss terms. The complete procedure is shown in Algorithm 1.

Algorithm 1: Training iMSDA
Require: $(x, y, u)$ from source domains and $(x, u)$ from the target domain.
Require: $f_\mu$, $f_\Sigma$, $\hat g$, $\hat f_u$, and $f_{\mathrm{cls}}$.
1. Get the latent representation $\hat z \sim q_{f_\mu, f_\Sigma}(\hat z|x)$ as in Equation 4;
2. reconstruct the observation $\hat x = \hat g(\hat z)$ as in Equation 5;
3. recover the high-level invariant part $\hat{\tilde z}_s = \hat f_u^{-1}(\hat z_s)$ as in Equation 6;
4. estimate the class label $\hat y = f_{\mathrm{cls}}(\hat z_c, \hat{\tilde z}_s)$ as in Equation 8;
5. compute and minimize $\mathcal{L}$ according to Equations 7, 9, 10, and 11.

6. Experiments on Synthetic Data

In this section, we present synthetic data experiments to verify Theorem 4.1 and Theorem 4.2 in practice.

6.1. Experimental Setup

Data generation. We generate synthetic data following the generating process of $x$ in Equation 1. We work with latent variables $z$ of 4 dimensions with $n_c = n_s = 2$. We sample $z_c \sim N(0, I)$ and $z_s \sim N(\mu_u, \sigma_u^2 I)$, where for each domain $u$ we sample $\mu_u \sim \mathrm{Unif}(-4, 4)$ and $\sigma_u^2 \sim \mathrm{Unif}(0.01, 1)$. We choose $g$ to be a 2-layer MLP with the Leaky-ReLU activation function; our experimental design closely follows the practice employed in prior nonlinear ICA work (Hyvarinen & Morioka, 2016; Hyvarinen et al., 2019).

Evaluation metrics. To measure the component-wise identifiability of the changing components, we compute the Mean Correlation Coefficient (MCC) between $z_s$ and $\hat z_s$ on a test dataset. MCC is a standard metric in the ICA literature. A higher MCC indicates a higher extent of identifiability, and MCC reaches 1 when the latent variables are perfectly component-wise identifiable. To measure the block-wise identifiability of the invariant subspace, we follow the experimental practice of von Kügelgen et al. (2021) and compute the $R^2$ coefficient of determination between $z_c$ and $\hat z_c$, where $R^2 = 1$ suggests that there is a one-to-one mapping between the two. As iVAE and β-VAE do not specify the invariant and the changing subspaces in their estimation, at each of their evaluations we test all possible partitions of their estimated latent variables and report the one with the highest overall score (Avg.). We repeat each experiment over 3 random seeds.
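The two metrics can be computed as in the sketch below (not the paper's evaluation code): MCC pairs estimated and true changing components by maximal absolute correlation using the Hungarian algorithm, and the block-wise score fits a regressor from the estimated invariant block to the true one and reports R²; the kernel ridge regressor and its hyper-parameters are illustrative choices.

```python
# Assumes arrays z_s, zhat_s (changing part) and z_c, zhat_c (invariant part)
# of shape (num_samples, dim) computed on a held-out set.
import numpy as np
from scipy.optimize import linear_sum_assignment
from sklearn.kernel_ridge import KernelRidge
from sklearn.metrics import r2_score

def mcc(z_true, z_est):
    """Mean correlation coefficient: match estimated to true components by
    maximal absolute correlation (component-wise identifiability)."""
    d = z_true.shape[1]
    corr = np.corrcoef(z_true.T, z_est.T)[:d, d:]       # d x d cross-correlations
    row, col = linear_sum_assignment(-np.abs(corr))     # best permutation
    return np.abs(corr[row, col]).mean()

def block_r2(z_true, z_est):
    """R^2 of predicting the true invariant block from the estimated one with a
    nonlinear regressor (block-wise identifiability; cf. von Kugelgen et al., 2021)."""
    reg = KernelRidge(kernel="rbf", alpha=1.0).fit(z_est, z_true)
    return r2_score(z_true, reg.predict(z_est))

# print(mcc(z_s, zhat_s), block_r2(z_c, zhat_c))
```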
6.2. Results and Discussion

From Table 1, we can observe that both $R^2$ and MCC of our method grow with the number of domains, peaking at $d = 9$, where both the invariant and the changing part attain high identifiability scores. This confirms our theory and the intuition that a certain number of domains is necessary for identifiability. Notably, we can observe that decent identifiability can be attained with a relatively small number of domains (e.g. 5), which sheds light on why iMSDA would work even with only a moderate number of domains. We visualize the disentanglement in Figure 3.

Figure 3. The scatter plot of the true and the estimated components from our method trained with 9 domains. The y (resp. x) axis of each subplot corresponds to a specific estimated (resp. true) latent variable. We can observe that the changing components can be identified in a component-wise manner (subplots "Estimated S" and "True S"), which verifies our Theorem 4.1. The true invariant components can be (partially) identified within their own subspace (subplots "Estimated C" and "True C") while influencing the changing components minimally, adhering to Theorem 4.2.

|      | ours (d = 3) | ours (d = 5) | ours (d = 7) | ours (d = 9) |
| ---- | ---- | ---- | ---- | ---- |
| MCC  | 0.67 ± 0.12 | 0.80 ± 0.15 | 0.90 ± 0.12 | 0.91 ± 0.09 |
| R²   | 0.59 ± 0.05 | 0.74 ± 0.09 | 0.84 ± 0.05 | 0.85 ± 0.06 |
| Avg. | 0.63 ± 0.08 | 0.77 ± 0.12 | 0.87 ± 0.09 | 0.88 ± 0.08 |

Table 1. Identifiability on synthetic data: d denotes the number of domains and Avg. stands for the average of MCC and R².

7. Experiments on Real-world Data

7.1. Experimental Setup

Datasets. We evaluate our proposed iMSDA on two benchmark domain adaptation datasets, namely Office-Home (Venkateswara et al., 2017) and PACS (Li et al., 2017). Please see Appendix A.4 for a detailed description. In the experiments, we designate one domain as the target domain and use all other domains as source domains.

| Methods | Art | Cartoon | Photo | Sketch | Avg |
| ---- | ---- | ---- | ---- | ---- | ---- |
| Source Only (He et al., 2016) | 74.9 ± 0.88 | 72.1 ± 0.75 | 94.5 ± 0.58 | 64.7 ± 1.53 | 76.6 |
| DANN (Ganin et al., 2016) | 81.9 ± 1.13 | 77.5 ± 1.26 | 91.8 ± 1.21 | 74.6 ± 1.03 | 81.5 |
| MDAN (Zhao et al., 2018) | 79.1 ± 0.36 | 76.0 ± 0.73 | 91.4 ± 0.85 | 72.0 ± 0.80 | 79.6 |
| WBN (Mancini et al., 2018) | 89.9 ± 0.28 | 89.7 ± 0.56 | 97.4 ± 0.84 | 58.0 ± 1.51 | 83.8 |
| MCD (Saito et al., 2018) | 88.7 ± 1.01 | 88.9 ± 1.53 | 96.4 ± 0.42 | 73.9 ± 3.94 | 87.0 |
| M3SDA (Peng et al., 2019) | 89.3 ± 0.42 | 89.9 ± 1.00 | 97.3 ± 0.31 | 76.7 ± 2.86 | 88.3 |
| CMSS (Yang et al., 2020) | 88.6 ± 0.36 | 90.4 ± 0.80 | 96.9 ± 0.27 | 82.0 ± 0.59 | 89.5 |
| LtC-MSDA (Wang et al., 2020) | 90.19 | 90.47 | 97.23 | 81.53 | 89.8 |
| T-SVDNet (Li et al., 2021) | 90.43 | 90.61 | 98.50 | 85.49 | 91.25 |
| iMSDA (Ours) | 93.75 ± 0.32 | 92.46 ± 0.23 | 98.48 ± 0.07 | 89.22 ± 0.73 | 93.48 |

Table 2. Classification results on PACS. We employ ResNet-18 as our encoder backbone. Most baseline results are taken from (Yang et al., 2020). We choose α1 = 0.1 and α2 = 5e-5. The latent space is partitioned with ns = 4 and n = 64.

Baselines. We compare our approach with state-of-the-art methods to verify its effectiveness. We compare with Source Only and single-source domain adaptation methods: DANN (Ganin et al., 2016), MCD (Saito et al., 2018), and DANN+BSP (Chen et al., 2019). We also compare our method with existing multi-source domain adaptation methods, including
M3SDA (Peng et al., 2019), CMSS (Yang et al., 2020), LtC-MSDA (Wang et al., 2020), MIAN-γ (Park & Lee, 2021), and T-SVDNet (Li et al., 2021). Note that when using single-source adaptation methods, we pool all sources together as a single source and then apply the methods. We defer implementation details to Appendix A.4.

7.2. Results and Discussion

PACS. The results for PACS are presented in Table 2. We can observe that for the majority of the transfer directions, iMSDA outperforms the most competitive baseline by a considerable margin of 1.2%-3%. For the Photo task, where it does not, the performance is within the margin of error compared to the strongest algorithm, T-SVDNet. Notably, when compared with the recently proposed T-SVDNet (Li et al., 2021), iMSDA achieves a significant performance gain on the challenging Sketch task. To aid understanding, we visualize the learned features in Figure 4 (Appendix A.4), which shows that iMSDA can effectively align source and target domains while retaining discriminative structures.

Office-Home. The results in Table 3 demonstrate the superior performance of iMSDA on the Office-Home dataset. iMSDA achieves a margin of roughly 3% over other baselines on 3 out of 4 tasks, with an exception for Clipart. Although iMSDA loses out to MIAN-γ on the Clipart task, it is still on par with or superior to the other baselines.

| Models | Art | Clipart | Product | Realworld | Avg |
| ---- | ---- | ---- | ---- | ---- | ---- |
| Source Only (He et al., 2016) | 64.58 ± 0.68 | 52.32 ± 0.63 | 77.63 ± 0.23 | 80.70 ± 0.81 | 68.81 |
| DANN (Ganin et al., 2016) | 64.26 ± 0.59 | 58.01 ± 1.55 | 76.44 ± 0.47 | 78.80 ± 0.49 | 69.38 |
| DANN+BSP (Chen et al., 2019) | 66.10 ± 0.27 | 61.03 ± 0.39 | 78.13 ± 0.31 | 79.92 ± 0.13 | 71.29 |
| DAN (Long et al., 2015) | 68.28 ± 0.45 | 57.92 ± 0.65 | 78.45 ± 0.05 | 81.93 ± 0.35 | 71.64 |
| MCD (Saito et al., 2018) | 67.84 ± 0.38 | 59.91 ± 0.55 | 79.21 ± 0.61 | 80.93 ± 0.18 | 71.97 |
| M3SDA (Peng et al., 2019) | 66.22 ± 0.52 | 58.55 ± 0.62 | 79.45 ± 0.52 | 81.35 ± 0.19 | 71.39 |
| DCTN (Xu et al., 2018) | 66.92 ± 0.60 | 61.82 ± 0.46 | 79.20 ± 0.58 | 77.78 ± 0.59 | 71.43 |
| MIAN-γ (Park & Lee, 2021) | 69.88 ± 0.35 | 64.20 ± 0.68 | 80.87 ± 0.37 | 81.49 ± 0.24 | 74.11 |
| iMSDA (Ours) | 75.4 ± 0.86 | 61.4 ± 0.73 | 83.5 ± 0.22 | 84.47 ± 0.38 | 76.19 |

Table 3. Classification results on Office-Home. We employ ResNet-50 as our encoder backbone. Baseline results are taken from (Park & Lee, 2021). We choose α1 = 0.1 and α2 = 1e-4. The latent space is partitioned with ns = 4 and n = 128.

7.3. Ablation studies

|      | Lcls | Lcls + α1 Lent | L |
| ---- | ---- | ---- | ---- |
| P    | 97.2 ± 0.01 | 98.4 ± 0.26 | 98.48 ± 0.07 |
| A    | 81.1 ± 0.73 | 93.44 ± 0.31 | 93.75 ± 0.32 |
| C    | 79.2 ± 0.17 | 89.03 ± 2.4 | 92.46 ± 0.23 |
| S    | 70.9 ± 0.69 | 87.29 ± 0.8 | 89.22 ± 0.73 |
| Avg. | 82.11 | 92.01 | 93.48 |

Table 4. Ablation study on the PACS dataset. We study the impact of individual loss terms. Except for the ablated terms, the hyper-parameters are identical to those in Table 2.

|      | Lcls | Lcls + α1 Lent | L |
| ---- | ---- | ---- | ---- |
| Ar   | 64.57 ± 0.04 | 74.23 ± 0.66 | 75.4 ± 0.86 |
| Pr   | 77.77 ± 0.03 | 82.83 ± 0.12 | 83.5 ± 0.22 |
| Cl   | 47.17 ± 0.20 | 61.8 ± 0.1 | 61.4 ± 0.73 |
| Rw   | 79.03 ± 0.08 | 84.2 ± 0.14 | 84.47 ± 0.38 |
| Avg. | 67.2 | 75.77 | 76.19 |

Table 5. Ablation study on the Office-Home dataset. We study the impact of individual loss terms. Except for the ablated terms, the hyper-parameters are identical to those in Table 3.

|     | ns = 8 | ns = 4 | ns = 2 |
| --- | ---- | ---- | ---- |
| P   | 98.40 ± 0.1 | 98.48 ± 0.07 | 98.34 ± 0.07 |
| A   | 92.15 ± 1.74 | 93.75 ± 0.32 | 93.86 ± 0.46 |
| C   | 91.48 ± 1.54 | 92.46 ± 0.23 | 92.18 ± 0.34 |
| S   | 86.15 ± 5.2 | 89.22 ± 0.73 | 86.25 ± 3.05 |
| Avg | 92.05 ± 2.14 | 93.48 ± 0.34 | 92.66 ± 0.98 |

Table 6. The impact of the changing dimension ns: we test several values of ns on the PACS dataset. The other configurations are identical to those in Table 2.

The impact of individual loss terms.
Table 4 and Table 5 contain the ablation results for individual loss terms. We can observe that Lent greatly boosts the model performance in both cases (around 10% and 8.5%). This shows the directly enforcing the relation between (ˆzc, ˆ zs) and y is an effective solution, if the change is insignificant over domains. The inclusion of the alignment consistently benefits the model performance in each case, which justifies our theoretical insights and design choice. The impact of the changing part dimension. Table 6 includes results on various numbers of changing components ns. We can observe the model performance degrades at both large ns (i.e. 8) and small ns (i.e. 2) choices. This is coherent with our hypothesis: a large ns compromises the minimal change principle, leading to the undesirable consequences as discussed in Section 3; on the contrary, an overly small ns could prevent the model from handling the domain change, for lack of changing capacity, which also yields suboptimal performance. 8. Conclusion It is not uncommon to assume observations of the real-world are generated from high-level latent variables and thus the ill-posedness in the problem of UDA can be reduced to obtaining meaningful reconstructions of the those latent variables and mapping distinct domains to a shared space for classification. In this work, we show that under reasonable assumptions on the data generating process, as well as leveraging the principle of minimality, we can obtain partial identifiability of the changing and invariant parts of the generating process. In particular, by introducing an high-level invariant latent variable that influences the changing variable and the corresponding label across domains, we show identifiabil- ity of the joint distribution px,y|u T for the target domain u T with a classifier trained on source domain labels. Our proposed VAE combined with a flow model architecture learns disentangled representations that allows us perform multi-source UDA with state-of-the-art results across various benchmarks. Acknowledgment We thank the anonymous reviewers for their constructive comments. This work was supported in part by the NSF-Convergence Accelerator Track-D award #2134901 and by the National Institutes of Health (NIH) under Contract R01HL159805, and by a grant from Apple. The NSF or NIH is not responsible for the views reported in this paper. Bell, A. J. and Sejnowski, T. J. An informationmaximization approach to blind separation and blind deconvolution. Neural computation, 7(6):1129 1159, 1995. Ben-David, S., Blitzer, J., Crammer, K., Kulesza, A., Pereira, F., and Vaughan, J. W. A theory of learning from different domains. Machine learning, 79(1-2):151 175, 2010. Berthelot, D., Roelofs, R., Sohn, K., Carlini, N., and Kurakin, A. Adamatch: A unified approach to semisupervised learning and domain adaptation. In International Conference on Learning Representations, 2022. URL https://openreview.net/forum? id=Q5uh1Nvv5dm. Bousmalis, K., Trigeorgis, G., Silberman, N., Krishnan, D., and Erhan, D. Domain separation networks. In Advances in Neural Information Processing Systems, pp. 343 351, 2016. Cai, R., Li, Z., Wei, P., Qiao, J., Zhang, K., and Hao, Z. Learning disentangled semantic representation for domain adaptation. In IJCAI: proceedings of the conference, volume 2019, pp. 2060. NIH Public Access, 2019. Casey, M. A. and Westner, A. Separation of mixed audio sources by independent subspace analysis. In ICMC, pp. 154 161, 2000. Chen, X., Wang, S., Long, M., and Wang, J. 
Transferability vs. discriminability: Batch spectral penalization for adversarial domain adaptation. In International conference on machine learning, pp. 1081 1090. PMLR, 2019. Comon, P. Independent component analysis, a new concept? Signal processing, 36(3):287 314, 1994. Partial Identifiability for Domain Adaptation Cortes, C., Mansour, Y., and Mohri, M. Learning bounds for importance weighting. In NIPS 23, 2010. Courty, N., Flamary, R., Habrard, A., and Rakotomamonjy, A. Joint distribution optimal transportation for domain adaptation. In NIPS, 2017. Deng, Z., Luo, Y., and Zhu, J. Cluster alignment with a teacher for unsupervised domain adaptation. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pp. 9944 9953, 2019. Dinh, L., Sohl-Dickstein, J., and Bengio, S. Density estimation using real nvp, 2017. Durkan, C., Bekasov, A., Murray, I., and Papamakarios, G. Neural spline flows, 2019. Eastwood, C., Mason, I., Williams, C., and Scholkopf, B. Source-free adaptation to measurement shift via bottom-up feature restoration. In International Conference on Learning Representations, 2022. URL https: //openreview.net/forum?id=1JDi K_Tb V4S. Ganin, Y. and Lempitsky, V. Unsupervised domain adaptation by backpropagation. ar Xiv preprint ar Xiv:1409.7495, 2014. Ganin, Y., Ustinova, E., Ajakan, H., Germain, P., Larochelle, H., Laviolette, F., Marchand, M., and Lempitsky, V. Domain-adversarial training of neural networks. Journal of Machine Learning Research, 17(1):2096 2030, 2016. Gong, M., Zhang, K., Liu, T., Tao, D., Glymour, C., and Sch olkopf, B. Domain adaptation with conditional transferable components. In International conference on machine learning, pp. 2839 2848. PMLR, 2016. Guo, J., Gong, M., Liu, T., Zhang, K., and Tao, D. Ltf: A label transformation framework for correcting label shift. In International Conference on Machine Learning, pp. 3843 3853. PMLR, 2020. H alv a, H. and Hyvarinen, A. Hidden markov nonlinear ica: Unsupervised learning from nonstationary time series. In Conference on Uncertainty in Artificial Intelligence, pp. 939 948. PMLR, 2020. He, K., Zhang, X., Ren, S., and Sun, J. Deep residual learning for image recognition. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 770 778, 2016. Higgins, I., Matthey, L., Pal, A., Burgess, C., Glorot, X., Botvinick, M., Mohamed, S., and Lerchner, A. betavae: Learning basic visual concepts with a constrained variational framework. 2016. Huang, B., Zhang, K., Zhang, J., Ramsey, J., Sanchez Romero, R., Glymour, C., and Sch olkopf, B. Causal discovery from heterogeneous/nonstationary data. Journal of Machine Learning Research, 2020. Huang, C.-W., Krueger, D., Lacoste, A., and Courville, A. Neural autoregressive flows, 2018. Huang, J., Smola, A., Gretton, A., Borgwardt, K., and Sch olkopf, B. Correcting sample selection bias by unlabeled data. In NIPS 19, pp. 601 608, 2007. Hyv arinen, A. and Hoyer, P. Emergence of phase-and shiftinvariant features by decomposition of natural images into independent feature subspaces. Neural computation, 12(7):1705 1720, 2000. Hyvarinen, A. and Morioka, H. Unsupervised feature extraction by time-contrastive learning and nonlinear ica, 2016. Hyv arinen, A. and Pajunen, P. Nonlinear independent component analysis: Existence and uniqueness results. Neural networks, 12(3):429 439, 1999. Hyv arinen, A., Karhunen, J., and Oja, E. Independent Component Analysis. John Wiley & Sons, Inc, 2001. Hyvarinen, A., Sasaki, H., and Turner, R. 
Nonlinear ica using auxiliary variables and generalized contrastive learning. In The 22nd International Conference on Artificial Intelligence and Statistics, pp. 859 868. PMLR, 2019. Khemakhem, I., Kingma, D., Monti, R., and Hyvarinen, A. Variational autoencoders and nonlinear ica: A unifying framework. In International Conference on Artificial Intelligence and Statistics, pp. 2207 2217. PMLR, 2020a. Khemakhem, I., Monti, R., Kingma, D., and Hyvarinen, A. Ice-beem: Identifiable conditional energy-based deep models based on nonlinear ica. Advances in Neural Information Processing Systems, 33:12768 12778, 2020b. Kingma, D. P. and Welling, M. Auto-encoding variational bayes, 2014. Kirchmeyer, M., Rakotomamonjy, A., de Bezenac, E., and patrick gallinari. Mapping conditional distributions for domain adaptation under generalized target shift. In International Conference on Learning Representations, 2022. URL https://openreview.net/forum? id=s Pf B2PI87BZ. Klindt, D., Schott, L., Sharma, Y., Ustyuzhaninov, I., Brendel, W., Bethge, M., and Paiton, D. Towards nonlinear disentanglement in natural data with temporal sparse coding. ar Xiv preprint ar Xiv:2007.10930, 2020. Partial Identifiability for Domain Adaptation Lachapelle, S., L opez, P. R., Sharma, Y., Everett, K., Priol, R. L., Lacoste, A., and Lacoste-Julien, S. Disentanglement via mechanism sparsity regularization: A new principle for nonlinear ica. ar Xiv preprint ar Xiv:2107.10098, 2021. Le, Q. V., Zou, W. Y., Yeung, S. Y., and Ng, A. Y. Learning hierarchical invariant spatio-temporal features for action recognition with independent subspace analysis. In CVPR 2011, pp. 3361 3368. IEEE, 2011. Li, D., Yang, Y., Song, Y.-Z., and Hospedales, T. M. Deeper, broader and artier domain generalization. In Proceedings of the IEEE international conference on computer vision, pp. 5542 5550, 2017. Li, R., Jia, X., He, J., Chen, S., and Hu, Q. T-svdnet: Exploring high-order prototypical correlations for multi-source domain adaptation. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pp. 9991 10000, 2021. Long, M., Cao, Y., Wang, J., and Jordan, M. Learning transferable features with deep adaptation networks. In International conference on machine learning, pp. 97 105. PMLR, 2015. Long, M., Cao, Z., Wang, J., and Jordan, M. I. Conditional adversarial domain adaptation. ar Xiv preprint ar Xiv:1705.10667, 2017a. Long, M., Zhu, H., Wang, J., and Jordan, M. I. Deep transfer learning with joint adaptation networks. In Proceedings of the 34th International Conference on Machine Learning Volume 70, pp. 2208 2217. JMLR. org, 2017b. Lu, C., Wu, Y., Hern andez-Lobato, J. M., and Sch olkopf, B. Invariant causal representation learning. 2020. Mancini, M., Porzi, L., Bulo, S. R., Caputo, B., and Ricci, E. Boosting domain adaptation by discovering latent domains. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 3771 3780, 2018. Mao, H., Du, L., Zheng, Y., Fu, Q., Li, Z., Chen, X., Shi, H., and Zhang, D. Source free unsupervised graph domain adaptation. ar Xiv preprint ar Xiv:2112.00955, 2021. Park, G. Y. and Lee, S. W. Information-theoretic regularization for multi-source domain adaptation. ar Xiv preprint ar Xiv:2104.01568, 2021. Peng, X., Bai, Q., Xia, X., Huang, Z., Saenko, K., and Wang, B. Moment matching for multi-source domain adaptation. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pp. 1406 1415, 2019. Saito, K., Watanabe, K., Ushiku, Y., and Harada, T. 
Maximum classifier discrepancy for unsupervised domain adaptation. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 3723 3732, 2018. Saito, K., Kim, D., Sclaroff, S., and Saenko, K. Universal domain adaptation through self supervision. ar Xiv preprint ar Xiv:2002.07953, 2020. Sch olkopf, B., Janzing, D., Peters, J., Sgouritsa, E., Zhang, K., and Mooij, J. On causal and anticausal learning. In ICML-12, Edinburgh, Scotland, 2012. Shimodaira, H. Improving predictive inference under covariate shift by weighting the log-likelihood function. Journal of Statistical Planning and Inference, 90:227 244, 2000. Shu, R., Bui, H. H., Narui, H., and Ermon, S. A dirt-t approach to unsupervised domain adaptation. In Proc. 6th International Conference on Learning Representations, 2018. Sorrenson, P., Rother, C., and K othe, U. Disentanglement by nonlinear ica with general incompressible-flow networks (gin). ar Xiv preprint ar Xiv:2001.04872, 2020. Stojanov, P., Gong, M., Carbonell, J. G., and Zhang, K. Data-driven approach to multiple-source domain adaptation. Proceedings of machine learning research, 89:3487, 2019. Stojanov, P., Li, Z., Gong, M., Cai, R., Carbonell, J., and Zhang, K. Domain adaptation with invariant representation learning: What transformations to learn? Advances in Neural Information Processing Systems, 34, 2021. Sugiyama, M., Suzuki, T., Nakajima, S., Kashima, H., von B unau, P., and Kawanabe, M. Direct importance estimation for covariate shift adaptation. Annals of the Institute of Statistical Mathematics, 60:699 746, 2008. Theis, F. Towards a general independent subspace analysis. Advances in Neural Information Processing Systems, 19: 1361 1368, 2006. Tong, S., Garipov, T., Zhang, Y., Chang, S., and Jaakkola, T. S. Adversarial support alignment. In International Conference on Learning Representations, 2022. URL https://openreview.net/forum? id=26g Kg6x-ie. Tzeng, E., Hoffman, J., Saenko, K., and Darrell, T. Adversarial discriminative domain adaptation. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 7167 7176, 2017. Partial Identifiability for Domain Adaptation Venkateswara, H., Eusebio, J., Chakraborty, S., and Panchanathan, S. Deep hashing network for unsupervised domain adaptation. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 5018 5027, 2017. von K ugelgen, J., Sharma, Y., Gresele, L., Brendel, W., Sch olkopf, B., Besserve, M., and Locatello, F. Self-supervised learning with data augmentations provably isolates content from style. ar Xiv preprint ar Xiv:2106.04619, 2021. Wang, H., Xu, M., Ni, B., and Zhang, W. Learning to combine: Knowledge aggregation for multi-source domain adaptation. In European Conference on Computer Vision, pp. 727 744. Springer, 2020. Wang, J., Chen, Y., Yu, H., Huang, M., and Yang, Q. Easy transfer learning by exploiting intra-domain structures. In 2019 IEEE International Conference on Multimedia and Expo (ICME), pp. 1210 1215. IEEE, 2019. Wu, Y., Winston, E., Kaushik, D., and Lipton, Z. Domain adaptation with asymmetrically-relaxed distribution alignment. In International Conference on Machine Learning, pp. 6872 6881. PMLR, 2019. Wu, Z., Nitzan, Y., Shechtman, E., and Lischinski, D. Stylealign: Analysis and applications of aligned style GAN models. In International Conference on Learning Representations, 2022. URL https://openreview. net/forum?id=Qg2vi4Zb HM9. Xie, S., Zheng, Z., Chen, L., and Chen, C. 
Learning semantic representations for unsupervised domain adaptation. In International conference on machine learning, pp. 5423 5432. PMLR, 2018. Xu, R., Chen, Z., Zuo, W., Yan, J., and Lin, L. Deep cocktail network: Multi-source unsupervised domain adaptation with category shift. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 3964 3973, 2018. Xu, R., Li, G., Yang, J., and Lin, L. Larger norm more transferable: An adaptive feature norm approach for unsupervised domain adaptation. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pp. 1426 1435, 2019. Xu, T., Chen, W., WANG, P., Wang, F., Li, H., and Jin, R. CDTrans: Cross-domain transformer for unsupervised domain adaptation. In International Conference on Learning Representations, 2022. URL https: //openreview.net/forum?id=XGzk5OKWFFc. Yang, L., Balaji, Y., Lim, S.-N., and Shrivastava, A. Curriculum manager for source selection in multi-source domain adaptation. In Computer Vision ECCV 2020: 16th European Conference, Glasgow, UK, August 23 28, 2020, Proceedings, Part XIV 16, pp. 608 624. Springer, 2020. Zadrozny, B. Learning and evaluating classifiers under sample selection bias. In ICML-04, pp. 114 121, Banff, Canada, 2004. Zhang, J., Li, W., and Ogunbona, P. Joint geometrical and statistical alignment for visual domain adaptation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 1859 1867, 2017a. Zhang, K., Sch olkopf, B., Muandet, K., and Wang, Z. Domain adaptation under target and conditional shift. In International Conference on Machine Learning, pp. 819 827, 2013. Zhang, K., Gong, M., and Sch olkopf, B. Multi-source domain adaptation: A causal view. In Twenty-Ninth AAAI Conference on Artificial Intelligence, 2015. Zhang, K., Huang, B., Zhang, J., Glymour, C., and Sch olkopf, B. Causal discovery from nonstationary/heterogeneous data: Skeleton estimation and orientation determination. In IJCAI, volume 2017, pp. 1347, 2017b. Zhang, K., Gong, M., Stojanov, P., Huang, B., LIU, Q., and Glymour, C. Domain adaptation as a problem of inference on graphical models. Advances in Neural Information Processing Systems, 33, 2020. Zhang, W., Ouyang, W., Li, W., and Xu, D. Collaborative and adversarial network for unsupervised domain adaptation. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 3801 3809, 2018. Zhang, Y., Tang, H., Jia, K., and Tan, M. Domain-symmetric networks for adversarial domain adaptation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 5031 5040, 2019. Zhao, H., Zhang, S., Wu, G., Moura, J. M., Costeira, J. P., and Gordon, G. J. Adversarial multiple source domain adaptation. In Advances in neural information processing systems, pp. 8559 8570, 2018. Zhu, P., Abdal, R., Femiani, J., and Wonka, P. Mind the gap: Domain gap control for single shot domain adaptation for generative adversarial networks. In International Conference on Learning Representations, 2022. URL https: //openreview.net/forum?id=vq Gi8Kp0w M. Partial Identifiability for Domain Adaptation A. Appendix A.1. Proof of the changing part identifiability We present the proof of the changing part identifiability (Theorem 4.1) in this section. Proof. 
We start from the matched marginal distribution condition (Equation 2) to develop the relation between $z$ and $\hat z$ as follows:

$$\forall u \in \mathcal{U}:\quad p_{\hat x|u} = p_{x|u} \;\Longrightarrow\; p_{\hat g(\hat z)|u} = p_{g(z)|u} \;\Longrightarrow\; p_{(g^{-1}\circ\hat g)(\hat z)|u}\,|J_{g^{-1}}| = p_{z|u}\,|J_{g^{-1}}| \;\Longrightarrow\; p_{h(\hat z)|u} = p_{z|u}, \tag{12}$$

where $\hat g: \mathcal{Z} \to \mathcal{X}$ denotes the estimated invertible generating function, and $h := g^{-1} \circ \hat g$ is the transformation between the true latent variable and the estimated one. $|J_{g^{-1}}|$ stands for the absolute value of the Jacobian matrix determinant of $g^{-1}$. Note that, as both $\hat g$ and $g$ are invertible, $|J_{g^{-1}}| \neq 0$ and $h$ is invertible.

According to the independence relations in the data generating process in Equation 1 and Assumption A2, we have

$$p_{z|u}(z|u) = \prod_{i=1}^{n} p_{z_i|u}(z_i|u), \qquad p_{\hat z|u}(\hat z|u) = \prod_{i=1}^{n} p_{\hat z_i|u}(\hat z_i|u).$$

Rewriting with the notation $q_i := \log p_{z_i|u}$ and $\hat q_i := \log p_{\hat z_i|u}$ yields

$$\log p_{z|u}(z|u) = \sum_{i=1}^{n} q_i(z_i, u), \qquad \log p_{\hat z|u}(\hat z|u) = \sum_{i=1}^{n} \hat q_i(\hat z_i, u).$$

Applying the change of variables formula to Equation 12 yields

$$p_{z|u} = p_{\hat z|u}\,|J_{h^{-1}}| \;\Longrightarrow\; \sum_{i=1}^{n} q_i(z_i, u) + \log|J_h| = \sum_{i=1}^{n} \hat q_i(\hat z_i, u), \tag{13}$$

where $J_{h^{-1}}$ and $J_h$ are the Jacobian matrices of the transformations $h^{-1}$ and $h$ respectively. For ease of exposition, we adopt the following notation:

$$h'_{i,(k)} := \frac{\partial z_i}{\partial \hat z_k}, \quad h''_{i,(k,q)} := \frac{\partial^2 z_i}{\partial \hat z_k\,\partial \hat z_q}; \qquad \eta'_i(z_i, u) := \frac{\partial q_i(z_i, u)}{\partial z_i}, \quad \eta''_i(z_i, u) := \frac{\partial^2 q_i(z_i, u)}{\partial z_i^2}.$$

Differentiating both sides of Equation 13 twice, w.r.t. $\hat z_k$ and $\hat z_q$ with $k, q \in [n]$ and $k \neq q$, yields

$$\sum_{i=1}^{n} \Big( \eta''_i(z_i, u)\, h'_{i,(k)} h'_{i,(q)} + \eta'_i(z_i, u)\, h''_{i,(k,q)} \Big) + \frac{\partial^2 \log|J_h|}{\partial \hat z_k\,\partial \hat z_q} = 0. \tag{14}$$

Therefore, for $u = u_0, \ldots, u_{2n_s}$, we have $2n_s + 1$ such equations. Subtracting each of the equations corresponding to $u_1, \ldots, u_{2n_s}$ from the equation corresponding to $u_0$ results in $2n_s$ equations:

$$\sum_{i=n_c+1}^{n} \Big( \big(\eta''_i(z_i, u_j) - \eta''_i(z_i, u_0)\big)\, h'_{i,(k)} h'_{i,(q)} + \big(\eta'_i(z_i, u_j) - \eta'_i(z_i, u_0)\big)\, h''_{i,(k,q)} \Big) = 0, \tag{15}$$

where $j = 1, \ldots, 2n_s$. Note that the content distribution is invariant to domains, i.e., $p_{z_c} = p_{z_c|u}$. Thus we have $\eta''_i(z_i, u_j) = \eta''_i(z_i, u_{j'})$ and $\eta'_i(z_i, u_j) = \eta'_i(z_i, u_{j'})$ for all $j, j'$ and all content indices $i \le n_c$; moreover, the $\log|J_h|$ term cancels in the subtraction, since $h$ does not depend on $u$. Hence only the style components $i = n_c+1, \ldots, n$ remain in the summation of each equation.

Under the linear independence condition in Assumption A3, the linear system is a $2n_s \times 2n_s$ full-rank system. Therefore, the only solution is $h'_{i,(k)} h'_{i,(q)} = 0$ and $h''_{i,(k,q)} = 0$ for $i = n_c+1, \ldots, n$ and $k, q \in [n]$, $k \neq q$.

As $h(\cdot)$ is smooth over $\mathcal{Z}$, its Jacobian can be written as

$$J_h = \begin{bmatrix} A := \dfrac{\partial z_c}{\partial \hat z_c} & B := \dfrac{\partial z_c}{\partial \hat z_s} \\[6pt] C := \dfrac{\partial z_s}{\partial \hat z_c} & D := \dfrac{\partial z_s}{\partial \hat z_s} \end{bmatrix}. \tag{16}$$

Note that $h'_{i,(k)} h'_{i,(q)} = 0$ implies that, for each $i = n_c+1, \ldots, n$, $h'_{i,(k)} \neq 0$ for at most one element $k \in [n]$. Therefore, there is at most one non-zero entry in each row indexed by $i = n_c+1, \ldots, n$ of the Jacobian matrix $J_h$. Further, the invertibility of $h(\cdot)$ necessitates $J_h$ to be full-rank, which implies that there is exactly one non-zero component in each row of the block $[C\; D]$. Since for every $i \in \{n_c+1, \ldots, n\}$, $z_i$ has a changing distribution over $u$, while all $\hat z_k$ with $k \in \{1, \ldots, n_c\}$ (i.e. $\hat z_c$) have invariant distributions over $u$, we can deduce that $C = 0$ and that the only non-zero entry $\frac{\partial z_i}{\partial \hat z_k}$ must reside in $D$, with $k \in \{n_c+1, \ldots, n\}$. Therefore, for each true changing component $z_i$, $i \in \{n_c+1, \ldots, n\}$, there exists one estimated changing component $\hat z_k$, $k \in \{n_c+1, \ldots, n\}$, such that $z_i = h_i(\hat z_k)$. Further, because $J_h$ is of full rank ($h(\cdot)$ being invertible) and $C$ is a zero matrix, $D$ must be of full rank, which implies that $h_i(\cdot)$ is invertible for each $i \in \{n_c+1, \ldots, n\}$. Thus, the changing components $z_s$ are identified up to permutation and component-wise invertible transformations.

A.2.
A.2. Proof of the invariant part identifiability

In this section, we present the proof of the invariant part identifiability (Theorem 4.2).

Proof. We divide our proof into four steps to aid understanding.

In Step 1, we leverage properties of the generating process (Equation 1) and the marginal distribution matching condition (Equation 2) to express the marginal invariance through the indeterminacy transformation $\tilde{h}: \mathcal{Z} \to \mathcal{Z}$ between the estimated and the true latent variables. The introduction of $\tilde{h}(\cdot)$ allows us to formalize the block-identifiability condition. In Step 2 and Step 3, we show that the estimated content variable $\hat{z}_c$ does not depend on the true style variable $z_s$, that is, $\tilde{h}_c(z)$ does not depend on the input $z_s$. To this end, in Step 2 we derive equivalent statements which ease the rest of the proof and avert technical issues (e.g., sets of zero probability measure). In Step 3, we prove the equivalent statement by contradiction: we show that if $\hat{z}_c$ depended on $z_s$, the invariance derived in Step 1 would break. In Step 4, we use the conclusion of Step 3, the smoothness and bijectivity of $\tilde{h}(\cdot)$, and the conclusion of Theorem 4.1 to show the invertibility of the indeterminacy function between the content variables, i.e., that the mapping $\hat{z}_c = \tilde{h}_c(z_c)$ is invertible.

Step 1. As the data generating process of the learned model $(\hat{g}, p_{\hat{z}_c}, p_{\hat{z}_s|u})$ (Equation 1) establishes the independence between $\hat{z}_c \sim p_{\hat{z}_c}$ and $u$ (note that in this proof we decorate quantities with a hat to indicate that they are associated with the estimated model $(\hat{g}, p_{\hat{z}_c}, p_{\hat{z}_s|u})$), it follows that for any $A_{z_c} \subseteq \mathcal{Z}_c$,
$$P\big[\{\hat{g}^{-1}_{1:n_c}(\hat{x}) \in A_{z_c}\} \mid \{u = u_1\}\big] = P\big[\{\hat{g}^{-1}_{1:n_c}(\hat{x}) \in A_{z_c}\} \mid \{u = u_2\}\big], \quad \forall u_1, u_2 \in \mathcal{U}$$
$$\Longrightarrow\; P\big[\{\hat{x} \in (\hat{g}^{-1}_{1:n_c})^{-1}(A_{z_c})\} \mid \{u = u_1\}\big] = P\big[\{\hat{x} \in (\hat{g}^{-1}_{1:n_c})^{-1}(A_{z_c})\} \mid \{u = u_2\}\big], \quad \forall u_1, u_2 \in \mathcal{U}, \quad (17)$$
where $\hat{g}^{-1}_{1:n_c}: \mathcal{X} \to \mathcal{Z}_c$ denotes the estimated transformation from the observation to the content variable, and $(\hat{g}^{-1}_{1:n_c})^{-1}(A_{z_c}) \subseteq \mathcal{X}$ is the pre-image set of $A_{z_c}$, that is, the set of estimated observations $\hat{x}$ originating from content variables $\hat{z}_c$ in $A_{z_c}$.

Because of the matching observation distributions between the estimated model and the true model in Equation 2, the relation in Equation 17 can be extended to observations $x$ from the true generating process $(g, p_{z_c}, p_{z_s|u})$, i.e.,
$$P\big[\{x \in (\hat{g}^{-1}_{1:n_c})^{-1}(A_{z_c})\} \mid \{u = u_1\}\big] = P\big[\{x \in (\hat{g}^{-1}_{1:n_c})^{-1}(A_{z_c})\} \mid \{u = u_2\}\big]$$
$$\Longrightarrow\; P\big[\{\hat{g}^{-1}_{1:n_c}(x) \in A_{z_c}\} \mid \{u = u_1\}\big] = P\big[\{\hat{g}^{-1}_{1:n_c}(x) \in A_{z_c}\} \mid \{u = u_2\}\big]. \quad (18)$$

Since $g$ and $\hat{g}$ are smooth and injective, there exists a smooth and injective $\tilde{h} := \hat{g}^{-1} \circ g: \mathcal{Z} \to \mathcal{Z}$. We note that by definition $\tilde{h} = h^{-1}$, where $h$ is introduced in the proof of Theorem 4.1 (Appendix A.1). Expressing $\hat{g}^{-1} = \tilde{h} \circ g^{-1}$ and writing $\tilde{h}_c(\cdot) := \tilde{h}_{1:n_c}(\cdot): \mathcal{Z} \to \mathcal{Z}_c$ in Equation 18 yields
$$P\big[\{\tilde{h}_c(z) \in A_{z_c}\} \mid \{u = u_1\}\big] = P\big[\{\tilde{h}_c(z) \in A_{z_c}\} \mid \{u = u_2\}\big]$$
$$\Longrightarrow\; P\big[\{z \in \tilde{h}_c^{-1}(A_{z_c})\} \mid \{u = u_1\}\big] = P\big[\{z \in \tilde{h}_c^{-1}(A_{z_c})\} \mid \{u = u_2\}\big]$$
$$\Longrightarrow\; \int_{z \in \tilde{h}_c^{-1}(A_{z_c})} p_{z|u}(z|u_1)\, dz = \int_{z \in \tilde{h}_c^{-1}(A_{z_c})} p_{z|u}(z|u_2)\, dz, \quad (19)$$
where $\tilde{h}_c^{-1}(A_{z_c}) = \{z \in \mathcal{Z} : \tilde{h}_c(z) \in A_{z_c}\}$ is the pre-image of $A_{z_c}$, i.e., those latent variables whose content components fall in $A_{z_c}$ after the indeterminacy transformation $\tilde{h}$.

Based on the generating process in Equation 1, we can re-write Equation 19 as follows: for all $u_1, u_2 \in \mathcal{U}$ and all $A_{z_c} \subseteq \mathcal{Z}_c$,
$$\int_{[z_c^\top, z_s^\top]^\top \in \tilde{h}_c^{-1}(A_{z_c})} p_{z_c}(z_c)\,\big(p_{z_s|u}(z_s|u_1) - p_{z_s|u}(z_s|u_2)\big)\, dz_s\, dz_c = 0. \quad (20)$$

Step 2. In order to show the block-identifiability of $z_c$, we would like to prove that $\hat{z}_c := \tilde{h}_c([z_c^\top, z_s^\top]^\top)$ does not depend on $z_s$. To this end, we first develop an equivalent statement (i.e., Statement 3 below) and prove that statement in a later step instead.
By doing so, we are able to leverage the fully supported density assumption to avert technical issues.

Statement 1: $\tilde{h}_c([z_c^\top, z_s^\top]^\top)$ does not depend on $z_s$.

Statement 2: $\forall z_c \in \mathcal{Z}_c$, it holds that $\tilde{h}_c^{-1}(z_c) = B_{z_c} \times \mathcal{Z}_s$, where $B_{z_c} \neq \emptyset$ and $B_{z_c} \subseteq \mathcal{Z}_c$.

Statement 3: $\forall z_c \in \mathcal{Z}_c$ and $\forall r \in \mathbb{R}^+$, it holds that $\tilde{h}_c^{-1}(B_r(z_c)) = B^+_{z_c} \times \mathcal{Z}_s$, where $B_r(z_c) := \{z'_c \in \mathcal{Z}_c : \|z'_c - z_c\|_2 < r\}$, $B^+_{z_c} \neq \emptyset$, and $B^+_{z_c} \subseteq \mathcal{Z}_c$.

Statement 2 is a mathematical formulation of Statement 1. Statement 3 generalizes the singletons $z_c$ in Statement 2 to open, non-empty balls $B_r(z_c)$. Later, we use Statement 3 in Step 3 to derive the contradiction with Equation 20. Leveraging the continuity of $\tilde{h}_c(\cdot)$, we can show the equivalence between Statement 2 and Statement 3 as follows.

We first show that Statement 2 implies Statement 3. For any $z_c \in \mathcal{Z}_c$ and $r \in \mathbb{R}^+$, we have $\tilde{h}_c^{-1}(B_r(z_c)) = \bigcup_{z'_c \in B_r(z_c)} \tilde{h}_c^{-1}(z'_c)$. Statement 2 indicates that every set participating in the union satisfies $\tilde{h}_c^{-1}(z'_c) = B_{z'_c} \times \mathcal{Z}_s$; thus the union $\tilde{h}_c^{-1}(B_r(z_c))$ also satisfies this property, which is Statement 3.

Then, we show that Statement 3 implies Statement 2 by contradiction. Suppose that Statement 2 is false. Then there exists $\hat{z}_c \in \mathcal{Z}_c$ such that there exist $\hat{z}^B_c \in \{z_{1:n_c} : z \in \tilde{h}_c^{-1}(\hat{z}_c)\}$ and $\hat{z}^B_s \in \mathcal{Z}_s$ resulting in $\tilde{h}_c(\hat{z}^B) \neq \hat{z}_c$, where $\hat{z}^B = [(\hat{z}^B_c)^\top, (\hat{z}^B_s)^\top]^\top$. As $\tilde{h}_c(\hat{z}^B) \neq \hat{z}_c$, there exists $\hat{r} \in \mathbb{R}^+$ such that $\tilde{h}_c(\hat{z}^B) \notin B_{\hat{r}}(\hat{z}_c)$, that is, $\hat{z}^B \notin \tilde{h}_c^{-1}(B_{\hat{r}}(\hat{z}_c))$. Also, Statement 3 states that $\tilde{h}_c^{-1}(B_{\hat{r}}(\hat{z}_c)) = \hat{B}_{z_c} \times \mathcal{Z}_s$. By the definition of $\hat{z}^B$, it is clear that $\hat{z}^B_{1:n_c} \in \hat{B}_{z_c}$, and therefore $\hat{z}^B \in \hat{B}_{z_c} \times \mathcal{Z}_s = \tilde{h}_c^{-1}(B_{\hat{r}}(\hat{z}_c))$. The fact that $\hat{z}^B \notin \tilde{h}_c^{-1}(B_{\hat{r}}(\hat{z}_c))$ thus contradicts Statement 3. Therefore, Statement 2 is true under the premise of Statement 3, i.e., Statement 3 implies Statement 2. In summary, Statement 2 and Statement 3 are equivalent, and therefore proving Statement 3 suffices to show Statement 1.
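Before proceeding to Step 3, the following informal numerical sketch previews its intuition: if the estimated content map leaked the changing style variable, the distribution of the extracted content would no longer match across domains, which is exactly the invariance that Step 3 exploits. Everything in the snippet (dimensions, distributions, the two candidate maps) is hypothetical and it is not part of the proof.

```python
# Informal illustration of the Step 3 intuition; all quantities are placeholders.
import torch

torch.manual_seed(0)
n = 100_000
z_c = torch.randn(n)                       # invariant content component
z_s_dom1 = torch.randn(n) * 0.5            # style in domain 1
z_s_dom2 = torch.randn(n) * 2.0 + 1.0      # style in domain 2 (shifted and scaled)

content_only = lambda zc, zs: torch.tanh(zc)               # depends on content only
style_leaking = lambda zc, zs: torch.tanh(zc) + 0.3 * zs   # leaks the changing style

for name, f in [("content-only map", content_only), ("style-leaking map", style_leaking)]:
    m1 = f(z_c, z_s_dom1).mean().item()
    m2 = f(z_c, z_s_dom2).mean().item()
    print(f"{name}: domain means {m1:.3f} vs {m2:.3f}")
# The content-only map produces matching statistics across domains, whereas the
# style-leaking map does not, mirroring the contradiction argument in Step 3.
```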
Step 3. In this step, we prove Statement 3 by contradiction. Intuitively, we show that if $\tilde{h}_c(\cdot)$ depended on $z_s$, the pre-image $\tilde{h}_c^{-1}(B_r(z_c))$ could be partitioned into two parts (i.e., $B^*_z$ and $\tilde{h}_c^{-1}(A^*_{z_c}) \setminus B^*_z$, defined below). The dependence of $\tilde{h}_c(\cdot)$ on $z_s$ is captured by $B^*_z$, which would not emerge otherwise; in contrast, $\tilde{h}_c^{-1}(A^*_{z_c}) \setminus B^*_z$ also exists when $\tilde{h}_c(\cdot)$ does not depend on $z_s$. We evaluate the invariance relation in Equation 20 and show that the integral over $\tilde{h}_c^{-1}(A^*_{z_c}) \setminus B^*_z$ (i.e., $T_1$) is always 0, whereas the integral over $B^*_z$ (i.e., $T_2$) is necessarily non-zero, which leads to a contradiction with Equation 20 and thus shows that $\tilde{h}_c(\cdot)$ cannot depend on $z_s$.

First, note that because $B_r(z_c)$ is open and $\tilde{h}_c(\cdot)$ is continuous, the pre-image $\tilde{h}_c^{-1}(B_r(z_c))$ is open. In addition, the continuity of $\tilde{h}(\cdot)$ and the matched observation distributions, $\forall u' \in \mathcal{U}: P[\{x \in A_x\}|\{u = u'\}] = P[\{\hat{x} \in A_x\}|\{u = u'\}]$, lead to $\tilde{h}(\cdot)$ being a bijection, as shown in (Klindt et al., 2020), which implies that $\tilde{h}_c^{-1}(B_r(z_c))$ is non-empty. Hence, $\tilde{h}_c^{-1}(B_r(z_c))$ is both non-empty and open.

Suppose that there exists $A^*_{z_c} := B_{r^*}(z^*_c)$, with $z^*_c \in \mathcal{Z}_c$ and $r^* \in \mathbb{R}^+$, such that
$$B^*_z := \big\{ z \in \mathcal{Z} : z \in \tilde{h}_c^{-1}(A^*_{z_c}), \; \{z_{1:n_c}\} \times \mathcal{Z}_s \not\subseteq \tilde{h}_c^{-1}(A^*_{z_c}) \big\} \neq \emptyset.$$
Intuitively, $B^*_z$ contains the part of the pre-image $\tilde{h}_c^{-1}(A^*_{z_c})$ over which the style part $z_{n_c+1:n}$ cannot take on every value in $\mathcal{Z}_s$: only certain values of the style part are able to produce the corresponding outputs of the indeterminacy $\tilde{h}_c(\cdot)$. Clearly, this would suggest that $\tilde{h}_c(\cdot)$ depends on $z_s$.

To show the contradiction with Equation 20, we evaluate the LHS of Equation 20 with such an $A^*_{z_c}$:
$$\int_{[z_c^\top, z_s^\top]^\top \in \tilde{h}_c^{-1}(A^*_{z_c})} p_{z_c}(z_c)\,\big(p_{z_s|u}(z_s|u_1) - p_{z_s|u}(z_s|u_2)\big)\, dz_s\, dz_c = \underbrace{\int_{\tilde{h}_c^{-1}(A^*_{z_c}) \setminus B^*_z} p_{z_c}(z_c)\,\big(p_{z_s|u}(z_s|u_1) - p_{z_s|u}(z_s|u_2)\big)\, dz_s\, dz_c}_{T_1} + \underbrace{\int_{B^*_z} p_{z_c}(z_c)\,\big(p_{z_s|u}(z_s|u_1) - p_{z_s|u}(z_s|u_2)\big)\, dz_s\, dz_c}_{T_2}.$$

We first look at the value of $T_1$. When $\tilde{h}_c^{-1}(A^*_{z_c}) \setminus B^*_z = \emptyset$, $T_1$ evaluates to 0. Otherwise, by definition we can rewrite $\tilde{h}_c^{-1}(A^*_{z_c}) \setminus B^*_z$ as $C^*_{z_c} \times \mathcal{Z}_s$, where $C^*_{z_c} \neq \emptyset$ and $C^*_{z_c} \subseteq \mathcal{Z}_c$. With this expression, it follows that
$$T_1 = \int_{[z_c^\top, z_s^\top]^\top \in C^*_{z_c} \times \mathcal{Z}_s} p_{z_c}(z_c)\,\big(p_{z_s|u}(z_s|u_1) - p_{z_s|u}(z_s|u_2)\big)\, dz_s\, dz_c = \int_{z_c \in C^*_{z_c}} p_{z_c}(z_c) \int_{\mathcal{Z}_s} \big(p_{z_s|u}(z_s|u_1) - p_{z_s|u}(z_s|u_2)\big)\, dz_s\, dz_c = \int_{z_c \in C^*_{z_c}} p_{z_c}(z_c)\,(1 - 1)\, dz_c = 0.$$
Therefore, in both cases $T_1$ evaluates to 0 for $A^*_{z_c}$.

Now, we address $T_2$. As discussed above, $\tilde{h}_c^{-1}(A^*_{z_c})$ is open and non-empty. Because of the continuity of $\tilde{h}_c(\cdot)$, for every $z^B \in B^*_z$ there exists $r(z^B) \in \mathbb{R}^+$ such that $B_{r(z^B)}(z^B) \subseteq B^*_z$. As $p_{z|u}(z|u) > 0$ for every $(z, u)$, we have $P[\{z \in B^*_z\}|\{u = u'\}] \geq P[\{z \in B_{r(z^B)}(z^B)\}|\{u = u'\}] > 0$ for any $u' \in \mathcal{U}$. Assumption A4 then indicates that there exist $u^*_1, u^*_2 \in \mathcal{U}$ such that
$$\int_{[z_c^\top, z_s^\top]^\top \in B^*_z} p_{z_c}(z_c)\,\big(p_{z_s|u}(z_s|u^*_1) - p_{z_s|u}(z_s|u^*_2)\big)\, dz_s\, dz_c \neq 0.$$
Therefore, evaluating Equation 20 at such an $A^*_{z_c}$ with the domains $u^*_1$ and $u^*_2$, we would have $T_1 + T_2 \neq 0$, which contradicts Equation 20. We have thus proved by contradiction that Statement 3 is true, and hence Statement 1 holds; that is, $\tilde{h}_c(\cdot)$ does not depend on the style variable $z_s$.

Step 4. With the knowledge that $\tilde{h}_c(\cdot)$ does not depend on the style variable $z_s$, we now show that there exists an invertible mapping between the true content variable $z_c$ and its estimated version $\hat{z}_c$.

As $\tilde{h}(\cdot)$ is smooth over $\mathcal{Z}$, its Jacobian can be written as:
$$J_{\tilde{h}} = \begin{bmatrix} A := \frac{\partial \hat{z}_c}{\partial z_c} & B := \frac{\partial \hat{z}_c}{\partial z_s} \\[2pt] C := \frac{\partial \hat{z}_s}{\partial z_c} & D := \frac{\partial \hat{z}_s}{\partial z_s} \end{bmatrix},$$
where we use the notation $\hat{z}_c = \tilde{h}(z)_{1:n_c}$ and $\hat{z}_s = \tilde{h}(z)_{n_c+1:n}$, and the blocks $A$, $B$, $C$, $D$ are redefined here for $J_{\tilde{h}}$. As we have shown that $\hat{z}_c$ does not depend on the style variable $z_s$, it follows that $B = 0$. On the other hand, as $\tilde{h}(\cdot)$ is invertible over $\mathcal{Z}$, $J_{\tilde{h}}$ is non-singular. Therefore, $A$ must be non-singular, since $B = 0$ implies $\det J_{\tilde{h}} = \det A \cdot \det D$. We note that $A$ is the Jacobian of the function $\tilde{h}'_c(z_c) := \tilde{h}_c(z): \mathcal{Z}_c \to \mathcal{Z}_c$, which takes only the content part $z_c$ of the input $z$. Also, we note that $C = 0$ because of the invertible component-wise mapping between $z_s$ and $\hat{z}_s$ shown in Theorem 4.1. Together with the invertibility of $\tilde{h}$, we can conclude that $\tilde{h}'_c$ is invertible. Therefore, there exists an invertible function $\tilde{h}'_c$ between the estimated and the true content variables such that $\hat{z}_c = \tilde{h}'_c(z_c)$, which concludes the proof that $z_c$ is block-identifiable via $\hat{g}^{-1}(\cdot)$.

A.3. Synthetic Data Experiments

We provide additional details of the synthetic experiments in Section 6.

Architecture For all methods in the synthetic data experiments, the VAE encoder and decoder are 6-layer MLPs with a hidden dimension of 32 and Leaky-ReLU ($\alpha = 0.2$) activation functions. For our method, we use component-wise spline flows (Durkan et al., 2019) with monotonic linear rational splines to modulate the changing components. We use 8 bins for the linear splines and set the bound to 5. The $\beta$-VAE shares the same VAE model with ours, and the iVAE implementation is adopted from (Khemakhem et al., 2020b).

Training hyper-parameters We apply AdamW to train the VAE and flow models for 100 epochs. We use a learning rate of 0.002 with a batch size of 128. The weight decay of AdamW is set to 0.0001. For VAE training, we set the weight $\beta$ of the KL loss term to 0.1. A minimal configuration sketch is given below.
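The following is a minimal, non-authoritative sketch of the synthetic-data setup described above (6-layer MLP encoder and decoder with hidden width 32, Leaky-ReLU(0.2), AdamW with the listed hyper-parameters, and a $\beta$-weighted KL term). All class and variable names (`mlp`, `SyntheticVAE`, `x_dim`, `latent_dim`) are illustrative and not taken from the released code, and the component-wise spline flow and the domain-alignment objective are omitted.

```python
# Hypothetical sketch of the synthetic-experiment VAE; names and dimensions are placeholders.
import torch
import torch.nn as nn

def mlp(in_dim, out_dim, hidden=32, layers=6, alpha=0.2):
    """6-layer MLP with hidden width 32 and Leaky-ReLU(0.2), as described in Appendix A.3."""
    dims = [in_dim] + [hidden] * (layers - 1)
    blocks = []
    for d_in, d_out in zip(dims[:-1], dims[1:]):
        blocks += [nn.Linear(d_in, d_out), nn.LeakyReLU(alpha)]
    blocks += [nn.Linear(hidden, out_dim)]
    return nn.Sequential(*blocks)

class SyntheticVAE(nn.Module):
    def __init__(self, x_dim, latent_dim):
        super().__init__()
        self.encoder = mlp(x_dim, 2 * latent_dim)   # outputs mean and log-variance
        self.decoder = mlp(latent_dim, x_dim)

    def forward(self, x):
        mu, logvar = self.encoder(x).chunk(2, dim=-1)
        z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()   # reparameterization trick
        return self.decoder(z), mu, logvar

# Training configuration listed in Appendix A.3; data dimensions are placeholders.
model = SyntheticVAE(x_dim=8, latent_dim=4)
opt = torch.optim.AdamW(model.parameters(), lr=2e-3, weight_decay=1e-4)
beta = 0.1

def loss_fn(x, x_hat, mu, logvar):
    recon = ((x - x_hat) ** 2).sum(dim=-1).mean()
    kl = -0.5 * (1 + logvar - mu.pow(2) - logvar.exp()).sum(dim=-1).mean()
    return recon + beta * kl
```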
A.4. Real-world Data Experiments

We provide implementation details and additional visualization of the real-world data experiments in Section 7.

Dataset description PACS (Li et al., 2017) is a multi-domain dataset containing 9,991 images from 4 domains of different styles: Photo, Art-painting, Cartoon, and Sketch. These domains share the same seven categories. The Office-Home (Venkateswara et al., 2017) dataset consists of 4 domains, with each domain containing images from 65 categories of everyday objects and a total of around 15,500 images. The domains include 1) Art: artistic depictions of objects; 2) Clipart: a collection of clipart images; 3) Product: images of objects without a background; and 4) Real-World: images of objects captured with a regular camera.

Implementation Details For the PACS dataset, we follow the protocols in (Yang et al., 2020; Wang et al., 2020) and use a ResNet-18 pre-trained on ImageNet. For the Office-Home dataset, we follow the protocol in (Park & Lee, 2021) and use a pre-trained ResNet-50 as our backbone network. For both datasets, we use SGD with Nesterov momentum and a learning rate of 0.01. Since it can be challenging to train a VAE on high-resolution images, we use extracted features as the VAE input; in particular, we use the features after the last pooling layer of the ResNet. We use a 2-layer fully connected network as the VAE encoder. For the flow model, we use the Deep Sigmoidal Flow (DSF) (Huang et al., 2018) across all our experiments. For the hyper-parameters, we fix $\alpha_1 = 0.1$ in all experiments and select $\alpha_2$ from $\{1 \times 10^{-5}, 5 \times 10^{-5}, 1 \times 10^{-4}, 5 \times 10^{-4}\}$. A rough sketch of this feature-level setup is given after Figure 4.

Feature Quality In addition, we visualize the features learned by our method in Figure 4 and compare them with the Source Only baseline. We observe that, for the baseline, the features of different classes are mixed together, which renders the classification task significantly harder. In contrast, the features learned by iMSDA are more clustered and discriminative.

Figure 4. The t-SNE visualizations of learned features on the Sketch task in PACS. (a) Source Only; (b) iMSDA. Red: source domains, Blue: target domain.
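As a rough, hypothetical illustration of the feature-level setup described in the Implementation Details above (pooled ResNet features fed into a 2-layer fully connected VAE encoder, optimized with SGD with Nesterov momentum at learning rate 0.01), the snippet below sketches how the backbone features could be extracted and encoded. The class and variable names (`FeatureEncoder`, `feat_dim`, `hidden`, `latent_dim`) and the momentum value are placeholders, and the DSF flow, classifier head, and alignment losses of iMSDA are omitted; this is not the released implementation.

```python
# Hypothetical sketch of the feature-level VAE input; only the backbone feature
# extraction and the 2-layer FC encoder are shown. Names and sizes are placeholders.
import torch
import torch.nn as nn
import torchvision

backbone = torchvision.models.resnet18(weights="IMAGENET1K_V1")
feature_extractor = nn.Sequential(*list(backbone.children())[:-1])  # up to the last pooling layer
feat_dim = backbone.fc.in_features                                   # 512 for ResNet-18

class FeatureEncoder(nn.Module):
    """2-layer fully connected VAE encoder over pooled backbone features."""
    def __init__(self, feat_dim, latent_dim, hidden=256):            # hidden size is a placeholder
        super().__init__()
        self.net = nn.Sequential(nn.Linear(feat_dim, hidden), nn.ReLU(),
                                 nn.Linear(hidden, 2 * latent_dim))  # mean and log-variance

    def forward(self, feats):
        mu, logvar = self.net(feats).chunk(2, dim=-1)
        return mu, logvar

encoder = FeatureEncoder(feat_dim, latent_dim=64)                    # latent_dim is a placeholder
params = list(feature_extractor.parameters()) + list(encoder.parameters())
opt = torch.optim.SGD(params, lr=0.01, momentum=0.9, nesterov=True)  # momentum value assumed

images = torch.randn(4, 3, 224, 224)                                 # dummy batch
feats = feature_extractor(images).flatten(1)                          # shape: (batch, feat_dim)
mu, logvar = encoder(feats)
```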