Published as a conference paper at ICLR 2022

CROSS-DOMAIN IMITATION LEARNING VIA OPTIMAL TRANSPORT

Arnaud Fickinger (1,3), Samuel Cohen (2,3), Stuart Russell (1), Brandon Amos (3)
1 Berkeley AI Research   2 University College London   3 Facebook AI
arnaud.fickinger@berkeley.edu, arnaudfickinger@fb.com

Cross-domain imitation learning studies how to leverage expert demonstrations of one agent to train an imitation agent with a different embodiment or morphology. Comparing trajectories and stationary distributions between the expert and imitation agents is challenging because they live on different systems that may not even have the same dimensionality. We propose Gromov-Wasserstein Imitation Learning (GWIL), a method for cross-domain imitation that uses the Gromov-Wasserstein distance to align and compare states between the different spaces of the agents. Our theory formally characterizes the scenarios where GWIL preserves optimality, revealing its possibilities and limitations. We demonstrate the effectiveness of GWIL in non-trivial continuous control domains ranging from simple rigid transformations of the expert domain to arbitrary transformations of the state-action space.[1]

[1] Project site with videos and code: https://arnaudfickinger.github.io/gwil/

1 INTRODUCTION

Reinforcement learning (RL) methods have attained impressive results across a number of domains, e.g., Berner et al. (2019); Kober et al. (2013); Levine et al. (2016); Vinyals et al. (2019). However, the effectiveness of current RL methods is heavily tied to the quality of the training reward, and for many real-world tasks, designing dense and informative rewards requires significant engineering effort. To alleviate this effort, imitation learning (IL) proposes to learn directly from expert demonstrations. Most current IL approaches apply only to the simplest setting, where the expert and the agent share the same embodiment and transition dynamics and live in the same state and action spaces. In particular, these approaches require expert demonstrations from the agent's own domain. This invites us to reconsider the utility of IL: it seems to merely move the problem, from designing informative rewards to providing expert demonstrations, rather than solving it.

However, if we relax the constraining setting of current IL methods, natural imitation scenarios that genuinely alleviate engineering effort appear. Indeed, not requiring the same dynamics would enable agents to imitate humans and robots with different morphologies, greatly enlarging the applicability of IL and alleviating the need for in-domain expert demonstrations. This relaxed setting, where the expert demonstrations come from another domain, has emerged as a budding area with more realistic assumptions (Gupta et al., 2017; Liu et al., 2019; Sermanet et al., 2018; Kim et al., 2020; Raychaudhuri et al., 2021) that we refer to as Cross-Domain Imitation Learning. A common strategy in these works is to learn a mapping between the expert and agent domains. To do so, they require access to proxy tasks where both the expert and the agent act optimally in their respective domains. Under some structural assumptions, the learned map makes it possible to transform a trajectory from the expert domain into the agent domain while preserving optimality. Although these methods indeed relax the typical setting of IL, requiring proxy tasks heavily restricts the applicability of Cross-Domain IL. For example, it rules out imitating an expert never seen before as well as transferring to a new robot.
In this paper, we relax the assumptions of Cross-Domain IL and propose a benchmark and method that do not need access to proxy tasks. To do so, we depart from the point of view taken by previous work and formalize Cross-Domain IL as an optimal transport problem. We propose a method, which we call Gromov-Wasserstein Imitation Learning (GWIL), that uses the Gromov-Wasserstein distance to solve the benchmark. We formally characterize the scenario where GWIL preserves optimality (Theorem 1), revealing its possibilities and limitations. The construction of our proxy rewards for optimizing optimal transport quantities via RL generalizes previous work that assumes uniform occupancy measures (Dadashi et al., 2020; Papagiannis & Li, 2020) and is of independent interest. Our experiments show that GWIL learns optimal behaviors from a single demonstration from another domain, without any proxy tasks, in non-trivial continuous control settings.

Figure 1: The Gromov-Wasserstein distance enables us to compare the stationary state-action distributions of two agents with different dynamics and state-action spaces. We use it as a pseudo-reward for cross-domain imitation learning.

Figure 2: Isomorphic policies (Definition 2) have the same pairwise distances within the state-action space of the stationary distributions. In Euclidean spaces, isometric transformations preserve these pairwise distances and include rotations, translations, and reflections.

2 RELATED WORK

Imitation learning. An early approach to IL is Behavioral Cloning (Pomerleau, 1988; 1991), which amounts to training a classifier or regressor via supervised learning to replicate the expert's demonstrations. Another key approach is Inverse Reinforcement Learning (IRL) (Ng & Russell, 2000; Abbeel & Ng, 2004; Abbeel et al., 2010), which aims at learning a reward function under which the observed demonstration is optimal and which can then be used to train an agent via RL. To bypass the need to learn the expert's reward function, Ho & Ermon (2016) show that IRL is a dual of an occupancy-measure matching problem and propose an adversarial objective whose optimization approximately recovers the expert's state-action occupancy measure, together with a practical algorithm that uses a generative adversarial network (Goodfellow et al., 2014). While a number of recent works aim at improving this algorithm with respect to the training instability caused by its minimax optimization, Primal Wasserstein Imitation Learning (PWIL) (Dadashi et al., 2020) and Sinkhorn Imitation Learning (SIL) (Papagiannis & Li, 2020) view IL as an optimal transport problem between occupancy measures, which eliminates the minimax objective entirely and outperforms adversarial methods in terms of sample efficiency. Heess et al. (2017); Peng et al. (2018); Zhu et al. (2018); Aytar et al. (2018) scale imitation learning to complex human-like locomotion and game behavior in non-trivial settings. Our work extends Dadashi et al. (2020); Papagiannis & Li (2020) from the Wasserstein to the Gromov-Wasserstein setting. This takes us beyond the limitation that the expert and imitator live in the same domain and into the cross-domain setting between agents that live in different spaces.
Transfer learning across domains and morphologies. Work transferring knowledge between different domains in RL typically learns a mapping between the state and action spaces. Ammar et al. (2015) use unsupervised manifold alignment to find a linear map between states that have similar local geometry, but assume access to hand-crafted features. More recent work on transfer across viewpoint and embodiment mismatch learns a state mapping without hand-crafted features, but assumes access to paired and time-aligned demonstrations from both domains (Gupta et al., 2017; Liu et al., 2018; Sermanet et al., 2018). Furthermore, Kim et al. (2020); Raychaudhuri et al. (2021) propose methods to learn a state mapping from unpaired and unaligned tasks. All of these methods require proxy tasks, i.e., a set of pairs of expert demonstrations from both domains, which limits their applicability to real-world settings. Stadie et al. (2017) combine adversarial learning and domain confusion to learn a policy in the agent's domain without proxy tasks, but their method only works in the case of small viewpoint mismatch. Zakka et al. (2021) take a goal-driven perspective that seeks to imitate task progress rather than match fine-grained structural details to transfer between physical robots. In contrast, our method does not rely on learning an explicit cross-domain latent space between the agents, nor does it rely on proxy tasks. The Gromov-Wasserstein distance enables us to directly compare the different spaces without a shared space. The existing benchmark tasks we are aware of assume access to a set of demonstrations from both agents, whereas the experiments in our paper only assume access to expert demonstrations. Finally, other domain adaptation and transfer learning settings use Gromov-Wasserstein variants, e.g., for transfer between word embedding spaces (Alvarez-Melis & Jaakkola, 2018) and image spaces (Vayer et al., 2020b).

3 PRELIMINARIES

Metric Markov decision process. An infinite-horizon discounted Markov decision process (MDP) is a tuple $(S, A, R, P, p_0, \gamma)$ where $S$ and $A$ are the state and action spaces, $P : S \times A \to \Delta(S)$ is the transition function, $R : S \times A \to \mathbb{R}$ is the reward function, $p_0 \in \Delta(S)$ is the initial state distribution, and $\gamma$ is the discount factor. We equip MDPs with a distance $d : (S \times A) \times (S \times A) \to \mathbb{R}_+$ and call the tuple $(S, A, R, P, p_0, \gamma, d)$ a metric MDP.

Gromov-Wasserstein distance. Let $(\mathcal{X}, d_\mathcal{X}, \mu_\mathcal{X})$ and $(\mathcal{Y}, d_\mathcal{Y}, \mu_\mathcal{Y})$ be two metric measure spaces, where $d_\mathcal{X}, d_\mathcal{Y}$ are distances and $\mu_\mathcal{X}, \mu_\mathcal{Y}$ are measures on their respective spaces.[2] Optimal transport (Villani, 2009; Peyré et al., 2019) studies how to compare measures. We will use the Gromov-Wasserstein distance (Mémoli, 2011) between metric measure spaces, which has been theoretically generalized and further studied in Sturm (2012); Peyré et al. (2016); Vayer (2020), and is defined by

$$\mathrm{GW}\big((\mathcal{X}, d_\mathcal{X}, \mu_\mathcal{X}), (\mathcal{Y}, d_\mathcal{Y}, \mu_\mathcal{Y})\big)^2 = \min_{u \in \mathcal{U}(\mu_\mathcal{X}, \mu_\mathcal{Y})} \sum_{\mathcal{X}^2 \times \mathcal{Y}^2} \big| d_\mathcal{X}(x, x') - d_\mathcal{Y}(y, y') \big|^2 \, u_{x,y} \, u_{x',y'}, \qquad (1)$$

where $\mathcal{U}(\mu_\mathcal{X}, \mu_\mathcal{Y})$ is the set of couplings between the atoms of the measures, defined by

$$\mathcal{U}(\mu_\mathcal{X}, \mu_\mathcal{Y}) = \Big\{ u \in \mathbb{R}_+^{\mathcal{X} \times \mathcal{Y}} \,:\, \sum_{y \in \mathcal{Y}} u_{x,y} = \mu_\mathcal{X}(x) \ \forall x \in \mathcal{X}, \quad \sum_{x \in \mathcal{X}} u_{x,y} = \mu_\mathcal{Y}(y) \ \forall y \in \mathcal{Y} \Big\}.$$

GW compares the structure of two metric measure spaces by comparing the pairwise distances within each space to find the best isometry between the spaces. Figure 1 illustrates this distance in the case of the metric measure spaces $(S_E \times A_E, d_E, \rho_{\pi_E})$ and $(S_A \times A_A, d_A, \rho_{\pi_A})$.
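To make eq. (1) concrete, the following is a minimal sketch, not taken from the paper, that computes a Gromov-Wasserstein coupling between two small point clouds of different dimension; it assumes the POT library (`ot`) and uses made-up data and uniform measures.

```python
# Minimal sketch of eq. (1): Gromov-Wasserstein between two point clouds that
# live in spaces of different dimension. Assumes the POT library; the point
# clouds and uniform measures are made up for illustration.
import numpy as np
import ot

rng = np.random.default_rng(0)
X = rng.normal(size=(20, 3))  # samples from one metric measure space (R^3)
Y = rng.normal(size=(30, 5))  # samples from another space (R^5)

# Pairwise distances within each space, playing the roles of d_X and d_Y.
C_X = ot.dist(X, X, metric="euclidean")
C_Y = ot.dist(Y, Y, metric="euclidean")

# Uniform measures over the atoms.
mu_X = np.full(len(X), 1.0 / len(X))
mu_Y = np.full(len(Y), 1.0 / len(Y))

# Coupling u minimizing eq. (1) with the squared loss.
u, log = ot.gromov.gromov_wasserstein(
    C_X, C_Y, mu_X, mu_Y, loss_fun="square_loss", log=True)
print(u.shape)         # (20, 30): soft correspondence between atoms
print(log["gw_dist"])  # value of the GW objective in eq. (1)
```

Only the two within-space distance matrices enter the objective, which is what lets the two spaces have different dimensionality.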
4 CROSS-DOMAIN IMITATION LEARNING VIA OPTIMAL TRANSPORT

4.1 COMPARING POLICIES FROM ARBITRARILY DIFFERENT MDPS

For a stationary policy $\pi$ acting on a metric MDP $(S, A, R, P, p_0, \gamma, d)$, the occupancy measure $\rho_\pi : S \times A \to \mathbb{R}$ is

$$\rho_\pi(s, a) = \pi(a \mid s) \sum_{t=0}^{\infty} \gamma^t P(s_t = s \mid \pi).$$

We compare policies from arbitrarily different MDPs in terms of their occupancy measures.

[2] We use discrete spaces for readability but show empirical results in continuous spaces.

Definition 1 (Gromov-Wasserstein distance between (isomorphism classes of) policies[3]). Given an expert policy $\pi_E$ and an agent policy $\pi_A$ acting, respectively, on $M_E = (S_E, A_E, R_E, P_E, p_E, \gamma, d_E)$ and $M_A = (S_A, A_A, R_A, P_A, p_A, \gamma, d_A)$, we define the Gromov-Wasserstein distance between $\pi_E$ and $\pi_A$ as the Gromov-Wasserstein distance between the metric measure spaces $(S_E \times A_E, d_E, \rho_{\pi_E})$ and $(S_A \times A_A, d_A, \rho_{\pi_A})$:[4]

$$\mathrm{GW}(\pi_E, \pi_A) = \mathrm{GW}\big((S_E \times A_E, d_E, \rho_{\pi_E}), (S_A \times A_A, d_A, \rho_{\pi_A})\big). \qquad (2)$$

We now define isomorphisms between policies by comparing the state-action marginals and show that GW defines a distance between them. Figure 2 illustrates simple isomorphic policies.

Definition 2 (Isomorphic policies). Two policies $\pi_E$ and $\pi_A$ are isomorphic if there exists a bijection $\phi : \mathrm{supp}[\rho_{\pi_E}] \to \mathrm{supp}[\rho_{\pi_A}]$ that satisfies, for all $(s_E, a_E), (s'_E, a'_E) \in \mathrm{supp}[\rho_{\pi_E}]$ and $(s_A, a_A) \in \mathrm{supp}[\rho_{\pi_A}]$,

$$d_E\big((s_E, a_E), (s'_E, a'_E)\big) = d_A\big(\phi(s_E, a_E), \phi(s'_E, a'_E)\big) \qquad (3)$$
$$\rho_{\pi_A}(s_A, a_A) = \rho_{\pi_E}\big(\phi^{-1}(s_A, a_A)\big). \qquad (4)$$

In other words, $\phi$ is an isometry between $(\mathrm{supp}[\rho_{\pi_E}], d_E)$ and $(\mathrm{supp}[\rho_{\pi_A}], d_A)$ and $\rho_{\pi_A}$ is the push-forward measure $\phi_\#(\rho_{\pi_E})$.

Proposition 1. GW defines a metric on the collection of all isomorphism classes of policies.

Proof. By Definition 1, $\mathrm{GW}(\pi_E, \pi_A) = 0$ if and only if $\mathrm{GW}((S_E \times A_E, d_E, \rho_{\pi_E}), (S_A \times A_A, d_A, \rho_{\pi_A})) = 0$. By Mémoli (2011, Theorem 5.1), this is true if and only if there is an isometry $\phi : \mathrm{supp}[\rho_{\pi_E}] \to \mathrm{supp}[\rho_{\pi_A}]$ such that $\rho_{\pi_A} = \phi_\#(\rho_{\pi_E})$. By Definition 2, this is true if and only if $\pi_A$ and $\pi_E$ are isomorphic. The symmetry and triangle inequality follow from Mémoli (2011, Theorem 5.1).

The next theorem[5] gives a sufficient condition to recover, by minimizing GW, an optimal policy[6] in the agent's domain up to an isometry.

Theorem 1. Consider two MDPs $M_E = (S_E, A_E, R_E, P_E, p_E, \gamma)$ and $M_A = (S_A, A_A, R_A, P_A, p_A, \gamma)$. Suppose that there exist four distances $d^S_E, d^A_E, d^S_A, d^A_A$ defined on $S_E$, $A_E$, $S_A$, and $A_A$ respectively, and two isometries $\phi : (S_E, d^S_E) \to (S_A, d^S_A)$ and $\psi : (A_E, d^A_E) \to (A_A, d^A_A)$ such that for all $(s_E, a_E, s'_E) \in S_E \times A_E \times S_E$ the three following conditions hold:

$$R_E(s_E, a_E) = R_A(\phi(s_E), \psi(a_E)) \qquad (5)$$
$$P^E_{s_E, a_E}(s'_E) = P^A_{\phi(s_E), \psi(a_E)}(\phi(s'_E)) \qquad (6)$$
$$p_E(s_E) = p_A(\phi(s_E)). \qquad (7)$$

Consider an optimal policy $\pi^*_E$ in $M_E$. Suppose that $\pi_{GW}$ minimizes $\mathrm{GW}(\pi^*_E, \pi_{GW})$ with $d_E((s_E, a_E), (s'_E, a'_E)) = d^S_E(s_E, s'_E) + d^A_E(a_E, a'_E)$ and $d_A((s_A, a_A), (s'_A, a'_A)) = d^S_A(s_A, s'_A) + d^A_A(a_A, a'_A)$. Then $\pi_{GW}$ is isomorphic to an optimal policy in $M_A$.

Proof. Consider the occupancy measure $\rho^*_A : S_A \times A_A \to \mathbb{R}$ given by $(s_A, a_A) \mapsto \rho_{\pi^*_E}(\phi^{-1}(s_A), \psi^{-1}(a_A))$. We first show that $\rho^*_A$ is feasible in $M_A$, i.e., there exists a policy $\pi^*_A$ acting in $M_A$ with occupancy measure $\rho^*_A$ (a). Then we show that $\pi^*_A$ is optimal in $M_A$ (b) and is isomorphic to $\pi^*_E$ (c). Finally, we show that $\pi_{GW}$ is isomorphic to $\pi^*_A$, which concludes the proof (d).

[3] We later show that it is actually not a distance on policies but on isomorphism classes of policies.
[4] We always consider a policy in the context of the underlying metric MDP, such that every policy acting on $(S, A, R, P, p_0, \gamma, d_E)$ is different from every policy acting on $(S, A, R, P, p_0, \gamma, d_A)$ as soon as $d_E \neq d_A$.
This guarantees that the Gromov-Wasserstein distance respects the identity of indiscernibles.
[5] Our proof is in finite state-action spaces for readability and can be directly extended to infinite spaces.
[6] A policy is optimal in the MDP $(S, A, R, P, p_0, \gamma, d)$ if it maximizes the expected return $\mathbb{E}\big[\sum_{t=0}^{\infty} \gamma^t R(s_t, a_t)\big]$.

(a) Consider $s_A \in S_A$. By definition of $\rho^*_A$,

$$\sum_{a_A \in A_A} \rho^*_A(s_A, a_A) = \sum_{a_A \in A_A} \rho_{\pi^*_E}\big(\phi^{-1}(s_A), \psi^{-1}(a_A)\big) = \sum_{a_E \in A_E} \rho_{\pi^*_E}\big(\phi^{-1}(s_A), a_E\big).$$

Since $\rho_{\pi^*_E}$ is feasible in $M_E$, it follows from Puterman (2014, Theorem 6.9.1) that

$$\sum_{a_E \in A_E} \rho_{\pi^*_E}\big(\phi^{-1}(s_A), a_E\big) = p_E\big(\phi^{-1}(s_A)\big) + \gamma \sum_{s_E \in S_E, a_E \in A_E} P^E_{s_E, a_E}\big(\phi^{-1}(s_A)\big) \, \rho_{\pi^*_E}(s_E, a_E).$$

By conditions (6) and (7) and by definition of $\rho^*_A$,

$$p_E\big(\phi^{-1}(s_A)\big) + \gamma \sum_{s_E \in S_E, a_E \in A_E} P^E_{s_E, a_E}\big(\phi^{-1}(s_A)\big) \, \rho_{\pi^*_E}(s_E, a_E)
= p_A(s_A) + \gamma \sum_{s_E \in S_E, a_E \in A_E} P^A_{\phi(s_E), \psi(a_E)}(s_A) \, \rho^*_A\big(\phi(s_E), \psi(a_E)\big)
= p_A(s_A) + \gamma \sum_{s'_A \in S_A, a'_A \in A_A} P^A_{s'_A, a'_A}(s_A) \, \rho^*_A(s'_A, a'_A).$$

It follows that

$$\sum_{a_A \in A_A} \rho^*_A(s_A, a_A) = p_A(s_A) + \gamma \sum_{s'_A \in S_A, a'_A \in A_A} P^A_{s'_A, a'_A}(s_A) \, \rho^*_A(s'_A, a'_A).$$

Therefore, by Puterman (2014, Theorem 6.9.1), $\rho^*_A$ is feasible in $M_A$, i.e., there exists a policy $\pi^*_A$ acting in $M_A$ with occupancy measure $\rho^*_A$.

(b) By condition (5) and the definition of $\rho^*_A$, the expected return of $\pi^*_A$ in $M_A$ is

$$\sum_{s_A \in S_A, a_A \in A_A} \rho^*_A(s_A, a_A) R_A(s_A, a_A) = \sum_{s_A \in S_A, a_A \in A_A} \rho_{\pi^*_E}\big(\phi^{-1}(s_A), \psi^{-1}(a_A)\big) R_E\big(\phi^{-1}(s_A), \psi^{-1}(a_A)\big) = \sum_{s_E \in S_E, a_E \in A_E} \rho_{\pi^*_E}(s_E, a_E) R_E(s_E, a_E).$$

Consider any policy $\pi_A$ in $M_A$. By condition (5), the expected return of $\pi_A$ is

$$\sum_{s_A \in S_A, a_A \in A_A} \rho_{\pi_A}(s_A, a_A) R_A(s_A, a_A) = \sum_{s_E \in S_E, a_E \in A_E} \rho_{\pi_A}\big(\phi(s_E), \psi(a_E)\big) R_E(s_E, a_E).$$

Using the same arguments that we used to show that $\rho^*_A$ is feasible in $M_A$, we can show that $(s_E, a_E) \mapsto \rho_{\pi_A}(\phi(s_E), \psi(a_E))$ is feasible in $M_E$. It follows by optimality of $\pi^*_E$ in $M_E$ that

$$\sum_{s_E \in S_E, a_E \in A_E} \rho_{\pi_A}\big(\phi(s_E), \psi(a_E)\big) R_E(s_E, a_E) \leq \sum_{s_E \in S_E, a_E \in A_E} \rho_{\pi^*_E}(s_E, a_E) R_E(s_E, a_E) = \sum_{s_A \in S_A, a_A \in A_A} \rho^*_A(s_A, a_A) R_A(s_A, a_A).$$

It follows that $\pi^*_A$ is optimal in $M_A$.

(c) Notice that $\xi : (s_E, a_E) \mapsto (\phi(s_E), \psi(a_E))$ is an isometry between $(S_E \times A_E, d_E)$ and $(S_A \times A_A, d_A)$, where $d_E$ and $d_A$ are given, respectively, by $d_E((s_E, a_E), (s'_E, a'_E)) = d^S_E(s_E, s'_E) + d^A_E(a_E, a'_E)$ and $d_A((s_A, a_A), (s'_A, a'_A)) = d^S_A(s_A, s'_A) + d^A_A(a_A, a'_A)$. Furthermore, by definition, $\rho^*_A = \xi_\#(\rho_{\pi^*_E})$. Therefore, by Definition 2, $\pi^*_A$ is isomorphic to $\pi^*_E$.

(d) Recall from the statement of the theorem that $\pi_{GW}$ is a minimizer of $\mathrm{GW}(\pi^*_E, \pi_{GW})$. Since $\pi^*_A$ is isomorphic to $\pi^*_E$, it follows from Proposition 1 that $\mathrm{GW}(\pi^*_E, \pi^*_A) = 0$. Therefore $\mathrm{GW}(\pi^*_E, \pi_{GW})$ must be 0. By Proposition 1, it follows that there exists an isometry $\chi : (\mathrm{supp}[\rho_{\pi^*_E}], d_E) \to (\mathrm{supp}[\rho_{\pi_{GW}}], d_A)$ such that $\rho_{\pi_{GW}} = \chi_\#(\rho_{\pi^*_E})$. Notice that $\chi \circ \xi^{-1}|_{\mathrm{supp}[\rho^*_A]}$ is an isometry from $(\mathrm{supp}[\rho^*_A], d_A)$ to $(\mathrm{supp}[\rho_{\pi_{GW}}], d_A)$ and $\rho_{\pi_{GW}} = \big(\chi \circ \xi^{-1}|_{\mathrm{supp}[\rho^*_A]}\big)_\#(\rho^*_A)$. It follows by Definition 2 that $\pi_{GW}$ is isomorphic to $\pi^*_A$, an optimal policy in $M_A$, which concludes the proof.

Remark 1. Theorem 1 shows the possibilities and limitations of our method. It shows that our method can recover optimal policies even when arbitrary isometries are applied to the state and action spaces of the expert's domain. Importantly, we do not need to know the isometries, so our method is applicable to a wide range of settings. We will show empirically that our method produces strong results in other settings where the environments are not isometric and do not even have the same dimension. However, a limitation of our method is that it recovers an optimal policy only up to isometries. We will see that in practice, running our method with different seeds makes it possible to find an optimal policy in the agent's domain.
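To illustrate the isometry ambiguity in Remark 1, here is a small sketch, not from the paper, that checks that a trajectory and its mirror image have identical pairwise distances, so the GW objective of eq. (1) cannot distinguish them; it assumes the POT library and uses made-up 2D points.

```python
# Illustration of Remark 1: a trajectory and its reflection are isometric, so
# Gromov-Wasserstein cannot tell them apart. Assumes the POT library; the 2D
# "state-action" points are made up for illustration.
import numpy as np
import ot

rng = np.random.default_rng(1)
traj = rng.normal(size=(15, 2))          # expert state-action samples
mirrored = traj * np.array([1.0, -1.0])  # reflection across the x-axis

C_expert = ot.dist(traj, traj, metric="euclidean")
C_mirror = ot.dist(mirrored, mirrored, metric="euclidean")

# The reflection is an isometry: all pairwise distances are unchanged.
assert np.allclose(C_expert, C_mirror)

mu = np.full(len(traj), 1.0 / len(traj))
_, log = ot.gromov.gromov_wasserstein(
    C_expert, C_mirror, mu, mu, loss_fun="square_loss", log=True)
# The optimal value is 0 because the spaces are isometric, so minimizing GW
# can recover either the original or the mirrored behavior.
print(log["gw_dist"])
```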
4.2 GROMOV-WASSERSTEIN IMITATION LEARNING

Minimizing GW between an expert and an agent requires derivatives through the transition dynamics, which we typically do not have access to. We introduce a reward proxy suitable for training an agent's policy that minimizes GW via RL. Figure 1 illustrates the method. For readability, we combine the expert state and action variables $(s_E, a_E)$ into a single variable $z_E$, and similarly for agent state-action pairs. We also define $Z_E = S_E \times A_E$ and $Z_A = S_A \times A_A$.

Definition 3. Given an expert policy $\pi_E$ and an agent policy $\pi_A$, the Gromov-Wasserstein reward of the agent is defined as $r_{GW} : \mathrm{supp}[\rho_{\pi_A}] \to \mathbb{R}$ given by

$$r_{GW}(z_A) = -\frac{1}{\rho_{\pi_A}(z_A)} \sum_{z_E \in Z_E} \sum_{z'_E \in Z_E} \sum_{z'_A \in Z_A} \big| d_E(z_E, z'_E) - d_A(z_A, z'_A) \big|^2 \, u^*_{z_E, z_A} \, u^*_{z'_E, z'_A},$$

where $u^*$ is the coupling minimizing objective (1).

Proposition 2. If $\pi_A$ minimizes $\mathrm{GW}(\pi_E, \pi_A)$, then $\pi_A$ is an optimal policy for the reward $r_{GW}$ defined in Definition 3.

Proof. Suppose that $\pi_A$ minimizes $\mathrm{GW}(\pi_E, \pi_A)$. Then by Definition 1, $\pi_A$ maximizes

$$-\sum_{z_E, z'_E \in Z_E} \sum_{z_A, z'_A \in Z_A} \big| d_E(z_E, z'_E) - d_A(z_A, z'_A) \big|^2 \, u^*_{z_E, z_A} \, u^*_{z'_E, z'_A} = \sum_{z_A \in \mathrm{supp}[\rho_{\pi_A}]} \rho_{\pi_A}(z_A) \, r_{GW}(z_A).$$

Therefore, by Puterman (2014, Theorem 6.9.4), $\pi_A$ is an optimal policy for the reward $r_{GW}$.

In practice we approximate the occupancy measure of $\pi$ by $\hat{\rho}_\pi(s, a) = \frac{1}{T} \sum_{t=1}^{T} \mathbb{1}(s = s_t \wedge a = a_t)$, where $\tau = (s_1, a_1, \ldots, s_T, a_T)$ is a finite trajectory collected with $\pi$. Assuming that all state-action pairs in the trajectory are distinct,[7] $\hat{\rho}_\pi$ is a uniform distribution. Given an expert trajectory $\tau_E$ and an agent trajectory $\tau_A$,[8] the (squared) Gromov-Wasserstein distance between the empirical occupancy measures is

$$\mathrm{GW}^2(\tau_E, \tau_A) = \min_{\theta \in \Theta_{T_E \times T_A}} \sum_{1 \leq i, i' \leq T_E} \sum_{1 \leq j, j' \leq T_A} \big| d_E\big((s^E_i, a^E_i), (s^E_{i'}, a^E_{i'})\big) - d_A\big((s^A_j, a^A_j), (s^A_{j'}, a^A_{j'})\big) \big|^2 \, \theta_{i,j} \, \theta_{i',j'}, \qquad (8)$$

where $\Theta_{T_E \times T_A}$ is the set of couplings between the atoms of the uniform measures, defined by

$$\Theta_{T_E \times T_A} = \Big\{ \theta \in \mathbb{R}_+^{T_E \times T_A} \,:\, \sum_{j \in [T_A]} \theta_{i,j} = 1/T_E \ \forall i \in [T_E], \quad \sum_{i \in [T_E]} \theta_{i,j} = 1/T_A \ \forall j \in [T_A] \Big\}.$$

In this case the reward is given for every state-action pair in the agent's trajectory by

$$r(s^A_j, a^A_j) = -T_A \sum_{1 \leq i, i' \leq T_E} \sum_{1 \leq j' \leq T_A} \big| d_E\big((s^E_i, a^E_i), (s^E_{i'}, a^E_{i'})\big) - d_A\big((s^A_j, a^A_j), (s^A_{j'}, a^A_{j'})\big) \big|^2 \, \theta^*_{i,j} \, \theta^*_{i',j'}, \qquad (9)$$

where $\theta^*$ is the coupling minimizing objective (8). In practice we drop the factor $T_A$ because it is the same for every state-action pair in the trajectory.

Remark 2. The construction of our reward proxy is defined for any occupancy measure and extends previous work that optimizes optimal transport quantities via RL and assumes a uniform occupancy measure in the form of a trajectory to bypass the need for derivatives through the transition dynamics (Dadashi et al., 2020; Papagiannis & Li, 2020).

[7] We can add the time step to the state to distinguish between two identical state-action pairs in the trajectory.
[8] Note that the Gromov-Wasserstein distance defined in eq. (8) does not depend on the temporal ordering of the trajectories.

Computing the pseudo-rewards. We compute the Gromov-Wasserstein distance using Peyré et al. (2016, Proposition 1) and its gradient using Peyré et al. (2016, Proposition 2). To compute the coupling minimizing (8), we use the conditional gradient method as in Ferradans et al. (2013).
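The following is a minimal sketch, not the authors' released implementation, of computing the pseudo-rewards of eq. (9) from one expert and one agent trajectory; it assumes the POT library, Euclidean metrics for $d_E$ and $d_A$, and made-up trajectories in the usage example.

```python
# Minimal sketch of the pseudo-rewards in eq. (9) computed from one expert and
# one agent trajectory. Not the authors' released code; assumes the POT library
# and Euclidean metrics for d_E and d_A.
import numpy as np
import ot


def gw_pseudo_rewards(expert_sa, agent_sa):
    """expert_sa: (T_E, dim_E) expert state-action pairs;
    agent_sa: (T_A, dim_A) agent state-action pairs.
    Returns one pseudo-reward per agent time step. The constant T_A factor is
    dropped, as in the paper, since it is the same for every time step."""
    C_E = ot.dist(expert_sa, expert_sa, metric="euclidean")  # d_E pairwise
    C_A = ot.dist(agent_sa, agent_sa, metric="euclidean")    # d_A pairwise
    T_E, T_A = len(expert_sa), len(agent_sa)
    p = np.full(T_E, 1.0 / T_E)
    q = np.full(T_A, 1.0 / T_A)

    # Optimal coupling theta* of objective (8), via POT's conditional-gradient
    # Gromov-Wasserstein solver with the square loss.
    theta = ot.gromov.gromov_wasserstein(C_E, C_A, p, q, loss_fun="square_loss")

    # r_j = - sum_{i,i',j'} |C_E[i,i'] - C_A[j,j']|^2 theta[i,j] theta[i',j']
    rewards = np.empty(T_A)
    for j in range(T_A):
        sq = (C_E[:, :, None] - C_A[j][None, None, :]) ** 2  # (T_E, T_E, T_A)
        rewards[j] = -np.einsum("ikl,i,kl->", sq, theta[:, j], theta)
    return rewards


# Example usage with made-up trajectories of different dimensionality:
rng = np.random.default_rng(0)
r = gw_pseudo_rewards(rng.normal(size=(100, 4)), rng.normal(size=(120, 7)))
print(r.shape)  # (120,): one pseudo-reward per agent state-action pair
```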
Optimizing the pseudo-rewards. The pseudo-rewards we obtain from GW for the imitation agent enable us to turn the imitation learning problem into a reinforcement learning problem (Sutton & Barto, 2018): we find the optimal policy for the Markov decision process induced by the pseudo-rewards. We consider agents with continuous state-action spaces and thus perform policy optimization with the soft actor-critic algorithm (Haarnoja et al., 2018). Algorithm 1 sums up GWIL in the case where a single expert trajectory is given to approximate the expert occupancy measure.

Algorithm 1: Gromov-Wasserstein imitation learning from a single expert demonstration.
Inputs: expert demonstration $\tau_E$, metrics on the expert ($d_E$) and agent ($d_A$) spaces
Initialize the imitation agent's policy $\pi_\theta$ and value estimates $V_\theta$
while unconverged do
    Collect an episode $\tau_A$
    Compute $\mathrm{GW}(\tau_E, \tau_A)$
    Set pseudo-rewards $r$ with eq. (9)
    Update $\pi_\theta$ and $V_\theta$ to optimize the pseudo-rewards
end while
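As a reading aid, the following is a schematic sketch of Algorithm 1, not the released implementation: `agent`, `env`, and the replay-buffer interface are hypothetical placeholders for any soft actor-critic implementation and Gym-style environment, and `gw_pseudo_rewards` refers to the sketch above.

```python
# Schematic sketch of Algorithm 1 (GWIL). Not the released implementation:
# `agent` and `env` are placeholders for any soft actor-critic agent and
# Gym-style environment; gw_pseudo_rewards is the sketch from Section 4.2.
import numpy as np


def gwil(env, agent, expert_sa, num_episodes=1000):
    """expert_sa: (T_E, dim_E) state-action pairs of a single expert demo."""
    for _ in range(num_episodes):
        # Collect an episode tau_A with the current policy; the environment
        # reward is ignored, only the pseudo-rewards are used for training.
        states, actions, next_states, dones = [], [], [], []
        s, done = env.reset(), False
        while not done:
            a = agent.act(s)
            s_next, _, done, _ = env.step(a)
            states.append(s)
            actions.append(a)
            next_states.append(s_next)
            dones.append(done)
            s = s_next

        # Compute GW(tau_E, tau_A) and the pseudo-rewards of eq. (9) for
        # every state-action pair visited in the episode.
        agent_sa = np.concatenate(
            [np.asarray(states), np.asarray(actions)], axis=1)
        pseudo_r = gw_pseudo_rewards(expert_sa, agent_sa)

        # Relabel the episode with the pseudo-rewards and update the agent.
        for t in range(len(states)):
            agent.replay_buffer.add(
                states[t], actions[t], pseudo_r[t], next_states[t], dones[t])
        agent.update()
    return agent
```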
5 EXPERIMENTS

We propose a benchmark for cross-domain IL methods consisting of 3 tasks and aiming at answering the following questions:

1. Does GWIL recover optimal behaviors when the agent domain is a rigid transformation of the expert domain? Yes, we demonstrate this with the maze in Section 5.1.
2. Can GWIL recover optimal behaviors when the agent has different state and action spaces than the expert? Yes, we show this in Section 5.2 for slightly different state-action spaces between the cartpole and pendulum, and in Section 5.3 for significantly different spaces between a walker and a cheetah.

To answer these questions, we use simulated continuous control tasks implemented in MuJoCo (Todorov et al., 2012) and the DeepMind Control Suite (Tassa et al., 2018). We include videos of learned policies on our project site.[9] In all settings we use the Euclidean metric within the expert and agent spaces for $d_E$ and $d_A$.

[9] https://arnaudfickinger.github.io/gwil/

Figure 3: Given a single expert trajectory in the expert's domain (a), GWIL recovers an optimal policy in the agent's domain (b) without any external reward, as predicted by Theorem 1. The green dot represents the initial state position and the episode ends when the agent reaches the goal represented by the red square.

Figure 4: Given a single expert trajectory in the pendulum's domain (above), GWIL recovers the optimal behavior in the agent's domain (cartpole, below) without any external reward.

5.1 AGENT DOMAIN IS A RIGID TRANSFORMATION OF THE EXPERT DOMAIN

We evaluate the capacity of IL methods to transfer under a rigid transformation of the expert domain using the Point Mass Maze environment from Hejna et al. (2020). The agent's domain is obtained by applying a reflection to the expert's maze. This task satisfies the conditions of Theorem 1, with $\phi$ the reflection through the central horizontal plane and $\psi$ the reflection through the x-axis in the action space. Therefore, by Theorem 1, the policy trained using GWIL should be isomorphic to an optimal policy in the agent's domain. Looking at the geometry of the maze, it is clear that every policy in the isometry class of an optimal policy is optimal. We therefore expect GWIL to recover an optimal policy in the agent's domain. Figure 3 shows that GWIL indeed recovers an optimal policy.

5.2 AGENT AND EXPERT HAVE SLIGHTLY DIFFERENT STATE AND ACTION SPACES

We evaluate here the capacity of IL methods to transfer under a transformation that does not have to be rigid but whose correspondence is still apparent from looking at the domains. A good example of such a transformation is the one between the pendulum and the cartpole. The pendulum is our expert's domain while the cartpole constitutes our agent's domain. The expert is trained on the swingup task. Even though the transformation is not rigid, GWIL is able to recover the optimal behavior in the agent's domain, as shown in Figure 4. Notice that the pendulum and the cartpole do not have the same state-action space dimension: the pendulum has 3 dimensions while the cartpole has 5. Therefore GWIL can indeed be applied to transfer between problems of different dimension.

Figure 5: Given a single expert trajectory in the cheetah's domain (above), GWIL recovers the two elements of the optimal policy's isometry class in the agent's domain (walker): moving forward, which is optimal (middle), and moving backward, which is suboptimal (below). Interestingly, the resulting walker behaves like a cheetah.

5.3 AGENT AND EXPERT HAVE SIGNIFICANTLY DIFFERENT STATE AND ACTION SPACES

We evaluate here the capacity of IL methods to transfer under non-trivial transformations between domains. A good example of such a transformation is between two arbitrarily different morphologies from the DeepMind Control Suite, such as the cheetah and the walker. The cheetah constitutes our expert's domain while the walker constitutes our agent's domain. The expert is trained on the run task. Although the mapping between these two domains is not trivial, minimizing the Gromov-Wasserstein distance alone enables the walker to learn to move backward and forward by imitating a cheetah. Since the isometry class of the optimal forward-moving policy of the cheetah and walker contains a suboptimal backward-moving element, we expect GWIL to recover one of these two behaviors. Indeed, depending on the seed used, GWIL produces a cheetah-imitating walker moving forward or a cheetah-imitating walker moving backward, as shown in Figure 5.

6 CONCLUSION

Our work demonstrates that optimal transport distances are a useful foundational tool for cross-domain imitation across incomparable spaces. Future directions include exploring:

1. Scaling to more complex environments and agents, towards the goal of transferring the structure of many high-dimensional demonstrations of complex tasks into an agent.
2. The use of GW to help agents explore in extremely sparse-reward environments when we have expert demonstrations available from other agents.
3. How GW compares to other optimal transport distances that can be applied between two metric MDPs, such as Alvarez-Melis et al. (2019), that have more flexibility over how the spaces are connected and what invariances the coupling has.
4. Metrics aware of the MDP's temporal structure, such as Zhou & Torre (2009); Vayer et al. (2020a); Cohen et al. (2021), that build on dynamic time warping (Müller, 2007). The Gromov-Wasserstein distance ignores the temporal information and ordering present within the trajectories.

REFERENCES

Pieter Abbeel and Andrew Y Ng. Apprenticeship learning via inverse reinforcement learning. In Proceedings of the Twenty-First International Conference on Machine Learning, pp. 1, 2004.

Pieter Abbeel, Adam Coates, and Andrew Y Ng. Autonomous helicopter aerobatics through apprenticeship learning. The International Journal of Robotics Research, 29(13):1608–1639, 2010.

David Alvarez-Melis and Tommi S. Jaakkola. Gromov-Wasserstein alignment of word embedding spaces. In Empirical Methods in Natural Language Processing, 2018.

David Alvarez-Melis, Stefanie Jegelka, and Tommi S. Jaakkola. Towards optimal transport with global invariances.
In Kamalika Chaudhuri and Masashi Sugiyama (eds.), Proceedings of the Twenty-Second International Conference on Artificial Intelligence and Statistics, volume 89 of Proceedings of Machine Learning Research, pp. 1870–1879. PMLR, 16–18 Apr 2019. URL https://proceedings.mlr.press/v89/alvarez-melis19a.html.

Haitham Bou Ammar, Eric Eaton, Paul Ruvolo, and Matthew E Taylor. Unsupervised cross-domain transfer in policy gradient reinforcement learning via manifold alignment. In Twenty-Ninth AAAI Conference on Artificial Intelligence, 2015.

Yusuf Aytar, Tobias Pfaff, David Budden, Tom Le Paine, Ziyu Wang, and Nando de Freitas. Playing hard exploration games by watching YouTube. arXiv preprint arXiv:1805.11592, 2018.

Christopher Berner, Greg Brockman, Brooke Chan, Vicki Cheung, Przemysław Dębiak, Christy Dennison, David Farhi, Quirin Fischer, Shariq Hashme, Chris Hesse, et al. Dota 2 with large scale deep reinforcement learning. arXiv preprint arXiv:1912.06680, 2019.

Samuel Cohen, Giulia Luise, Alexander Terenin, Brandon Amos, and Marc Deisenroth. Aligning time series on incomparable spaces. In International Conference on Artificial Intelligence and Statistics, pp. 1036–1044. PMLR, 2021.

Robert Dadashi, Léonard Hussenot, Matthieu Geist, and Olivier Pietquin. Primal Wasserstein imitation learning. arXiv preprint arXiv:2006.04678, 2020.

Sira Ferradans, Nicolas Papadakis, Julien Rabin, Gabriel Peyré, and Jean-François Aujol. Regularized discrete optimal transport. In International Conference on Scale Space and Variational Methods in Computer Vision, pp. 428–439. Springer, 2013.

Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. Generative adversarial nets. Advances in Neural Information Processing Systems, 27, 2014.

Abhishek Gupta, Coline Devin, Yu Xuan Liu, Pieter Abbeel, and Sergey Levine. Learning invariant feature spaces to transfer skills with reinforcement learning. arXiv preprint arXiv:1703.02949, 2017.

Tuomas Haarnoja, Aurick Zhou, Kristian Hartikainen, George Tucker, Sehoon Ha, Jie Tan, Vikash Kumar, Henry Zhu, Abhishek Gupta, Pieter Abbeel, et al. Soft actor-critic algorithms and applications. arXiv preprint arXiv:1812.05905, 2018.

Nicolas Heess, Dhruva TB, Srinivasan Sriram, Jay Lemmon, Josh Merel, Greg Wayne, Yuval Tassa, Tom Erez, Ziyu Wang, SM Eslami, et al. Emergence of locomotion behaviours in rich environments. arXiv preprint arXiv:1707.02286, 2017.

Donald Hejna, Lerrel Pinto, and Pieter Abbeel. Hierarchically decoupled imitation for morphological transfer. In International Conference on Machine Learning, pp. 4159–4171. PMLR, 2020.

Jonathan Ho and S. Ermon. Generative adversarial imitation learning. In NIPS, 2016.

Kuno Kim, Yihong Gu, Jiaming Song, Shengjia Zhao, and Stefano Ermon. Domain adaptive imitation learning. In International Conference on Machine Learning, pp. 5286–5295. PMLR, 2020.

Jens Kober, J Andrew Bagnell, and Jan Peters. Reinforcement learning in robotics: A survey. The International Journal of Robotics Research, 32(11):1238–1274, 2013.

Sergey Levine, Chelsea Finn, Trevor Darrell, and Pieter Abbeel. End-to-end training of deep visuomotor policies. The Journal of Machine Learning Research, 17(1):1334–1373, 2016.

Fangchen Liu, Zhan Ling, Tongzhou Mu, and Hao Su. State alignment-based imitation learning. arXiv preprint arXiv:1911.10947, 2019.

Yu Xuan Liu, Abhishek Gupta, Pieter Abbeel, and Sergey Levine.
Imitation from observation: Learning to imitate behaviors from raw video via context translation. In 2018 IEEE International Conference on Robotics and Automation (ICRA), pp. 1118–1125. IEEE, 2018.

Facundo Mémoli. Gromov-Wasserstein distances and the metric approach to object matching. Foundations of Computational Mathematics, 11(4):417–487, 2011.

Meinard Müller. Dynamic time warping. Information Retrieval for Music and Motion, pp. 69–84, 2007.

Andrew Y. Ng and Stuart J. Russell. Algorithms for inverse reinforcement learning. In Proceedings of the Seventeenth International Conference on Machine Learning, ICML '00, pp. 663–670, San Francisco, CA, USA, 2000. Morgan Kaufmann Publishers Inc. ISBN 1558607072.

Georgios Papagiannis and Yunpeng Li. Imitation learning with Sinkhorn distances. arXiv preprint arXiv:2008.09167, 2020.

Xue Bin Peng, Pieter Abbeel, Sergey Levine, and Michiel van de Panne. DeepMimic: Example-guided deep reinforcement learning of physics-based character skills. ACM Transactions on Graphics (TOG), 37(4):1–14, 2018.

Gabriel Peyré, Marco Cuturi, and Justin Solomon. Gromov-Wasserstein averaging of kernel and distance matrices. In International Conference on Machine Learning, pp. 2664–2672. PMLR, 2016.

Gabriel Peyré, Marco Cuturi, et al. Computational optimal transport: With applications to data science. Foundations and Trends in Machine Learning, 11(5-6):355–607, 2019.

D. Pomerleau. ALVINN: An autonomous land vehicle in a neural network. In NIPS, 1988.

D. Pomerleau. Efficient training of artificial neural networks for autonomous navigation. Neural Computation, 3:88–97, 1991.

Martin L Puterman. Markov decision processes: Discrete stochastic dynamic programming. John Wiley & Sons, 2014.

Dripta S Raychaudhuri, Sujoy Paul, Jeroen van Baar, and Amit K Roy-Chowdhury. Cross-domain imitation from observations. arXiv preprint arXiv:2105.10037, 2021.

Pierre Sermanet, Corey Lynch, Yevgen Chebotar, Jasmine Hsu, Eric Jang, Stefan Schaal, Sergey Levine, and Google Brain. Time-contrastive networks: Self-supervised learning from video. In 2018 IEEE International Conference on Robotics and Automation (ICRA), pp. 1134–1141. IEEE, 2018.

Bradly C Stadie, Pieter Abbeel, and Ilya Sutskever. Third-person imitation learning. arXiv preprint arXiv:1703.01703, 2017.

Karl-Theodor Sturm. The space of spaces: Curvature bounds and gradient flows on the space of metric measure spaces. arXiv preprint arXiv:1208.0434, 2012.

Richard S Sutton and Andrew G Barto. Reinforcement learning: An introduction. MIT Press, 2018.

Yuval Tassa, Yotam Doron, Alistair Muldal, Tom Erez, Yazhe Li, Diego de Las Casas, David Budden, Abbas Abdolmaleki, Josh Merel, Andrew Lefrancq, et al. DeepMind Control Suite. arXiv preprint arXiv:1801.00690, 2018.

Emanuel Todorov, Tom Erez, and Yuval Tassa. MuJoCo: A physics engine for model-based control. In 2012 IEEE/RSJ International Conference on Intelligent Robots and Systems, pp. 5026–5033. IEEE, 2012.

Titouan Vayer. A contribution to optimal transport on incomparable spaces. arXiv preprint arXiv:2011.04447, 2020.

Titouan Vayer, L. Chapel, N. Courty, Rémi Flamary, Yann Soullard, and R. Tavenard. Time series alignment with global invariances. arXiv preprint arXiv:2002.03848, 2020a.

Titouan Vayer, Ievgen Redko, Rémi Flamary, and Nicolas Courty. Co-optimal transport. In H. Larochelle, M. Ranzato, R. Hadsell, M. F. Balcan, and H. Lin (eds.), Advances in Neural Information Processing Systems, volume 33, pp. 17559–17570.
Curran Associates, Inc., 2020b. URL https://proceedings.neurips.cc/paper/2020/file/cc384c68ad503482fb24e6d1e3b512ae-Paper.pdf.

Cédric Villani. Optimal transport: Old and new, volume 338. Springer, 2009.

Oriol Vinyals, Igor Babuschkin, Wojciech M Czarnecki, Michaël Mathieu, Andrew Dudzik, Junyoung Chung, David H Choi, Richard Powell, Timo Ewalds, Petko Georgiev, et al. Grandmaster level in StarCraft II using multi-agent reinforcement learning. Nature, 575(7782):350–354, 2019.

Kevin Zakka, Andy Zeng, Pete Florence, Jonathan Tompson, Jeannette Bohg, and Debidatta Dwibedi. XIRL: Cross-embodiment inverse reinforcement learning. arXiv preprint arXiv:2106.03911, 2021.

Feng Zhou and Fernando Torre. Canonical time warping for alignment of human behavior. Advances in Neural Information Processing Systems, 22:2286–2294, 2009.

Yuke Zhu, Ziyu Wang, Josh Merel, Andrei Rusu, Tom Erez, Serkan Cabi, Saran Tunyasuvunakool, János Kramár, Raia Hadsell, Nando de Freitas, et al. Reinforcement and imitation learning for diverse visuomotor skills. arXiv preprint arXiv:1802.09564, 2018.

A OPTIMIZATION OF THE PROXY REWARD

In this section we show that the proxy reward introduced in eq. (9) constitutes a learning signal that is easy to optimize using standard RL algorithms. Figure 6 shows proxy reward curves across 5 different seeds for the 3 environments. We observe that in each environment the SAC learner converges quickly and consistently to the asymptotic episodic return. Thus there is reason to think that the proxy reward introduced in eq. (9) will be similarly easy to optimize in other cross-domain imitation settings.

Figure 6: The proxy reward introduced in eq. (9) gives a learning signal that is easily optimized using a standard RL algorithm.

B TRANSFER TO SPARSE-REWARD ENVIRONMENTS

In this section we show that GWIL can be used to facilitate learning in sparse-reward environments when the learner only has access to one expert demonstration from another domain. We compare GWIL to a baseline learner that has access to a single demonstration from the same domain and minimizes the Wasserstein distance, as done in Dadashi et al. (2020). In these experiments, both agents are given a sparse reward signal in addition to their respective optimal transport proxy reward. We perform experiments in two sparse-reward environments. In the first environment, the agent controls a point mass in a maze and obtains a non-zero reward only if it reaches the end of the maze. In the second environment, which is a sparse version of cartpole, the agent controls a cartpole and obtains a non-zero reward only if it keeps the pole up for 10 consecutive time steps. Note that a SAC agent fails to learn any meaningful behavior in either environment. Figure 7 shows that GWIL is competitive with the baseline learner in the sparse maze environment even though GWIL only has access to a demonstration from another domain, while the baseline learner has access to a demonstration from the same domain. Thus there is reason to think that GWIL efficiently and reliably extracts useful information from the expert domain and hence should work well in other cross-domain imitation settings.

Figure 7: In sparse-reward environments, GWIL obtains performance similar to a baseline learner minimizing the Wasserstein distance to an expert in the same domain.
C SCALABILITY OF GWIL

In this section we show that our implementation of GWIL offers good performance in terms of wall-clock time. Note that the bottleneck of our method is the computation of the optimal coupling, which only depends on the number of time steps in the trajectories, and not on the dimensions of the expert and agent spaces. Hence our method naturally scales with the dimension of the problems. Furthermore, while we have not used any entropy regularizer in our experiments, entropy-regularized methods have been introduced to enable Gromov-Wasserstein to scale to demanding machine learning tasks and can easily be incorporated into our code to further improve scalability.

Figure 8 compares the time taken by GWIL in the maze with the time taken by the baseline learner introduced in the previous section. It shows that imitating with Gromov-Wasserstein requires the same order of wall-clock time as imitating with Wasserstein. Figure 9 compares the wall-clock time taken by a walker imitating a cheetah using GWIL to reach walking speed (i.e., a horizontal velocity of 1) with the wall-clock time taken by a SAC walker trained to run. It shows that a GWIL walker imitating a cheetah reaches walking speed faster than a SAC agent trained to run. Even though the SAC agent is optimizing for standing in addition to running, it was not obvious that GWIL could compete with SAC in terms of wall-clock time. These results give hope that GWIL has the potential to scale to more complex problems (possibly with an additional entropy regularizer) and to be a useful way to learn by analogy.

Figure 8: In the sparse maze environment, GWIL requires the same order of wall-clock time as a baseline learner minimizing the Wasserstein distance to an expert in the same domain.

Figure 9: A GWIL walker imitating a cheetah reaches walking speed faster, in terms of wall-clock time, than a SAC walker trained to run.
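As a pointer for the entropy-regularized option mentioned above, the following is a sketch, not part of the paper's experiments (which use the unregularized conditional-gradient solver), of computing an entropic Gromov-Wasserstein coupling with the POT library; the point clouds and the regularization strength epsilon are illustrative.

```python
# Sketch of the entropy-regularized Gromov-Wasserstein coupling mentioned in
# Appendix C as a way to further scale GWIL. The paper's experiments use the
# unregularized conditional-gradient solver instead. Assumes the POT library;
# the point clouds and epsilon are illustrative.
import numpy as np
import ot

rng = np.random.default_rng(0)
expert_sa = rng.normal(size=(200, 4))  # expert state-action pairs
agent_sa = rng.normal(size=(250, 7))   # agent state-action pairs

C_E = ot.dist(expert_sa, expert_sa, metric="euclidean")
C_A = ot.dist(agent_sa, agent_sa, metric="euclidean")
p = np.full(len(expert_sa), 1.0 / len(expert_sa))
q = np.full(len(agent_sa), 1.0 / len(agent_sa))

# Sinkhorn-like entropic GW solver; epsilon trades accuracy for speed and
# numerical stability.
coupling = ot.gromov.entropic_gromov_wasserstein(
    C_E, C_A, p, q, loss_fun="square_loss", epsilon=0.05)
print(coupling.shape)  # (200, 250): soft correspondence between time steps
```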