# Imitation Learning via Kernel Mean Embedding

Kee-Eung Kim, School of Computer Science, KAIST (kekim@cs.kaist.ac.kr) and Hyun Soo Park, Department of Computer Science and Engineering, University of Minnesota (hspark@umn.edu)

Imitation learning refers to the problem where an agent learns a policy that mimics the demonstration provided by the expert, without any information on the cost function of the environment. Classical approaches to imitation learning usually rely on a restrictive class of cost functions that best explains the expert's demonstration, exemplified by linear functions of pre-defined features on states and actions. We show that the kernelization of a classical algorithm naturally reduces imitation learning to a distribution learning problem, where the imitation policy tries to match the state-action visitation distribution of the expert. Closely related to our approach is the recent work on leveraging generative adversarial networks (GANs) for imitation learning, but our reduction to distribution learning is much simpler, robust to scarce expert demonstration, and sample efficient. We demonstrate the effectiveness of our approach on a wide range of high-dimensional control tasks.

In imitation learning, an agent learns to behave by mimicking the demonstration provided by the expert, situated in an environment with an unknown cost function. A classical approach to imitation learning is behavioral cloning, where the policy is directly learned to map from states to actions by supervised learning (Sammut 2010). Unfortunately, this straightforward approach does not generalize well to unseen states, often requiring a large amount of training data. A more principled approach is apprenticeship learning (AL), where a policy is sought that is guaranteed to perform at least as well as the expert (Russell 1998; Ng, Russell, and others 2000; Abbeel and Ng 2004). However, to formally meet this guarantee, AL algorithms typically assume a restrictive class of cost functions and a planner that yields a sufficiently accurate optimal policy for any cost function in the class. This does not reflect the complex nature of high-dimensional dynamics in real-world problems.

On the other hand, deep neural networks have shown strong predictive power in modeling complex functions: the parametric function represented by a network is highly flexible and expressive, and can be efficiently trained by stochastic gradient descent. Representing the cost function and the agent policy using neural networks should therefore yield a plausible policy that faithfully imitates the expert's demonstrated behaviors in high-dimensional control tasks. In this line of thought, Ho and Ermon (2016) presented generative adversarial imitation learning (GAIL), which casts the objective of imitation learning as the training objective of generative adversarial networks (GANs) (Goodfellow et al. 2014). The key insight behind GAIL is that imitation learning reduces to matching the state-action visitation distribution (i.e. occupancy measure) of the learned policy to that of the expert policy, under a suitable choice of the penalty on the cost function. However, GAIL often exhibits unstable training in practice due to the alternating optimization of the generator and discriminator networks to address the minimax objective function, a well-known challenge in training GANs.
In this work, we show that extending the class of cost functions to a reproducing kernel Hilbert space (RKHS) alternatively reduces imitation learning to a distribution learning problem under the maximum mean discrepancy (MMD), a metric on probability distributions defined in the RKHS. This parallels the reduction underlying GAIL; however, our derivation is much simpler and more natural. Although the derivation is almost immediate, our work is the first to present the kernelization of a classical AL algorithm (Abbeel and Ng 2004) and to establish analogies with the state-of-the-art imitation learning algorithm, i.e. GAIL. The advantage of our approach is that training becomes simpler yet robust to local optima, since the hard minimax optimization is avoided. As an end result, our work becomes closely related to generative moment matching networks (GMMNs) (Li, Swersky, and Zemel 2015) and MMD nets (Dziugaite, Roy, and Ghahramani 2015), two approaches to training deep generative neural networks using the MMD. Our experiments on the same set of high-dimensional control imitation tasks with identical settings as in the GAIL paper, with the largest task involving 376 observation and 17 action dimensions, demonstrate that the proposed approach performs better than or on a par with GAIL, and significantly outperforms GAIL particularly when the expert demonstration is scarce, with performance gains of up to 41%.

## Background

### MDPs and Imitation Learning

We define basic notation for our problem setting and briefly review relevant work in the literature. We assume learning in an environment that can be modeled as a Markov decision process (MDP), with state space $S$, action space $A$, transition model $p(s'|s,a)$, initial state distribution $p_0(s)$, and cost function $c(s,a)$. We assume the total discounted cost with discount rate $\gamma$, so that the long-term cost of a policy $\pi : S \rightarrow \mathcal{P}(A)$ is defined as

$$J_c(\pi) \triangleq \mathbb{E}_{s_0, a_0, \ldots}\Big[\textstyle\sum_{t=0}^{\infty} \gamma^t c(s_t, a_t)\Big],$$

where the trajectory $[s_0, a_0, \ldots]$ is generated by $s_0 \sim p_0(s_0)$, $s_{t+1} \sim p(s_{t+1}|s_t, a_t)$, and $a_t \sim \pi(a_t|s_t)$ for $t \geq 0$. We also define the state-action value function $Q^\pi_c(s,a) \triangleq \mathbb{E}_{s_t=s, a_t=a, \ldots}\big[\sum_{t'=t}^{\infty} \gamma^{t'-t} c(s_{t'}, a_{t'})\big]$, the state value function $V^\pi_c(s) \triangleq \mathbb{E}_{a \sim \pi(\cdot|s)}[Q^\pi_c(s,a)]$, and the advantage function $A^\pi_c(s,a) \triangleq Q^\pi_c(s,a) - V^\pi_c(s)$.

For any policy $\pi$, there is a one-to-one correspondence between the policy and its occupancy measure (Puterman 1994)

$$\rho_\pi(s,a) \triangleq \mathbb{E}_{s_0, a_0, \ldots}\Big[\textstyle\sum_{t=0}^{\infty} \gamma^t \delta_{s,a}(s_t, a_t)\Big] = \textstyle\sum_{t=0}^{\infty} \gamma^t \, p(s_t = s, a_t = a), \tag{1}$$

where $\delta$ is the Kronecker delta function. Essentially, this is an unnormalized visitation distribution over states and actions, which sums to $1/(1-\gamma)$. The long-term cost is then simply the expected cost under the occupancy measure, $\sum_{s,a} c(s,a)\,\rho_\pi(s,a) \triangleq \mathbb{E}_{(s,a) \sim \rho_\pi}[c(s,a)]$.

When the state and the action spaces are large or continuous, the classical approach is to work with pre-defined features $\phi(s,a) \in \mathbb{R}^K$ and assume a cost function linear in the features: $c(s,a) = w \cdot \phi(s,a)$. In this case, it is easy to see that using the feature expectation

$$\mu_\pi \triangleq \mathbb{E}_{s_0, a_0, \ldots}\Big[\textstyle\sum_{t=0}^{\infty} \gamma^t \phi(s_t, a_t)\Big] = \mathbb{E}_{(s,a) \sim \rho_\pi}[\phi(s,a)], \tag{2}$$

the long-term cost is $J_c(\pi) = w \cdot \mu_\pi$ by the linearity of expectation.

In imitation learning, we assume a demonstration dataset $\mathcal{D}_{\pi_E}$ provided by the expert, whose behavior is governed by a policy $\pi_E$ unknown to the agent. The goal is to compute a policy $\pi$ that best approximates $\pi_E$ given the demonstration dataset, without any further information about the underlying MDP model, except a simulator that can sample trajectories given any policy $\pi$.
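As a concrete illustration of Eqn. (2), the feature expectation can be estimated by a discounted Monte-Carlo average over sampled trajectories. The sketch below is illustrative only; the helper name and the assumed input format (each trajectory supplied as an array of per-step feature vectors $\phi(s_t, a_t)$) are our own conventions, not part of any released implementation.

```python
import numpy as np

def estimate_feature_expectation(trajectories, gamma=0.99):
    """Monte-Carlo estimate of the feature expectation in Eqn. (2).

    `trajectories` is a list of trajectories, each an array of shape (T, K)
    holding the feature vector phi(s_t, a_t) for every time step t.
    Returns the average discounted sum of features across trajectories.
    """
    estimates = []
    for features in trajectories:
        T, _ = features.shape
        discounts = gamma ** np.arange(T)        # gamma^t for t = 0..T-1
        estimates.append(discounts @ features)   # sum_t gamma^t * phi(s_t, a_t)
    return np.mean(estimates, axis=0)            # average over sampled trajectories
```

With a linear cost $c(s,a) = w \cdot \phi(s,a)$, an estimate of the long-term cost $J_c(\pi)$ is then simply the dot product of $w$ with this estimated feature expectation.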
Apprenticeship learning (AL) is an approach to imitation learning that formalizes the problem by defining a class of cost functions $\mathcal{C}$ and seeking a policy $\pi$ that performs as well as $\pi_E$ for all $c \in \mathcal{C}$, i.e.

$$J_c(\pi) \leq J_c(\pi_E) \quad \text{for all } c \in \mathcal{C}.$$

Note that if the true cost function is in $\mathcal{C}$, a policy $\pi$ that satisfies this inequality is guaranteed to perform as well as, or better than, $\pi_E$. This constraint satisfaction problem can be reformulated as an optimization problem with the objective

$$\min_\pi \; \bar{\psi}_{\mathcal{C}}(\pi, \pi_E) \quad \text{where} \quad \bar{\psi}_{\mathcal{C}}(\pi, \pi_E) = \max_{c \in \mathcal{C}} \big[J_c(\pi) - J_c(\pi_E)\big]. \tag{3}$$

The AL algorithm by Abbeel and Ng (2004) can be seen as choosing the cost function class $\mathcal{C}_{\ell_2} = \{c(s,a) = w \cdot \phi(s,a) \,:\, \|w\|_2 \leq 1\}$, which yields

$$\bar{\psi}_{\mathcal{C}_{\ell_2}}(\pi, \pi_E) = \max_{\|w\|_2 \leq 1} \big[w \cdot \mu_\pi - w \cdot \mu_{\pi_E}\big], \tag{4}$$

whereas Multiplicative Weights Apprenticeship Learning (MWAL) by Syed and Schapire (2007) is associated with the cost function class $\mathcal{C}_{\Delta} = \{w \cdot \phi(s,a) \,:\, w \in \Delta\}$, where $\Delta$ denotes the simplex constraint, i.e. $w \geq 0$ and $\sum_i w_i = 1$. This yields

$$\bar{\psi}_{\mathcal{C}_{\Delta}}(\pi, \pi_E) = \max_{w \in \Delta} \big[w \cdot \mu_\pi - w \cdot \mu_{\pi_E}\big]. \tag{5}$$

The minimax optimization problem in Eqn. (3) typically involves repeated computations of optimal policies for intermediate cost functions (Abbeel and Ng 2004; Syed and Schapire 2007). This is intractable for large-scale problems, especially those with continuous action spaces. Ho, Gupta, and Ermon (2016) presented a gradient-based stochastic optimization approach, where the parameterized policy (typically a neural network) $\pi_\theta$ is found by alternating between computing the cost function $c^*$ that attains the maximum in $\bar{\psi}_{\mathcal{C}}$ with $\pi_\theta$ fixed, and then improving $\pi_\theta$ by gradient descent using $\nabla_\theta \bar{\psi}_{c^*}(\pi_\theta, \pi_E)$. This approach can be used with different cost function classes, e.g. $\mathcal{C}_{\ell_2}$ or $\mathcal{C}_{\Delta}$. In the experiments, we will refer to these two gradient-based versions of AL as Feature Expectation Matching (FEM) and Game-Theoretic Apprenticeship Learning (GTAL), respectively.

### Kernel Mean Embedding and MMD

This paper seeks a kernel-based imitation learning algorithm that does not rely on explicit features $\phi(s,a)$. The very first step is to realize that the feature expectation given in Eqn. (2) is actually the mean embedding of the distribution $\rho_\pi$ using the feature $\phi$. This motivates the use of kernel mean embedding, which extends the classical kernel approach to probability distributions (Smola et al. 2007). Specifically, choosing a kernel implies an implicit feature map $\phi$ that represents a probability distribution $p$ as its mean function

$$\mu_p \triangleq \int_X k(x, \cdot) \, dp(x) = \int_X \phi(x) \, dp(x),$$

which is an element of the reproducing kernel Hilbert space (RKHS) $\mathcal{H}$ of functions on $X$ with reproducing kernel $k$.

This approach has a number of useful properties. First, since $\mathcal{H}$ is an RKHS, by the Riesz representation theorem there is a feature map $\phi(x)$ from $X$ to $\mathcal{H}$ such that $f(x) = \langle f, \phi(x) \rangle_{\mathcal{H}} = \langle f, k(x, \cdot) \rangle_{\mathcal{H}}$ whenever $f$ belongs to $\mathcal{H}$ (Reed and Simon 1980; Steinwart and Christmann 2008). Note that this implies $\langle \phi(x), \phi(y) \rangle_{\mathcal{H}} = k(x, y)$, and in addition, $\mathbb{E}_{x \sim p}[f(x)] = \langle f, \mu_p \rangle_{\mathcal{H}}$. Second, for a class of kernels known as characteristic kernels, the mean map $p \mapsto \mu_p$ is injective, i.e. $\|\mu_p - \mu_q\|_{\mathcal{H}} = 0$ if and only if $p$ and $q$ are the same distribution ($p = q$). Hence, the kernel mean embedding can be used to define a metric on probability distributions (Fukumizu, Bach, and Jordan 2004; Sriperumbudur et al. 2008).

The maximum mean discrepancy (MMD) (Gretton et al. 2012) is an instance of the integral probability metric (IPM) (Müller 1997) defined with an RKHS:

$$\mathrm{MMD}[\mathcal{H}, p, q] \triangleq \sup_{\|f\|_{\mathcal{H}} \leq 1} \left[ \int f(x) \, dp(x) - \int f(y) \, dq(y) \right] = \sup_{\|f\|_{\mathcal{H}} \leq 1} \langle \mu_p - \mu_q, f \rangle_{\mathcal{H}} = \|\mu_p - \mu_q\|_{\mathcal{H}}. \tag{6}$$

Hence, $\mathrm{MMD}[\mathcal{H}, p, q] = 0$ if and only if $p = q$ with a characteristic kernel.
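The parallel with the classical objective in Eqn. (4) is worth making explicit: by the Cauchy-Schwarz inequality (assuming $\mu_\pi \neq \mu_{\pi_E}$), the supremum over the unit $\ell_2$ ball has a closed form, so the classical objective is itself a distance between finite-dimensional mean embeddings, and the kernelized objective developed in the next section generalizes exactly this observation to the RKHS:

$$\bar{\psi}_{\mathcal{C}_{\ell_2}}(\pi, \pi_E) = \max_{\|w\|_2 \leq 1} w \cdot (\mu_\pi - \mu_{\pi_E}) = \|\mu_\pi - \mu_{\pi_E}\|_2, \qquad w^* = \frac{\mu_\pi - \mu_{\pi_E}}{\|\mu_\pi - \mu_{\pi_E}\|_2}.$$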
Given two sets of samples $\mathcal{D}_X = \{x_i\}_{i=1}^{N}$ and $\mathcal{D}_Y = \{y_j\}_{j=1}^{M}$ from $p$ and $q$ respectively, a (biased) empirical estimate of the MMD is obtained by

$$\widehat{\mathrm{MMD}}^2[\mathcal{H}, \mathcal{D}_X, \mathcal{D}_Y] = \|\hat{\mu}_p - \hat{\mu}_q\|^2_{\mathcal{H}} = \Big\|\frac{1}{N}\sum_{i=1}^{N}\phi(x_i) - \frac{1}{M}\sum_{j=1}^{M}\phi(y_j)\Big\|^2_{\mathcal{H}} = \frac{1}{N^2}\sum_{i,i'} k(x_i, x_{i'}) + \frac{1}{M^2}\sum_{j,j'} k(y_j, y_{j'}) - \frac{2}{NM}\sum_{i,j} k(x_i, y_j).$$

The function $f^*$ that attains the supremum in Eqn. (6) is called the witness function. Since $f^* \propto \mu_p - \mu_q$, we have

$$f^*(z) \propto \langle \mu_p - \mu_q, \phi(z) \rangle_{\mathcal{H}} = \mathbb{E}_{x \sim p}[k(x, z)] - \mathbb{E}_{y \sim q}[k(y, z)],$$
$$\hat{f}^*(z) \propto \langle \hat{\mu}_p - \hat{\mu}_q, \phi(z) \rangle_{\mathcal{H}} = \frac{1}{N}\sum_{i=1}^{N} k(x_i, z) - \frac{1}{M}\sum_{j=1}^{M} k(y_j, z),$$

where $\hat{f}^*$ is the empirical estimate of $f^*$, and the normalizing constant is $\|\mu_p - \mu_q\|_{\mathcal{H}}$.

## Generative Moment Matching Imitation Learning

The cost function class in the imitation learning objective (Eqn. (4)), i.e. linear functions on pre-defined features, is too restrictive. We want to learn with a more expressive class of cost functions, yet keep the optimization tractable. The main idea behind our approach is to reformulate the imitation learning objective using kernels. First, we assume the cost function class $\mathcal{C}_{\mathcal{H}} = \{c(s,a) = \langle c, \phi(s,a) \rangle_{\mathcal{H}} \,:\, \|c\|_{\mathcal{H}} \leq 1\}$ defined in an RKHS $\mathcal{H}$. Then, the objective function becomes

$$\bar{\psi}_{\mathcal{C}_{\mathcal{H}}}(\pi, \pi_E) = \sup_{\|c\|_{\mathcal{H}} \leq 1} \left[\int c(s,a) \, d\rho_\pi(s,a) - \int c(s',a') \, d\rho_{\pi_E}(s',a')\right] = \sup_{\|c\|_{\mathcal{H}} \leq 1} \langle \mu_\pi - \mu_{\pi_E}, c \rangle_{\mathcal{H}}, \tag{7}$$

where $\mu_\pi$ and $\mu_{\pi_E}$ are now the kernel mean embeddings of $\rho_\pi$ and $\rho_{\pi_E}$. This is exactly the MMD between the two distributions $\rho_\pi$ and $\rho_{\pi_E}$, i.e. $\mathrm{MMD}[\mathcal{H}, \rho_\pi, \rho_{\pi_E}]$. Hence, imitation learning naturally reduces to matching the state-action distribution of the agent to that of the expert, where we use the MMD as the metric between the two distributions.

This also yields an insight into how our approach relates to Generative Adversarial Imitation Learning (GAIL) (Ho and Ermon 2016). GAIL casts the minimax optimization problem in Eqn. (3) as the training objective of Generative Adversarial Networks (GANs). It was shown in the GAIL paper that, instead of assuming a specific cost function class to be searched, a sophisticated penalty function defined directly over generic cost functions leads to the objective function

$$\bar{\psi}_{GA}(\pi_\theta, \pi_E) = \max_{d \in (0,1)^{S \times A}} \; \mathbb{E}_{(s,a) \sim \rho_{\pi_\theta}}[\log d(s,a)] + \mathbb{E}_{(s,a) \sim \rho_{\pi_E}}[\log(1 - d(s,a))], \tag{8}$$

where $d$ is a discriminator (binary classifier) defined over all possible states and actions. Hence, optimizing $\min_\theta \bar{\psi}_{GA}(\pi_\theta, \pi_E)$ essentially becomes training a GAN, where the discriminator $d(s,a)$ tries to discriminate between the trajectories from $\pi_\theta$ and those from $\pi_E$, and the generator is the policy $\pi_\theta$ that tries to minimize the long-term cost with cost function $c(s,a) = \log d(s,a)$. It can be shown that the optimum of $\bar{\psi}_{GA}(\pi, \pi_E)$ is the Jensen-Shannon divergence between the occupancy measures of $\pi$ and $\pi_E$, $\mathrm{JS}(\rho_\pi, \rho_{\pi_E}) \triangleq \mathrm{KL}\big(\rho_\pi \,\|\, (\rho_\pi + \rho_{\pi_E})/2\big) + \mathrm{KL}\big(\rho_{\pi_E} \,\|\, (\rho_\pi + \rho_{\pi_E})/2\big)$ (Goodfellow et al. 2014; Ho and Ermon 2016). In summary, assuming that we can obtain the optimal $d$ inside $\bar{\psi}_{GA}(\pi, \pi_E)$, the optimization objective of GAIL becomes $\min_\theta \mathrm{JS}(\rho_{\pi_\theta}, \rho_{\pi_E})$, where we are again trying to match the state-action distribution of the agent to that of the expert, except that the probability metric is the Jensen-Shannon divergence instead of the MMD.

Our reformulation of the imitation learning objective using kernels has three immediate advantages over GAIL. First, we have shown that imitation learning can be reduced to matching two probability distributions via a much simpler argument using kernel mean embedding. Second, we avoid the alternating minimax optimization of GANs, which is known to be prone to poor local convergence. In GAIL, the cost function is defined as the discriminator of the GAN, which is not an exact optimum of Eqn. (8) but an approximation represented by a neural network. In contrast, the cost function in our approach is the witness function of the MMD, which is the exact closed-form optimum of Eqn. (7). Lastly, and practically, we can monitor the progress of training via the MMD, which indicates how well the policy is imitating; in contrast, the JS estimate obtained from a GAN is known to be not very meaningful as a progress measure.
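Since both the progress measure (the MMD) and the cost (the witness function) have closed-form empirical estimates, they can be computed directly from Gram matrices. The sketch below is illustrative only; the helper names and the scalar `kernel(x, y)` interface are our assumptions, not the released implementation.

```python
import numpy as np

def gram(X, Y, kernel):
    """Gram matrix K[i, j] = k(X[i], Y[j]) for a generic scalar kernel function."""
    return np.array([[kernel(x, y) for y in Y] for x in X])

def mmd2_biased(X, Y, kernel):
    """Biased empirical MMD^2 between samples X ~ p and Y ~ q (the estimator above)."""
    Kxx, Kyy, Kxy = gram(X, X, kernel), gram(Y, Y, kernel), gram(X, Y, kernel)
    return Kxx.mean() + Kyy.mean() - 2.0 * Kxy.mean()

def witness(Z, X, Y, kernel):
    """Unnormalized empirical witness function evaluated at points Z:
    (1/N) sum_i k(x_i, z) - (1/M) sum_j k(y_j, z)."""
    return gram(X, Z, kernel).mean(axis=0) - gram(Y, Z, kernel).mean(axis=0)
```

In the imitation setting, $X$ plays the role of state-action pairs sampled from the current policy and $Y$ those of the expert, so `witness` evaluated at the policy's own samples yields the cost signal used for the policy update in the algorithm below.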
Our algorithm is shown in Algorithm 1 below, which we call generative moment matching imitation learning (GMMIL). We assume a multi-layer neural network for representing the policy, e.g. a Gaussian distribution with its mean and precision parameterized by neural networks that take raw state encodings as input, and use Trust Region Policy Optimization (TRPO) (Schulman et al. 2015) for optimizing the policy, with the witness function taken as the cost function.

**Algorithm 1: Generative Moment Matching Imitation Learning**

Input: expert dataset of trajectories $\mathcal{D}_{\pi_E} = \{(s^E_j, a^E_j)\}_{j=1}^{M}$. Initialize the policy parameter $\theta$. Then, for iteration $0, 1, \ldots$:

1. Sample trajectories $\mathcal{D}_{\pi_\theta} = \{(s_i, a_i)\}_{i=1}^{N}$ by executing $\pi_\theta$.
2. Calculate the empirical estimate of the witness function $\hat{f}^*(s,a)$ of $\mathrm{MMD}[\mathcal{D}_{\pi_\theta}, \mathcal{D}_{\pi_E}]$ for all $(s,a) \in \mathcal{D}_{\pi_\theta}$:
   $$\hat{f}^*(s,a) = \frac{1}{N}\sum_{i=1}^{N} k\big((s_i, a_i), (s,a)\big) - \frac{1}{M}\sum_{j=1}^{M} k\big((s^E_j, a^E_j), (s,a)\big).$$
3. Update the policy parameter $\theta$ using the TRPO gradient update with cost function $c(s,a) = \hat{f}^*(s,a)$.

We remark that our approach closely resembles Generative Moment Matching Networks (GMMNs) (Li, Swersky, and Zemel 2015) and MMD nets (Dziugaite, Roy, and Ghahramani 2015), where a neural network generator is trained to minimize the MMD between the training instances and generated samples. In our case, the neural network generator is the imitation policy, and the training instances are the expert trajectories.

### Occupancy measure as sum of distributions

Cautious readers may notice that the occupancy measure in Eqn. (1) is actually a discounted sum of probability distributions, one for each time step, rather than a stationary, unnormalized distribution independent of time. This is indeed true, and a simple re-calculation of the imitation loss (i.e. the revised MMD) becomes

$$\bar{\psi}_{\mathcal{C}_{\mathcal{H}}}(\pi, \pi_E) = \mathrm{MMD}[\mathcal{H}, \rho_\pi, \rho_{\pi_E}] = \|\mu_\pi - \mu_{\pi_E}\|_{\mathcal{H}},$$
$$\mu_\pi = \sum_{t=0}^{\infty} \gamma^t \int \phi(s,a) \, dp_\pi(s_t = s, a_t = a), \qquad \mu_{\pi_E} = \sum_{t=0}^{\infty} \gamma^t \int \phi(s,a) \, dp_{\pi_E}(s_t = s, a_t = a).$$

Assuming that the lengths of the trajectories are uniformly $T$ and the trajectory data are split into time steps, each containing $N/T$ instances, i.e. $\mathcal{D}^{(t)}_{\pi} = \{(s_i, a_i)^{(t)}\}_{i=1}^{N/T}$ and $\mathcal{D}^{(t)}_{\pi_E} = \{(s^E_j, a^E_j)^{(t)}\}_{j=1}^{N/T}$ for $t = 0, \ldots, T$, the empirical estimate becomes

$$\widehat{\mathrm{MMD}}^2[\mathcal{H}, \mathcal{D}_\pi, \mathcal{D}_{\pi_E}] = \sum_{t,t'} \gamma^{t+t'} \bigg[ \frac{1}{(N/T)^2} \sum_{i,i'} k\big((s_i,a_i)^{(t)}, (s_{i'},a_{i'})^{(t')}\big) - \frac{2}{(N/T)^2} \sum_{i,j} k\big((s_i,a_i)^{(t)}, (s^E_j,a^E_j)^{(t')}\big) + \frac{1}{(N/T)^2} \sum_{j,j'} k\big((s^E_j,a^E_j)^{(t)}, (s^E_{j'},a^E_{j'})^{(t')}\big) \bigg],$$

which requires $O(N^2)$ kernel evaluations. The evaluation of the cost function at a state-action pair requires $O(N)$ kernel evaluations, the same computational requirement as under the stationary unnormalized distribution assumption. Our implementation takes the latter approach, as the implementation was much simpler and the performance difference was negligible.

### Kernel selection

It is straightforward to see that the classical imitation learning objective in Eqn. (4) corresponds to using the linear kernel $k((s,a),(s',a')) = \phi(s,a) \cdot \phi(s',a')$, which makes $\mu_\pi = \mathbb{E}_{\rho_\pi}[\phi(s,a)]$, i.e. it retains only the first moment of $\rho_\pi$ in the feature space $\phi$. The polynomial kernel of degree $d$, given by $k((s,a),(s',a')) = (\phi(s,a) \cdot \phi(s',a') + 1)^d$, retains moments up to the $d$-th order. The Gaussian kernel, $k((s,a),(s',a')) = \exp\big(-\sigma^{-1}\|[s,a] - [s',a']\|_2^2\big)$, where we simply concatenate the raw encodings of states and actions, is a well-known example of a characteristic kernel, which ensures that $\rho_\pi = \rho_{\pi_E}$ if and only if $\mathrm{MMD}[\mathcal{H}, \rho_\pi, \rho_{\pi_E}] = 0$.
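Putting Algorithm 1 together with a Gaussian kernel on concatenated state-action encodings, a single GMMIL iteration can be sketched as follows. This is an illustrative outline only: `sample_trajectories` and `trpo_update` are hypothetical stand-ins for the rollout and TRPO routines, not functions from our code, and a single bandwidth is used for brevity (the implementation described next uses a sum of two such kernels).

```python
import numpy as np

def rbf_kernel(XA, XB, sigma):
    """Gaussian kernel on concatenated [state, action] vectors:
    k(x, y) = exp(-||x - y||^2 / sigma)."""
    sq_dists = ((XA[:, None, :] - XB[None, :, :]) ** 2).sum(-1)
    return np.exp(-sq_dists / sigma)

def gmmil_iteration(policy, expert_sa, sigma, sample_trajectories, trpo_update):
    """One iteration of Algorithm 1 (schematic).

    expert_sa: array of shape (M, dim_s + dim_a) with expert state-action pairs.
    sample_trajectories / trpo_update: hypothetical helpers for rollouts and TRPO.
    """
    agent_sa = sample_trajectories(policy)   # (N, dim_s + dim_a) sampled from pi_theta
    # Empirical witness function evaluated at the agent's own samples:
    # f_hat(s,a) = mean_i k((s_i,a_i),(s,a)) - mean_j k((s^E_j,a^E_j),(s,a))
    cost = (rbf_kernel(agent_sa, agent_sa, sigma).mean(axis=0)
            - rbf_kernel(expert_sa, agent_sa, sigma).mean(axis=0))
    return trpo_update(policy, agent_sa, cost)   # TRPO step with c(s,a) = f_hat(s,a)
```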
In our implementation, we used the sum of two Gaussian kernels with different bandwidth parameters $\{\sigma_1, \sigma_2\}$ to span multiple ranges of data points, similar in spirit to (Li, Swersky, and Zemel 2015). However, rather than using fixed values for the bandwidth parameters, we employed the median heuristic, which is theoretically well justified (Schölkopf and Smola 2002; Ramdas et al. 2015). The first bandwidth parameter $\sigma_1$ was selected as the median of the pairwise squared-$\ell_2$ distances among the data points from the expert policy and from the initial policy, since the initial policy would be vastly different from the expert policy. The second bandwidth parameter $\sigma_2$ was selected as the median of the pairwise squared-$\ell_2$ distances among the data points only from the expert policy, since the learned policy should be indistinguishable from the expert policy for a successful imitation. Although it remains future work to investigate the effectiveness of more sophisticated optimization of kernels, e.g. (Sutherland et al. 2017), we found that these two bandwidth parameters were sufficient for successful imitation in all tasks in our experiments.

### Multiple TRPO gradient updates per iteration

Algorithm 1 performs a single TRPO gradient update on $\pi_\theta$ per iteration, in order to make a direct comparison with GAIL. However, it turns out that we can perform multiple TRPO gradient updates per iteration without re-generating trajectories, allowing for more sample-efficient learning. In (Ho, Gupta, and Ermon 2016), it was shown that each TRPO step that improves $\pi_\theta$ using the trajectories $\mathcal{D}_{\pi_0} = \{(s_i, a_i)\}_{i=1}^{N}$ sampled by some other policy $\pi_0$ can be formulated as minimizing a local approximation to the imitation learning objective (the trust region constraint is not relevant to the discussion):

$$\bar{\psi}_{\mathcal{C}}(\pi_\theta, \pi_E) = \max_{c \in \mathcal{C}} \big[J_c(\pi_\theta) - J_c(\pi_E)\big] \approx \max_{c \in \mathcal{C}} \big[J_c(\pi_0) + A_{\pi_0,c}(\pi_\theta) - J_c(\pi_E)\big] \triangleq \tilde{\psi}_{\mathcal{C}}(\theta),$$

where $A_{\pi_0,c}(\pi_\theta) \triangleq \mathbb{E}_{s \sim \rho_{\pi_0}} \mathbb{E}_{a \sim \pi_\theta(\cdot|s)}[A^{\pi_0}_c(s,a)]$. The key step in reusing the trajectories is to re-formulate the objective using importance sampling:

$$\tilde{\psi}_{\mathcal{C}}(\theta) = \max_{c \in \mathcal{C}} \; \mathbb{E}_{\rho_{\pi_0}}[c(s,a)] - \mathbb{E}_{\rho_{\pi_E}}[c(s,a)] + \mathbb{E}_{\rho_{\pi_0}}\!\left[\frac{\pi_\theta(a|s)}{\pi_0(a|s)} \big(Q^{\pi_0}_c(s,a) - V^{\pi_0}_c(s)\big)\right] = \max_{c \in \mathcal{C}} \; \mathbb{E}_{\rho_{\pi_0}}[c(s,a)] - \mathbb{E}_{\rho_{\pi_E}}[c(s,a)] + \mathbb{E}_{\rho_{\pi_0}}\!\left[\Big(\frac{\pi_\theta(a|s)}{\pi_0(a|s)} - 1\Big) Q^{\pi_0}_c(s,a)\right].$$

Assuming the cost function class $\mathcal{C}_{\mathcal{H}}$, and introducing the notations $\mu_{\pi_0} \triangleq \mathbb{E}_{\rho_{\pi_0}}[\phi(s,a)]$, $\mu_{\pi_E} \triangleq \mathbb{E}_{\rho_{\pi_E}}[\phi(s,a)]$, and $\tilde{\mu}_{\pi_\theta} \triangleq \mathbb{E}_{\rho_{\pi_0}}\!\big[\big(\tfrac{\pi_\theta(a|s)}{\pi_0(a|s)} - 1\big)\phi(s,a)\big]$, we have

$$\tilde{\psi}_{\mathcal{C}_{\mathcal{H}}}(\theta) = \sup_{\|c\|_{\mathcal{H}} \leq 1} \langle \mu_{\pi_0} - \mu_{\pi_E} + \tilde{\mu}_{\pi_\theta}, c \rangle_{\mathcal{H}},$$

which results in the closed-form optimal cost function

$$c^*(s,a) = \frac{\langle \mu_{\pi_0} - \mu_{\pi_E} + \tilde{\mu}_{\pi_\theta}, \phi(s,a) \rangle_{\mathcal{H}}}{\|\mu_{\pi_0} - \mu_{\pi_E} + \tilde{\mu}_{\pi_\theta}\|_{\mathcal{H}}}.$$

The empirical estimate of the numerator in the optimal cost function is obtained by

$$\hat{c}^*(s,a) \propto \frac{1}{N}\sum_{i=1}^{N} k\big((s_i,a_i),(s,a)\big) - \frac{1}{M}\sum_{j=1}^{M} k\big((s^E_j,a^E_j),(s,a)\big) + \frac{1}{N}\sum_{i=1}^{N}\Big(\frac{\pi_\theta(a_i|s_i)}{\pi_0(a_i|s_i)} - 1\Big) k\big((s_i,a_i),(s,a)\big).$$

Thus, we can perform multiple TRPO gradient updates using the trajectories gathered once at the beginning of each iteration. We can similarly use importance sampling to perform multiple updates in GAIL, but this often makes the algorithm unstable, as we show in the experiments.
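As an illustration of this reuse step, the numerator of the importance-weighted cost above can be estimated with one extra weighted kernel average over the stored samples. The sketch below is illustrative only and assumes a Gram-matrix-returning `kernel(X, Y)` and stored log-probabilities for the current and data-collecting policies; none of these names come from the released code.

```python
import numpy as np

def reweighted_cost(agent_sa, expert_sa, logp_theta, logp_0, kernel):
    """Empirical estimate (up to normalization) of the importance-weighted
    optimal cost c*(s,a), evaluated at the agent's stored samples `agent_sa`.

    logp_theta, logp_0: log pi_theta(a_i|s_i) and log pi_0(a_i|s_i) for the
    stored samples, so the same rollouts can back several TRPO updates.
    kernel(X, Y): Gram matrix of shape (len(X), len(Y)).
    """
    K_aa = kernel(agent_sa, agent_sa)        # k((s_i,a_i), (s,a))
    K_ea = kernel(expert_sa, agent_sa)       # k((s^E_j,a^E_j), (s,a))
    w = np.exp(logp_theta - logp_0) - 1.0    # pi_theta/pi_0 - 1 per stored sample
    return (K_aa.mean(axis=0)
            - K_ea.mean(axis=0)
            + (w[:, None] * K_aa).mean(axis=0))
```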
## Experiments

We evaluated GMMIL against other imitation learning algorithms on 9 control tasks in the OpenAI Gym (Brockman et al. 2016). The control tasks include 3 low-dimensional classic control tasks (Cartpole, Acrobot, and Mountain Car) and 6 high-dimensional robotic control tasks (Reacher, Half Cheetah, Hopper, Walker, Ant, and Humanoid) that use the MuJoCo simulator (Todorov, Erez, and Tassa 2012). The largest control task, i.e. Humanoid, has 376 observation and 17 action dimensions.

The imitation learning algorithms that we evaluated against are: behavioral cloning, which uses supervised learning; feature expectation matching (FEM), which corresponds to the cost function class $\mathcal{C}_{\ell_2}$ (Abbeel and Ng 2004); game-theoretic apprenticeship learning (GTAL), which corresponds to the cost function class $\mathcal{C}_{\Delta}$ (Syed and Schapire 2007); and Generative Adversarial Imitation Learning (GAIL) (Ho and Ermon 2016). All the algorithms used raw state and action encodings as features, and we mostly leveraged the GAIL source code (https://github.com/openai/imitation) for implementing GMMIL and conducting the experiments. For a fair comparison, we used the same experimental settings as in (Ho and Ermon 2016), including exactly the same neural network architectures for the policies and the same optimizer parameters for TRPO.

Fig. 1 reports the performance of each imitation algorithm under varying numbers of expert trajectories (a tabularization of the results is provided in the supplementary material). The results show that while algorithms such as behavioral cloning, FEM, and GTAL suffered from poor performance, especially in the high-dimensional MuJoCo tasks, GAIL and GMMIL attain near-expert performance. In particular, as observed in the Reacher, Walker, and Ant tasks, the performance of GAIL significantly degraded when the expert trajectories were scarce. In contrast, GMMIL did not show any significant performance degradation.

We suspect that the performance degradation of GAIL is due to a number of issues related to training GANs. First, scarce expert trajectories pose a severe label imbalance for training the discriminator. The original implementation of GAIL simply scales the instance weights to mitigate the issue, but this does not seem to be enough. Second, the support overlap of the two distributions may become so small that the gradient for the imitation policy almost vanishes. This is a known problem associated with using the Jensen-Shannon divergence as the probability metric (Huszár 2015; Nowozin, Cseke, and Tomioka 2016; Arjovsky, Chintala, and Bottou 2017). Although GAIL could be improved via more recent work on GANs that addresses this issue, that is beyond the scope of this paper.

Fig. 2 shows how the discriminator (i.e. cost function) learning rate in GAIL affects learning the imitation policy. We report the performance of GAIL learned with the largest number of expert trajectories under varying learning rates. We can clearly observe that learning rates that are too large or too small often result in sub-optimal performance. On the other hand, GMMIL does not involve any learning of the cost function, which is instead obtained directly from the MMD. This figure highlights another practical advantage of using GMMIL.

Fig. 3 compares how quickly GMMIL and GAIL learn to imitate the expert policy, measured in terms of the return attained by the intermediate policies at each iteration. Each iteration amounts to sampling a set of trajectories from the environment and performing a TRPO gradient update.

Figure 1: Performance of the learned policy with varying amounts of expert data. The x-axis is the number of expert trajectories used for learning, and the y-axis is the return of the imitation policy, normalized so that the return of the expert policy is 1 and that of the random policy is 0. λ is the penalty weight on the causal entropy of the policy, used in GAIL (Ho and Ermon 2016).
Figure 2: Performance evaluations of GAIL with different learning rates for the cost function. We used the settings with the largest number of expert trajectories, i.e. 18 for Reacher and 25 for the Half Cheetah, Hopper, and Walker tasks.

Again, we show the results with the largest number of expert trajectories, but the results were similar for the other settings. We omitted the results from the classic control tasks since they were too easy for both algorithms. Note that in most of the tasks, GMMIL converges in fewer iterations than GAIL, exhibiting that GMMIL is more sample-efficient than GAIL in that it requires fewer samples from the environment. In terms of computation time, evaluating the witness function on $N$ samples takes $O(N^2)$, but our GPU-based implementation takes overall time on par with GAIL.

Figure 3: Performance of the learned policy during training iterations, from GMMIL (red) and GAIL (blue). The x-axis is the iteration, and the y-axis is the scaled return of the policy. We used the settings with the largest number of expert trajectories, i.e. 25 for the Half Cheetah, Hopper, Walker, and Ant tasks and 240 for the Humanoid task. These results are from a single TRPO gradient update per iteration.

Fig. 4 compares the convergence rates of GMMIL and GAIL, both performing 5 TRPO gradient updates per iteration using importance sampling to reuse the trajectories gathered at the beginning of each iteration. We stress that importance sampling does not in principle guarantee improved sample efficiency, as its effectiveness depends on a number of factors, such as the variance of the importance weights. The figure shows the results on the higher-dimensional tasks without tuning the parameters. GMMIL always showed stable convergence, and it was slower than the single-update method in only one of the tasks (i.e. Ant). In contrast, GAIL failed to show meaningful training progress in many of the tasks. This also highlights the advantage of using the MMD instead of a GAN, since the best cost function is found exactly in each iteration rather than gradually fit using a neural network.

Figure 4: Performance of the learned policy during training iterations using importance sampling to perform 5 TRPO gradient updates per iteration, for both GMMIL (red) and GAIL (blue). Note that the x-axis scales are different from Fig. 3.

## Summary and Future Work

In this paper, we presented GMMIL, a simple yet effective imitation learning algorithm that is essentially a kernelized version of the classical apprenticeship learning algorithm (Abbeel and Ng 2004). Compared to GAIL, our approach provides an alternative reduction of imitation learning to a distribution matching problem, with a much simpler argument. The training objective is to minimize the loss measured by the MMD between the occupancy measures of the learned policy and the expert policy. This allows our approach to avoid the hard minimax optimization of GANs inherent to training GAIL, which results in more robust and sample-efficient imitation learning. As an end result, our approach becomes an imitation learning version of GMMNs (Li, Swersky, and Zemel 2015) and MMD nets (Dziugaite, Roy, and Ghahramani 2015).
Through an extensive set of experiments on high-dimensional robotic imitation tasks with up to 376 state variables and 17 action variables (i.e. Humanoid), we showed that GMMIL successfully imitates expert policies, even when the expert trajectories were scarcely provided. The returns obtained by the learned policy exhibited lower variance, hinting that using the MMD makes the overall optimization much more stable than the minimax optimization in GAIL. In addition, we showed that we can naturally reuse the trajectories via importance sampling, allowing for further improvements in sample efficiency.

As for future work, we would like to address several aspects in which our formulation could be improved. First of all, it is well known that the test power of the MMD degrades with the dimensionality of the data (Ramdas et al. 2015). Although we did not suffer from this issue in our experiments, it could become a concern in visual domains. Second, even though TRPO was mostly robust to the variance of the cost function, we still observed some cases where this was somewhat problematic in the last few iterations, both for GAIL and GMMIL. It would be interesting to develop a principled scheme for the cost function that warrants stable convergence of policy search algorithms.

## Acknowledgements

Kee-Eung Kim is supported by IITP/MSIT (2017-0-01778) and DAPA/ADD via KAIST HSVRC. Hyun Soo Park is supported by MnDRIVE Robotics, Sensing, and Advanced Manufacturing and Oculus/Facebook Research.

## References

- Abbeel, P., and Ng, A. Y. 2004. Apprenticeship learning via inverse reinforcement learning. In Proceedings of the 21st International Conference on Machine Learning.
- Arjovsky, M.; Chintala, S.; and Bottou, L. 2017. Wasserstein generative adversarial networks. In Proceedings of the 34th International Conference on Machine Learning.
- Brockman, G.; Cheung, V.; Pettersson, L.; Schneider, J.; Schulman, J.; Tang, J.; and Zaremba, W. 2016. OpenAI Gym.
- Dziugaite, G. K.; Roy, D. M.; and Ghahramani, Z. 2015. Training generative neural networks via maximum mean discrepancy optimization. In Proceedings of the Thirty-First Conference on Uncertainty in Artificial Intelligence, 258–267.
- Fukumizu, K.; Bach, F. R.; and Jordan, M. I. 2004. Dimensionality reduction for supervised learning with reproducing kernel Hilbert spaces. Journal of Machine Learning Research 5:73–99.
- Goodfellow, I.; Pouget-Abadie, J.; Mirza, M.; Xu, B.; Warde-Farley, D.; Ozair, S.; Courville, A.; and Bengio, Y. 2014. Generative adversarial nets. In Advances in Neural Information Processing Systems.
- Gretton, A.; Borgwardt, K. M.; Rasch, M. J.; Schölkopf, B.; and Smola, A. 2012. A kernel two-sample test. Journal of Machine Learning Research 13:723–773.
- Ho, J., and Ermon, S. 2016. Generative adversarial imitation learning. In Advances in Neural Information Processing Systems.
- Ho, J.; Gupta, J.; and Ermon, S. 2016. Model-free imitation learning with policy optimization. In Proceedings of the 33rd International Conference on Machine Learning.
- Huszár, F. 2015. How (not) to train your generative model: Scheduled sampling, likelihood, adversary? arXiv preprint arXiv:1511.05101.
- Li, Y.; Swersky, K.; and Zemel, R. S. 2015. Generative moment matching networks. In Proceedings of the 32nd International Conference on Machine Learning.
- Müller, A. 1997. Integral probability metrics and their generating classes of functions. Advances in Applied Probability 29:429–443.
- Ng, A. Y.; Russell, S. J.; et al. 2000. Algorithms for inverse reinforcement learning. In Proceedings of the 17th International Conference on Machine Learning.
- Nowozin, S.; Cseke, B.; and Tomioka, R. 2016. f-GAN: Training generative neural samplers using variational divergence minimization. In Advances in Neural Information Processing Systems.
- Puterman, M. L. 1994. Markov decision processes: Discrete stochastic dynamic programming. John Wiley & Sons.
- Ramdas, A.; Reddi, S. J.; Póczos, B.; Singh, A.; and Wasserman, L. A. 2015. On the decreasing power of kernel and distance based nonparametric hypothesis tests in high dimensions. In Proceedings of the 29th AAAI Conference on Artificial Intelligence.
- Reed, M., and Simon, B. 1980. Methods of Modern Mathematical Physics, Volume 1: Functional Analysis. Academic Press.
- Russell, S. 1998. Learning agents for uncertain environments. In Proceedings of the 11th Annual Conference on Computational Learning Theory.
- Sammut, C. 2010. Behavioral cloning. In Sammut, C., and Webb, G. I., eds., Encyclopedia of Machine Learning. Springer US. 93–97.
- Schölkopf, B., and Smola, A. J. 2002. Learning with Kernels: Support Vector Machines, Regularization, Optimization, and Beyond. MIT Press.
- Schulman, J.; Levine, S.; Abbeel, P.; Jordan, M. I.; and Moritz, P. 2015. Trust region policy optimization. In Proceedings of the 32nd International Conference on Machine Learning.
- Smola, A.; Gretton, A.; Song, L.; and Schölkopf, B. 2007. A Hilbert space embedding for distributions. In International Conference on Algorithmic Learning Theory. Springer.
- Sriperumbudur, B. K.; Gretton, A.; Fukumizu, K.; Lanckriet, G.; and Schölkopf, B. 2008. Injective Hilbert space embeddings of probability measures. In Proceedings of the 20th Annual Conference on Computational Learning Theory.
- Steinwart, I., and Christmann, A. 2008. Support Vector Machines. Springer Science & Business Media.
- Sutherland, D. J.; Tung, H.-Y.; Strathmann, H.; De, S.; Ramdas, A.; Smola, A.; and Gretton, A. 2017. Generative models and model criticism via optimized maximum mean discrepancy. In Proceedings of the 5th International Conference on Learning Representations.
- Syed, U., and Schapire, R. E. 2007. A game-theoretic approach to apprenticeship learning. In Advances in Neural Information Processing Systems.
- Todorov, E.; Erez, T.; and Tassa, Y. 2012. MuJoCo: A physics engine for model-based control. In 2012 IEEE/RSJ International Conference on Intelligent Robots and Systems, 5026–5033.