# Convolutional Kernel Networks

Julien Mairal, Piotr Koniusz, Zaid Harchaoui, and Cordelia Schmid
Inria (LEAR team, Inria Grenoble, Laboratoire Jean Kuntzmann, CNRS, Univ. Grenoble Alpes, France)
firstname.lastname@inria.fr

An important goal in visual recognition is to devise image representations that are invariant to particular transformations. In this paper, we address this goal with a new type of convolutional neural network (CNN) whose invariance is encoded by a reproducing kernel. Unlike traditional approaches where neural networks are learned either to represent data or for solving a classification task, our network learns to approximate the kernel feature map on training data. Such an approach enjoys several benefits over classical ones. First, by teaching CNNs to be invariant, we obtain simple network architectures that achieve a similar accuracy to more complex ones, while being easy to train and robust to overfitting. Second, we bridge a gap between the neural network literature and kernels, which are natural tools to model invariance. We evaluate our methodology on visual recognition tasks where CNNs have proven to perform well, e.g., digit recognition with the MNIST dataset, and the more challenging CIFAR-10 and STL-10 datasets, where our accuracy is competitive with the state of the art.

1 Introduction

We have recently seen a revival of attention given to convolutional neural networks (CNNs) [22] due to their high performance for large-scale visual recognition tasks [15, 21, 30]. The architecture of CNNs is relatively simple and consists of successive layers organized in a hierarchical fashion; each layer involves convolutions with learned filters followed by a pointwise non-linearity and a downsampling operation called "feature pooling". The resulting image representation has been empirically observed to be invariant to image perturbations and to encode complex visual patterns [33], which are useful properties for visual recognition. Training CNNs remains however difficult since high-capacity networks may involve billions of parameters to learn, which requires both high computational power, e.g., GPUs, and appropriate regularization techniques [18, 21, 30]. The exact nature of invariance that CNNs exhibit is also not precisely understood. Only recently has the invariance of related architectures been characterized; this is the case for the wavelet scattering transform [8] or the hierarchical models of [7].

Our work revisits convolutional neural networks, but we adopt a significantly different approach from the traditional one. Indeed, we use kernels [26], which are natural tools to model invariance [14]. Inspired by the hierarchical kernel descriptors of [2], we propose a reproducing kernel that produces multi-layer image representations. Our main contribution is an approximation scheme called convolutional kernel network (CKN) to make the kernel approach computationally feasible. Our approach is a new type of unsupervised convolutional neural network that is trained to approximate the kernel map. Interestingly, our network uses non-linear functions that resemble rectified linear units [1, 30], even though they were not handcrafted and naturally emerge from an approximation scheme of the Gaussian kernel map. By bridging a gap between kernel methods and neural networks, we believe that we are opening a fruitful research direction for the future. Our network is learned without supervision since the
label information is only used subsequently in a support vector machine (SVM). Yet, we achieve competitive results on several datasets such as MNIST [22], CIFAR-10 [20] and STL-10 [13] with simple architectures, few parameters to learn, and no data augmentation. Open-source code for learning our convolutional kernel networks is available on the first author's webpage.

1.1 Related Work

There have been several attempts to build kernel-based methods that mimic deep neural networks; we only review here the ones that are most related to our approach.

Arc-cosine kernels. Kernels for building deep large-margin classifiers have been introduced in [10]. The multilayer arc-cosine kernel is built by successive kernel compositions, and each layer relies on an integral representation. Similarly, our kernels rely on an integral representation and enjoy a multilayer construction. However, in contrast to arc-cosine kernels: (i) we build our sequence of kernels by convolutions, using local information over spatial neighborhoods (as opposed to compositions, using global information); (ii) we propose a new training procedure for learning a compact representation of the kernel in a data-dependent manner.

Multilayer derived kernels. Kernels with invariance properties for visual recognition have been proposed in [7]. Such kernels are built with a parameterized "neural response" function, which consists of computing the maximal response of a base kernel over a local neighborhood. Multiple layers are then built by iteratively renormalizing the response kernels and pooling using neural response functions. Learning is performed by plugging the obtained kernel in an SVM. In contrast to [7], we propagate information up, from lower to upper layers, by using sequences of convolutions. Furthermore, we propose a simple and effective data-dependent way to learn a compact representation of our kernels and show that we obtain near state-of-the-art performance on several benchmarks.

Hierarchical kernel descriptors. The kernels proposed in [2, 3] produce multilayer image representations for visual recognition tasks. We discuss these kernels in detail in the next section: our paper generalizes them and establishes a strong link with convolutional neural networks.

2 Convolutional Multilayer Kernels

The convolutional multilayer kernel is a generalization of the hierarchical kernel descriptors introduced in computer vision [2, 3]. The kernel produces a sequence of image representations that are built on top of each other in a multilayer fashion. Each layer can be interpreted as a non-linear transformation of the previous one with additional spatial invariance. We call these layers image feature maps¹, and formally define them as follows:

Definition 1. An image feature map ϕ is a function ϕ : Ω → H, where Ω is a (usually discrete) subset of [0, 1]^d representing normalized coordinates in the image and H is a Hilbert space.

For all practical examples in this paper, Ω is a two-dimensional grid and corresponds to different locations in a two-dimensional image. In other words, Ω is a set of pixel coordinates. Given z in Ω, the point ϕ(z) represents some characteristics of the image at location z, or in a neighborhood of z. For instance, a color image of size m × n with three channels, red, green, and blue, may be represented by an initial feature map ϕ_0 : Ω_0 → H_0, where Ω_0 is an m × n regular grid, H_0 is the Euclidean space R³, and ϕ_0 provides the color pixel values.
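To make Definition 1 concrete, the short sketch below builds the initial feature map ϕ_0 of an RGB image as a pair of arrays: the grid Ω_0 of normalized coordinates in [0, 1]² and the R³-valued colors ϕ_0(z). This is an illustration of ours, not code from the paper, and the function name is hypothetical.

```python
import numpy as np

def initial_color_feature_map(image):
    """Sketch of the initial feature map phi_0 of Definition 1 for a color image.

    `image` is an (m, n, 3) RGB array. Omega_0 is the m x n grid of normalized
    coordinates in [0, 1]^2, and phi_0(z) in H_0 = R^3 is the color at pixel z.
    """
    m, n, _ = image.shape
    ys, xs = np.meshgrid(np.arange(m), np.arange(n), indexing="ij")
    coords = np.stack([ys / max(m - 1, 1), xs / max(n - 1, 1)], axis=-1)  # Omega_0
    values = image.reshape(-1, 3).astype(np.float64)                      # phi_0(z)
    return coords.reshape(-1, 2), values

# usage: coords, colors = initial_color_feature_map(np.random.rand(32, 32, 3))
```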
With the multilayer scheme, non-trivial feature maps will be obtained subsequently, which will encode more complex image characteristics. With this terminology in hand, we now introduce the convolutional kernel, first for a single layer.

Definition 2 (Convolutional Kernel with Single Layer). Let us consider two images represented by two image feature maps, respectively ϕ and ϕ′ : Ω → H, where Ω is a set of pixel locations and H is a Hilbert space. The one-layer convolutional kernel between ϕ and ϕ′ is defined as

$$K(\varphi,\varphi') := \sum_{z,z' \in \Omega} \|\varphi(z)\|_{\mathcal{H}}\, \|\varphi'(z')\|_{\mathcal{H}}\, e^{-\frac{1}{2\beta^2}\|z-z'\|_2^2}\, e^{-\frac{1}{2\sigma^2}\|\tilde\varphi(z)-\tilde\varphi'(z')\|_{\mathcal{H}}^2}, \qquad (1)$$

where β and σ are smoothing parameters of Gaussian kernels, and ϕ̃(z) := (1/‖ϕ(z)‖_H) ϕ(z) if ϕ(z) ≠ 0 and ϕ̃(z) = 0 otherwise. Similarly, ϕ̃′(z′) is a normalized version of ϕ′(z′).²

¹In the kernel literature, "feature map" denotes the mapping between data points and their representation in a reproducing kernel Hilbert space (RKHS) [26]. Here, feature maps refer to spatial maps representing local image characteristics at every location, as usual in the neural network literature [22].

It is easy to show that the kernel K is positive definite (see Appendix A). It consists of a sum of pairwise comparisons between the image features ϕ(z) and ϕ′(z′) computed at all spatial locations z and z′ in Ω. To be significant in the sum, a comparison needs the corresponding z and z′ to be close in Ω, and the normalized features ϕ̃(z) and ϕ̃′(z′) to be close in the feature space H. The parameters β and σ respectively control these two notions of "closeness". Indeed, when β is large, the kernel K is invariant to the positions z and z′, but when β is small, only features placed at the same location z = z′ are compared to each other. Therefore, the role of β is to control how much the kernel is locally shift-invariant. Next, we will show how to go beyond one single layer, but before that, we present concrete examples of simple input feature maps ϕ_0 : Ω_0 → H_0.

Gradient map. Assume that H_0 = R² and that ϕ_0(z) provides the two-dimensional gradient of the image at pixel z, which is often computed with first-order differences along each dimension. Then, the quantity ‖ϕ_0(z)‖_{H_0} is the gradient intensity, and ϕ̃_0(z) is its orientation, which can be characterized by a particular angle, that is, there exists θ in [0; 2π] such that ϕ̃_0(z) = [cos(θ), sin(θ)]. The resulting kernel K is exactly the kernel descriptor introduced in [2, 3] for natural image patches.

Patch map. In that setting, ϕ_0 associates to a location z an image patch of size m × m centered at z. Then, the space H_0 is simply R^{m×m}, and ϕ_0(z) is a contrast-normalized version of the patch, which is a useful transformation for visual recognition according to classical findings in computer vision [19]. When the image is encoded with three color channels, patches are of size m × m × 3.
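As a concrete illustration of Definition 2 (a sketch of ours, not the paper's code), the following naive evaluation of (1) compares every pair of locations (z, z′), e.g., with the gradient map or patch map as input. Its O(|Ω|²) cost is precisely what the approximation of Section 3 is designed to avoid.

```python
import numpy as np

def single_layer_conv_kernel(coords, feats, coords2, feats2, beta, sigma):
    """Naive evaluation of the one-layer convolutional kernel of Eq. (1).

    coords, coords2: (N, 2) and (M, 2) arrays of positions z in Omega.
    feats, feats2:   (N, d) and (M, d) arrays holding phi(z) and phi'(z').
    Illustrative sketch only: it sums over all pairs (z, z').
    """
    feats, feats2 = np.asarray(feats, float), np.asarray(feats2, float)
    coords, coords2 = np.asarray(coords, float), np.asarray(coords2, float)
    norms1 = np.linalg.norm(feats, axis=1)    # ||phi(z)||_H
    norms2 = np.linalg.norm(feats2, axis=1)   # ||phi'(z')||_H
    # normalized features, with the zero convention of Definition 2
    f1 = np.divide(feats, norms1[:, None], out=np.zeros_like(feats), where=norms1[:, None] > 0)
    f2 = np.divide(feats2, norms2[:, None], out=np.zeros_like(feats2), where=norms2[:, None] > 0)
    # pairwise squared distances between positions and between normalized features
    d_pos = ((coords[:, None, :] - coords2[None, :, :]) ** 2).sum(-1)
    d_feat = ((f1[:, None, :] - f2[None, :, :]) ** 2).sum(-1)
    weights = norms1[:, None] * norms2[None, :]
    return float(np.sum(weights * np.exp(-d_pos / (2 * beta**2)) * np.exp(-d_feat / (2 * sigma**2))))
```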
We now define the multilayer convolutional kernel, generalizing some ideas of [2].

Definition 3 (Multilayer Convolutional Kernel). Let us consider a set Ω_{k−1} ⊆ [0, 1]^d and a Hilbert space H_{k−1}. We build a new set Ω_k and a new Hilbert space H_k as follows:
(i) choose a patch shape P_k defined as a bounded symmetric subset of [−1, 1]^d, and a set of coordinates Ω_k such that for all locations z_k in Ω_k, the patch {z_k} + P_k is a subset of Ω_{k−1};³ in other words, each coordinate z_k in Ω_k corresponds to a valid patch in Ω_{k−1} centered at z_k.
(ii) define the convolutional kernel K_k on the "patch" feature maps P_k → H_{k−1}, by replacing in (1): Ω by P_k, H by H_{k−1}, and σ, β by appropriate smoothing parameters σ_k, β_k. We denote by H_k the Hilbert space for which the positive definite kernel K_k is reproducing.

An image represented by a feature map ϕ_{k−1} : Ω_{k−1} → H_{k−1} at layer k−1 is now encoded in the k-th layer as ϕ_k : Ω_k → H_k, where for all z_k in Ω_k, ϕ_k(z_k) is the representation in H_k of the patch feature map z ↦ ϕ_{k−1}(z_k + z) for z in P_k. Concretely, the kernel K_k between two patches of ϕ_{k−1} and ϕ′_{k−1} at respective locations z_k and z′_k is

$$\sum_{z,z' \in \mathcal{P}_k} \|\varphi_{k-1}(z_k+z)\|\, \|\varphi'_{k-1}(z'_k+z')\|\, e^{-\frac{1}{2\beta_k^2}\|z-z'\|_2^2}\, e^{-\frac{1}{2\sigma_k^2}\|\tilde\varphi_{k-1}(z_k+z)-\tilde\varphi'_{k-1}(z'_k+z')\|^2}, \qquad (2)$$

where ‖·‖ is the Hilbertian norm of H_{k−1}. In Figure 1(a), we illustrate the interactions between the sets of coordinates Ω_k, patches P_k, and feature spaces H_k across layers. For two-dimensional grids, a typical patch shape is a square, for example P := {−1/n, 0, 1/n} × {−1/n, 0, 1/n} for a 3 × 3 patch in an image of size n × n. Information encoded in the k-th layer differs from the (k−1)-th one in two aspects: first, each point ϕ_k(z_k) in layer k contains information about several points from the (k−1)-th layer and can possibly represent larger patterns; second, the new feature map is more locally shift-invariant than the previous one due to the term involving the parameter β_k in (2).

The multilayer convolutional kernel slightly differs from the hierarchical kernel descriptors of [2] but exploits similar ideas. Bo et al. [2] indeed define several ad hoc kernels for representing local information in images, such as gradient, color, or shape. These kernels are close to the one defined in (1) but with a few variations. Some of them do not use normalized features ϕ̃(z), and these kernels use different weighting strategies for the summands of (1) that are specialized to the image modality, e.g., color or gradient, whereas we use the same weight ‖ϕ(z)‖_H ‖ϕ′(z′)‖_H for all kernels. The generic formulation (1) that we propose may be useful per se, but our main contribution comes in the next section, where we use the kernel as a new tool for learning convolutional neural networks.

²When Ω is not discrete, the sum Σ in (1) should be replaced by the Lebesgue integral ∫ in the paper.
³For two sets A and B, the Minkowski sum A + B is defined as {a + b : a ∈ A, b ∈ B}.

[Figure 1: Left (a): hierarchy of image feature maps, showing the sets Ω_0, Ω_1 and the points ϕ_0(z_0) ∈ H_0, ϕ_1(z_1) ∈ H_1 across the successive layers of the multilayer convolutional kernel. Right (b): zoom between layers k−1 and k of the CKN: a patch ψ_{k−1}(z_{k−1}) is extracted from ξ_{k−1} on {z_{k−1}} + P′_{k−1}, followed by convolution with p_k filters and a non-linearity giving ζ_k, then Gaussian filtering and downsampling (pooling) giving the map on Ω′_k.]

3 Training Invariant Convolutional Kernel Networks

Generic schemes have been proposed for approximating a non-linear kernel with a linear one, such as the Nyström method and its variants [5, 31], or random sampling techniques in the Fourier domain for shift-invariant kernels [24]. In the context of convolutional multilayer kernels, such an approximation is critical because computing the full kernel matrix on a database of images is computationally infeasible, even for a moderate number of images (≈ 10 000) and a moderate number of layers. For this reason, Bo et al. [2] use the Nyström method for their hierarchical kernel descriptors.
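To give a rough sense of this cost (a back-of-the-envelope estimate of ours, not a figure from the paper), suppose each image feature map has |Ω| ≈ 10⁴ positions and the database contains N = 10 000 images. Evaluating (1) once costs |Ω|² Gaussian comparisons, and a kernel machine needs the Gram matrix over all image pairs:

$$\underbrace{\binom{N}{2}}_{\text{image pairs}} \times \underbrace{|\Omega|^2}_{\text{location pairs }(z,z')} \;\approx\; 5\times 10^{7} \times 10^{8} \;=\; 5\times 10^{15} \ \text{Gaussian evaluations per layer,}$$

which is far beyond what is practical and motivates the finite-dimensional approximation developed next.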
In this section, we show that when the coordinate sets Ω_k are two-dimensional regular grids, a natural approximation for the multilayer convolutional kernel consists of a sequence of spatial convolutions with learned filters, pointwise non-linearities, and pooling operations, as illustrated in Figure 1(b). More precisely, our scheme approximates the kernel map of K defined in (1) at layer k by finite-dimensional spatial maps ξ_k : Ω′_k → R^{p_k}, where Ω′_k is a set of coordinates related to Ω_k, and p_k is a positive integer controlling the quality of the approximation. Consider indeed two images represented at layer k by image feature maps ϕ_k and ϕ′_k, respectively. Then,

(A) the corresponding maps ξ_k and ξ′_k are learned such that K(ϕ_{k−1}, ϕ′_{k−1}) ≈ ⟨ξ_k, ξ′_k⟩, where ⟨·, ·⟩ is the Euclidean inner-product acting as if ξ_k and ξ′_k were vectors in R^{|Ω′_k| p_k};

(B) the set Ω′_k is linked to Ω_k by the relation Ω′_k = Ω_k + P′_k, where P′_k is a patch shape, and the quantities ϕ_k(z_k) in H_k admit finite-dimensional approximations ψ_k(z_k) in R^{|P′_k| p_k}; as illustrated in Figure 1(b), ψ_k(z_k) is a patch from ξ_k centered at location z_k with shape P′_k;

(C) an activation map ζ_k : Ω_{k−1} → R^{p_k} is computed from ξ_{k−1} by convolution with p_k filters followed by a non-linearity. The subsequent map ξ_k is obtained from ζ_k by a pooling operation.

We call this approximation scheme a convolutional kernel network (CKN). In comparison to CNNs, our approach enjoys similar benefits, such as efficient prediction at test time, and involves the same set of hyper-parameters: number of layers, numbers of filters p_k at layer k, shape P′_k of the filters, sizes of the feature maps. The other parameters β_k, σ_k can be automatically chosen, as discussed later. Training a CKN can be argued to be as simple as training a CNN in an unsupervised manner [25] since we will show that the main difference is in the cost function that is optimized.

3.1 Fast Approximation of the Gaussian Kernel

A key component of our formulation is the Gaussian kernel. We start by approximating it by a linear operation with learned filters followed by a pointwise non-linearity. Our starting point is the next lemma, which can be obtained after a simple calculation.

Lemma 1 (Linear expansion of the Gaussian Kernel). For all x and x′ in R^m, and σ > 0,

$$e^{-\frac{1}{2\sigma^2}\|x-x'\|_2^2} = \left(\frac{2}{\pi\sigma^2}\right)^{m/2} \int_{w \in \mathbb{R}^m} e^{-\frac{1}{\sigma^2}\|x-w\|_2^2}\, e^{-\frac{1}{\sigma^2}\|x'-w\|_2^2}\, dw. \qquad (3)$$

The lemma gives us a mapping of any x in R^m to the function w ↦ C e^{−(1/σ²)‖x−w‖²₂} in L²(R^m), where the kernel is linear, and C² is the constant in front of the integral. To obtain a finite-dimensional representation, we need to approximate the integral with a weighted finite sum, which is a classical problem arising in statistics (see [29] and Chapter 8 of [6]). Then, we consider two different cases.

Small dimension, m ≤ 2. When the data lives in a compact set of R^m, the integral in (3) can be approximated by uniform sampling over a large enough set. We choose such a strategy for two types of kernels from Eq. (1): (i) the spatial kernels e^{−(1/2β²)‖z−z′‖²₂}; (ii) the terms e^{−(1/2σ²)‖ϕ̃(z)−ϕ̃′(z′)‖²_H} when ϕ is the gradient map presented in Section 2. In the latter case, H = R² and ϕ̃(z) is the gradient orientation. We typically sample a few orientations as explained in Section 4; a small numerical check of this uniform-sampling strategy is sketched below.
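The following sketch (ours; the function name and sampling range are illustrative choices) discretizes the integral in (3) with uniformly spaced points in dimension m = 1 and compares the result with the exact Gaussian kernel value:

```python
import numpy as np

def gaussian_kernel_by_uniform_sampling(x, y, sigma, num_samples=200, radius=5.0):
    """Numerical check of Lemma 1 in dimension m = 1 (illustrative, ours).

    Approximates exp(-|x - y|^2 / (2 sigma^2)) by discretizing the integral
    in (3) with uniformly spaced points w, as suggested for small m.
    """
    w = np.linspace(-radius, radius, num_samples)
    dw = w[1] - w[0]
    constant = (2.0 / (np.pi * sigma**2)) ** 0.5      # (2 / (pi sigma^2))^(m/2) with m = 1
    integrand = np.exp(-((x - w) ** 2) / sigma**2) * np.exp(-((y - w) ** 2) / sigma**2)
    return constant * integrand.sum() * dw

x, y, sigma = 0.3, -0.2, 0.8
approx = gaussian_kernel_by_uniform_sampling(x, y, sigma)
exact = np.exp(-((x - y) ** 2) / (2 * sigma**2))
print(approx, exact)   # the two values should match closely
```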
Higher dimensions. To prevent the curse of dimensionality, we learn to approximate the kernel on training data, which is intrinsically low-dimensional. We optimize importance weights η = [η_l]^p_{l=1} in R^p_+ and sampling points W = [w_l]^p_{l=1} in R^{m×p} on n training pairs (x_i, y_i)_{i=1,...,n} in R^m × R^m:

$$\min_{\eta \in \mathbb{R}^p_+,\; W \in \mathbb{R}^{m \times p}} \; \sum_{i=1}^{n} \left[ e^{-\frac{1}{2\sigma^2}\|x_i - y_i\|_2^2} - \sum_{l=1}^{p} \eta_l\, e^{-\frac{1}{\sigma^2}\|x_i - w_l\|_2^2}\, e^{-\frac{1}{\sigma^2}\|y_i - w_l\|_2^2} \right]^2. \qquad (4)$$

Interestingly, we may already draw some links with neural networks. When applied to unit-norm vectors x_i and y_i, problem (4) produces sampling points w_l whose norm is close to one. After learning, a new unit-norm point x in R^m is mapped to the vector [√η_l e^{−(1/σ²)‖x−w_l‖²₂}]^p_{l=1} in R^p, which may be written as [√η_l f(w_l^⊤ x)]^p_{l=1}, assuming that the norm of w_l is always one, where f is the function u ↦ e^{(2/σ²)(u−1)} for u = w_l^⊤ x in [−1, 1]. Therefore, the finite-dimensional representation of x only involves a linear operation followed by a non-linearity, as in typical neural networks. In Figure 2, we show that the shape of f resembles the rectified linear unit function [30].

[Figure 2: In dotted red, the rectified linear unit function u ↦ max(u, 0); in blue, the non-linear functions f(u) = e^{(2/σ²)(u−1)} of our network for typical values of σ used in our experiments.]

3.2 Approximating the Multilayer Convolutional Kernel

We have now all the tools in hand to build our convolutional kernel network. We start by making assumptions on the input data, and then present the learning scheme and its approximation principles.

The zeroth layer. We assume that the input data is a finite-dimensional map ξ_0 : Ω′_0 → R^{p_0}, and that ϕ_0 : Ω_0 → H_0 extracts patches from ξ_0. Formally, there exists a patch shape P′_0 such that Ω′_0 = Ω_0 + P′_0, H_0 = R^{p_0 |P′_0|}, and for all z_0 in Ω_0, ϕ_0(z_0) is a patch of ξ_0 centered at z_0. Then, property (B) described at the beginning of Section 3 is satisfied for k = 0 by choosing ψ_0 = ϕ_0. The examples of input feature maps given earlier satisfy this finite-dimensional assumption: for the gradient map, ξ_0 is the gradient of the image along each direction, with p_0 = 2, P′_0 = {0} is a 1 × 1 patch, Ω_0 = Ω′_0, and ϕ_0 = ξ_0; for the patch map, ξ_0 is the input image, say with p_0 = 3 for RGB data.

The convolutional kernel network. The zeroth layer being characterized, we present in Algorithms 1 and 2 the subsequent layers and how to learn their parameters in a feedforward manner. It is interesting to note that the input parameters of the algorithm are exactly the same as for a CNN, that is, number of layers and filters, sizes of the patches and feature maps (obtained here via the subsampling factor). Ultimately, CNNs and CKNs only differ in the cost function that is optimized for learning the filters and in the choice of non-linearities. As we show next, there exists a link between the parameters of a CKN and those of a convolutional multilayer kernel.

Algorithm 1 Convolutional kernel network - learning the parameters of the k-th layer.
input ξ¹_{k−1}, ξ²_{k−1}, . . . : Ω′_{k−1} → R^{p_{k−1}} (sequence of (k−1)-th maps obtained from training images); P′_{k−1} (patch shape); p_k (number of filters); n (number of training pairs);
1: extract at random n pairs (x_i, y_i) of patches with shape P′_{k−1} from the maps ξ¹_{k−1}, ξ²_{k−1}, . . .;
2: if not provided by the user, set σ_k to the 0.1 quantile of the data (‖x_i − y_i‖₂)^n_{i=1};
3: unsupervised learning: optimize (4) to obtain the filters W_k in R^{|P′_{k−1}| p_{k−1} × p_k} and η_k in R^{p_k};
output W_k, η_k, and σ_k (smoothing parameter);
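The objective in (4), used in step 3 of Algorithm 1, can be written compactly as below (a sketch of ours with hypothetical names, not the authors' Matlab code). It only evaluates the cost, which can then be handed to any solver handling the non-negativity constraint on η, such as the L-BFGS-B procedure discussed under "Optimization" below.

```python
import numpy as np

def ckn_layer_objective(eta, W, X, Y, sigma):
    """Objective of problem (4), written for a generic solver (sketch).

    X, Y: (n, m) arrays of training patch pairs (x_i, y_i).
    W:    (m, p) sampling points / filters; eta: (p,) non-negative weights.
    Returns the sum of squared residuals between the exact Gaussian kernel
    value and its finite expansion with p terms.
    """
    target = np.exp(-((X - Y) ** 2).sum(axis=1) / (2 * sigma**2))              # (n,)
    gx = np.exp(-((X[:, None, :] - W.T[None, :, :]) ** 2).sum(-1) / sigma**2)  # (n, p)
    gy = np.exp(-((Y[:, None, :] - W.T[None, :, :]) ** 2).sum(-1) / sigma**2)  # (n, p)
    approx = (gx * gy) @ eta                                                   # (n,)
    return float(((target - approx) ** 2).sum())
```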
Algorithm 2 Convolutional kernel network - computing the k-th map from the (k−1)-th one.
input ξ_{k−1} : Ω′_{k−1} → R^{p_{k−1}} (input map); P′_{k−1} (patch shape); γ_k (subsampling factor); p_k (number of filters); σ_k (smoothing parameter); W_k = [w_{kl}]^{p_k}_{l=1} and η_k = [η_{kl}]^{p_k}_{l=1} (layer parameters);
1: convolution and non-linearity: define the activation map ζ_k : Ω_{k−1} → R^{p_k} as

$$\zeta_k : z \mapsto \|\psi_{k-1}(z)\|_2 \left[ \sqrt{\eta_{kl}}\, e^{-\frac{1}{\sigma_k^2}\|\tilde\psi_{k-1}(z) - w_{kl}\|_2^2} \right]_{l=1}^{p_k}, \qquad (5)$$

where ψ_{k−1}(z) is a vector representing a patch from ξ_{k−1} centered at z with shape P′_{k−1}, and the vector ψ̃_{k−1}(z) is an ℓ2-normalized version of ψ_{k−1}(z). This operation can be interpreted as a spatial convolution of the map ξ_{k−1} with the filters w_{kl} followed by pointwise non-linearities;
2: set β_k to be γ_k times the spacing between two pixels in Ω_{k−1};
3: feature pooling: Ω′_k is obtained by subsampling Ω_{k−1} by a factor γ_k and we define a new map ξ_k : Ω′_k → R^{p_k} obtained from ζ_k by linear pooling with Gaussian weights:

$$\xi_k(z) := \sqrt{\tfrac{2}{\pi}} \sum_{u \in \Omega_{k-1}} e^{-\frac{1}{\beta_k^2}\|u - z\|_2^2}\, \zeta_k(u). \qquad (6)$$

output ξ_k : Ω′_k → R^{p_k} (new map);

Approximation principles. We proceed recursively to show that the kernel approximation property (A) is satisfied; we assume that (B) holds at layer k−1, and then we show that (A) and (B) also hold at layer k. This is sufficient for our purpose since we have previously assumed (B) for the zeroth layer. Given two image feature maps ϕ_{k−1} and ϕ′_{k−1}, we start by approximating K(ϕ_{k−1}, ϕ′_{k−1}) by replacing ϕ_{k−1}(z) and ϕ′_{k−1}(z′) by their finite-dimensional approximations provided by (B):

$$K(\varphi_{k-1}, \varphi'_{k-1}) \approx \sum_{z,z' \in \Omega_{k-1}} \|\psi_{k-1}(z)\|_2\, \|\psi'_{k-1}(z')\|_2\, e^{-\frac{1}{2\beta_k^2}\|z-z'\|_2^2}\, e^{-\frac{1}{2\sigma_k^2}\|\tilde\psi_{k-1}(z) - \tilde\psi'_{k-1}(z')\|_2^2}. \qquad (7)$$

Then, we use the finite-dimensional approximation of the Gaussian kernel involving σ_k and obtain

$$K(\varphi_{k-1}, \varphi'_{k-1}) \approx \sum_{z,z' \in \Omega_{k-1}} \langle \zeta_k(z), \zeta'_k(z') \rangle\, e^{-\frac{1}{2\beta_k^2}\|z-z'\|_2^2}, \qquad (8)$$

where ζ_k is defined in (5) and ζ′_k is defined similarly by replacing ψ by ψ′. Finally, we approximate the remaining Gaussian kernel by uniform sampling on Ω′_k, following Section 3.1. After exchanging sums and grouping appropriate terms together, we obtain the new approximation

$$K(\varphi_{k-1}, \varphi'_{k-1}) \approx \frac{2}{\pi} \sum_{u \in \Omega'_k} \Big\langle \sum_{z \in \Omega_{k-1}} e^{-\frac{1}{\beta_k^2}\|z-u\|_2^2}\, \zeta_k(z),\; \sum_{z' \in \Omega_{k-1}} e^{-\frac{1}{\beta_k^2}\|z'-u\|_2^2}\, \zeta'_k(z') \Big\rangle, \qquad (9)$$

where the constant 2/π comes from the multiplication of the constant 2/(πβ_k²) from (3) and the weight β_k² of uniform sampling corresponding to the square of the distance between two pixels of Ω′_k.⁴ As a result, the right-hand side is exactly ⟨ξ_k, ξ′_k⟩, where ξ_k is defined in (6), giving us property (A). It remains to show that property (B) also holds, specifically that the quantity (2) can be approximated by the Euclidean inner-product ⟨ψ_k(z_k), ψ′_k(z′_k)⟩ with the patches ψ_k(z_k) and ψ′_k(z′_k) of shape P′_k; we assume for that purpose that P′_k is a subsampled version of the patch shape P_k by a factor γ_k.

We remark that the kernel (2) is the same as (1) applied to layer k−1 by replacing Ω_{k−1} by {z_k} + P_k. By doing the same substitution in (9), we immediately obtain an approximation of (2). Then, all Gaussian terms are negligible for all u and z that are far from each other, say when ‖u − z‖₂ ≥ 2β_k. Thus, we may replace the sums Σ_{u ∈ Ω′_k} Σ_{z,z′ ∈ {z_k}+P_k} by Σ_{u ∈ {z_k}+P′_k} Σ_{z,z′ ∈ Ω_{k−1}}, which has the same set of non-negligible terms. This yields exactly the approximation ⟨ψ_k(z_k), ψ′_k(z′_k)⟩.

⁴The choice of β_k in Algorithm 2 is driven by signal processing principles. The feature pooling step can indeed be interpreted as a downsampling operation that reduces the resolution of the map from Ω_{k−1} to Ω_k by using a Gaussian anti-aliasing filter, whose role is to reduce frequencies above the Nyquist limit.
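Putting Algorithm 2 together, one layer of the feed-forward pass may be sketched as follows (our illustrative NumPy code under simplifying assumptions: a dense map on a unit-spaced grid, valid patches only, and function and variable names of our choosing):

```python
import numpy as np

def ckn_layer_forward(xi_prev, W, eta, sigma, patch_size, gamma, eps=1e-5):
    """One layer of the CKN feed-forward pass (Algorithm 2), as a rough sketch.

    xi_prev: (H, W_img, p_prev) input map xi_{k-1} on a regular grid.
    W:       (patch_size * patch_size * p_prev, p_k) filters w_{kl}.
    eta:     (p_k,) weights; sigma: smoothing parameter sigma_k;
    gamma:   subsampling factor gamma_k.
    """
    H, Wd, p_prev = xi_prev.shape
    s = patch_size
    # --- patch extraction: psi_{k-1}(z) for every valid location z ---
    patches = np.stack([
        xi_prev[i:i + s, j:j + s, :].reshape(-1)
        for i in range(H - s + 1) for j in range(Wd - s + 1)
    ])                                                    # (n_pos, s*s*p_prev)
    norms = np.linalg.norm(patches, axis=1)
    tilde = patches / np.maximum(norms, eps)[:, None]     # l2-normalized patches
    # --- convolution + non-linearity: activation map zeta_k, Eq. (5) ---
    dists = ((tilde[:, :, None] - W[None, :, :]) ** 2).sum(axis=1)   # (n_pos, p_k)
    zeta = norms[:, None] * np.sqrt(eta)[None, :] * np.exp(-dists / sigma**2)
    zeta = zeta.reshape(H - s + 1, Wd - s + 1, -1)
    # --- feature pooling: Gaussian weights, then subsampling by gamma, Eq. (6) ---
    beta = gamma  # gamma_k times the pixel spacing (taken here as 1)
    hh, ww = zeta.shape[:2]
    ys, xs = np.meshgrid(np.arange(hh), np.arange(ww), indexing="ij")
    out_rows, out_cols = np.arange(0, hh, gamma), np.arange(0, ww, gamma)
    xi_next = np.zeros((len(out_rows), len(out_cols), zeta.shape[2]))
    for a, r in enumerate(out_rows):
        for b, c in enumerate(out_cols):
            g = np.exp(-((ys - r) ** 2 + (xs - c) ** 2) / beta**2)
            xi_next[a, b] = np.sqrt(2 / np.pi) * (g[:, :, None] * zeta).sum((0, 1))
    return xi_next
```

Stacking such layers and feeding the last map ξ_k to a linear SVM roughly reproduces the overall pipeline described in Section 4.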
Optimization. Regarding problem (4), stochastic gradient descent (SGD) may be used since a potentially infinite amount of training data is available. However, we have preferred to use L-BFGS-B [9] on 300 000 pairs of randomly selected training data points, and initialize W with the K-means algorithm. L-BFGS-B is a parameter-free state-of-the-art batch method, which is not as fast as SGD but much easier to use. We always run the L-BFGS-B algorithm for 4 000 iterations, which seems to ensure convergence to a stationary point. Our goal is to demonstrate the preliminary performance of a new type of convolutional network, and we leave as future work any speed improvement.

4 Experiments

We now present experiments that were performed using Matlab and an L-BFGS-B solver [9] interfaced by Stephen Becker. Each image is represented by the last map ξ_k of the CKN, which is used in a linear SVM implemented in the software package LibLinear [16]. These representations are centered, rescaled to have unit ℓ2-norm on average, and the regularization parameter of the SVM is always selected on a validation set or by 5-fold cross-validation in the range 2^i, i = −15, . . . , 15. The patches P′_k are typically small; we tried the sizes m × m with m = 3, 4, 5 for the first layer, and m = 2, 3 for the upper ones. The number of filters p_k in our experiments is in the set {50, 100, 200, 400, 800}. The downsampling factor γ_k is always chosen to be 2 between two consecutive layers, whereas the last layer is downsampled to produce final maps ξ_k of a small size, say 5 × 5 or 4 × 4. For the gradient map ϕ_0, we approximate the Gaussian kernel e^{−(1/2σ_1²)‖ϕ̃_0(z)−ϕ̃′_0(z′)‖²_{H_0}} by uniformly sampling p_1 = 12 orientations, setting σ_1 = 2π/p_1. Finally, we also use a small offset ε to prevent numerical instabilities in the normalization steps ψ̃(z) = ψ(z)/max(‖ψ(z)‖₂, ε).

4.1 Discovering the Structure of Natural Image Patches

Unsupervised learning was first used for discovering the underlying structure of natural image patches by Olshausen and Field [23]. Without making any a priori assumption about the data except a parsimony principle, the method is able to produce small prototypes that resemble Gabor wavelets, that is, spatially localized oriented basis functions. The results were found impressive by the scientific community and their work received substantial attention. It is also known that such results can be achieved with CNNs [25]. We show in this section that this is also the case for convolutional kernel networks, even though they are not explicitly trained to reconstruct data. Following [23], we randomly select a database of 300 000 whitened natural image patches of size 12 × 12 and learn p = 256 filters W using the formulation (4). We initialize W with Gaussian random noise without performing the K-means step, in order to ensure that the output we obtain is not an artifact of the initialization. In Figure 3, we display the filters associated to the top-128 largest weights η_l. Among the 256 filters, 197 exhibit interpretable Gabor-like structures and the rest was less interpretable. To the best of our knowledge, this is the first time that the explicit kernel map of the Gaussian kernel for whitened natural image patches is shown to be related to Gabor wavelets.

[Figure 3: Filters obtained by the first layer of the convolutional kernel network on natural images.]

4.2 Digit Classification on MNIST

The MNIST dataset [22] consists of 60 000 images of handwritten digits for training and 10 000 for testing. We use two types of initial maps in our networks: the patch map, denoted by CKN-PM, and the gradient map, denoted by CKN-GM. We follow the evaluation methodology of [25] for comparison when varying the training set size.
Table 1: Test error in % for various approaches on the MNIST dataset without data augmentation. The numbers in parentheses represent the sizes p_1 and p_2 of the feature maps at each layer.

| Tr. size | CNN [25] | Scat-1 [8] | Scat-2 [8] | CKN-GM1 (12/50) | CKN-GM2 (12/400) | CKN-PM1 (200) | CKN-PM2 (50/200) | [32] | [18] | [19] |
|---|---|---|---|---|---|---|---|---|---|---|
| 300 | 7.18 | 4.7 | 5.6 | 4.39 | 4.24 | 5.98 | 4.15 | NA | NA | NA |
| 1K | 3.21 | 2.3 | 2.6 | 2.60 | 2.05 | 3.23 | 2.76 | NA | NA | NA |
| 2K | 2.53 | 1.3 | 1.8 | 1.85 | 1.51 | 1.97 | 2.28 | NA | NA | NA |
| 5K | 1.52 | 1.03 | 1.4 | 1.41 | 1.21 | 1.41 | 1.56 | NA | NA | NA |
| 10K | 0.85 | 0.88 | 1 | 1.17 | 0.88 | 1.18 | 1.10 | NA | NA | NA |
| 20K | 0.76 | 0.79 | 0.58 | 0.89 | 0.60 | 0.83 | 0.77 | NA | NA | NA |
| 40K | 0.65 | 0.74 | 0.53 | 0.68 | 0.51 | 0.64 | 0.58 | NA | NA | NA |
| 60K | 0.53 | 0.70 | 0.4 | 0.58 | 0.39 | 0.63 | 0.53 | 0.47 | 0.45 | 0.53 |

We select the regularization parameter of the SVM by 5-fold cross-validation when the training size is smaller than 20 000; otherwise, we keep 10 000 examples from the training set for validation. We report in Table 1 the results obtained for four simple architectures. CKN-GM1 is the simplest one: its second layer uses 3 × 3 patches and only p_2 = 50 filters, resulting in a network with 5 400 parameters. Yet, it achieves an outstanding performance of 0.58% error on the full dataset. The best performing, CKN-GM2, is similar to CKN-GM1 but uses p_2 = 400 filters. When working with raw patches, two layers (CKN-PM2) give better results than one (CKN-PM1). More details about the network architectures are provided in the supplementary material. In general, our method achieves a state-of-the-art accuracy for this task since lower error rates have only been reported by using data augmentation [11].

4.3 Visual Recognition on CIFAR-10 and STL-10

We now move to the more challenging datasets CIFAR-10 [20] and STL-10 [13]. We select the best architectures on a validation set of 10 000 examples from the training set for CIFAR-10, and by 5-fold cross-validation on STL-10. We report in Table 2 results for CKN-GM, defined in the previous section, without exploiting color information, and for CKN-PM when working on raw RGB patches whose mean color is subtracted. The best selected models always have two layers, with 800 filters for the top layer. Since CKN-PM and CKN-GM exploit different information, we also report a combination of these two models, CKN-CO, obtained by concatenating normalized image representations together. The standard deviation for STL-10 was always below 0.7%. Our approach appears to be competitive with the state of the art, especially on STL-10 where only one method does better than ours, despite the fact that our models only use 2 layers and require learning few parameters. Note that better results than those reported in Table 2 have been obtained in the literature by using either data augmentation (around 90% on CIFAR-10 for [18, 30]) or external data (around 70% on STL-10 for [28]). We are planning to investigate similar data manipulations in the future.

Table 2: Classification accuracy in % on CIFAR-10 and STL-10 without data augmentation.

| Method | [12] | [27] | [18] | [13] | [4] | [17] | [32] | CKN-GM | CKN-PM | CKN-CO |
|---|---|---|---|---|---|---|---|---|---|---|
| CIFAR-10 | 82.0 | 82.2 | 88.32 | 79.6 | NA | 83.96 | 84.87 | 74.84 | 78.30 | 82.18 |
| STL-10 | 60.1 | 58.7 | NA | 51.5 | 64.5 | 62.3 | NA | 60.04 | 60.25 | 62.32 |

5 Conclusion

In this paper, we have proposed a new methodology for combining kernels and convolutional neural networks. We show that mixing the ideas of these two concepts is fruitful, since we achieve near state-of-the-art performance on several datasets such as MNIST, CIFAR-10, and STL-10, with simple architectures and no data augmentation. Some challenges regarding our work are left open for the future.
The first one is the use of supervision to better approximate the kernel for the prediction task. The second consists of leveraging the kernel interpretation of our convolutional neural networks to better understand the theoretical properties of the feature spaces that these networks produce.

Acknowledgments

This work was partially supported by grants from ANR (project MACARON ANR-14-CE23-000301), the MSR-Inria joint centre, the European Research Council (project ALLEGRO), the CNRS-Mastodons program (project GARGANTUA), and the LabEx PERSYVAL-Lab (ANR-11-LABX-0025).

References

[1] Y. Bengio. Learning deep architectures for AI. Found. Trends Mach. Learn., 2009.
[2] L. Bo, K. Lai, X. Ren, and D. Fox. Object recognition with hierarchical kernel descriptors. In Proc. CVPR, 2011.
[3] L. Bo, X. Ren, and D. Fox. Kernel descriptors for visual recognition. In Adv. NIPS, 2010.
[4] L. Bo, X. Ren, and D. Fox. Unsupervised feature learning for RGB-D based object recognition. In Experimental Robotics, 2013.
[5] L. Bo and C. Sminchisescu. Efficient match kernel between sets of features for visual recognition. In Adv. NIPS, 2009.
[6] L. Bottou, O. Chapelle, D. DeCoste, and J. Weston. Large-Scale Kernel Machines (Neural Information Processing). The MIT Press, 2007.
[7] J. V. Bouvrie, L. Rosasco, and T. Poggio. On invariance in hierarchical models. In Adv. NIPS, 2009.
[8] J. Bruna and S. Mallat. Invariant scattering convolution networks. IEEE T. Pattern Anal., 35(8):1872–1886, 2013.
[9] R. H. Byrd, P. Lu, J. Nocedal, and C. Zhu. A limited memory algorithm for bound constrained optimization. SIAM J. Sci. Comput., 16(5):1190–1208, 1995.
[10] Y. Cho and L. K. Saul. Large-margin classification in infinite neural networks. Neural Comput., 22(10), 2010.
[11] D. Ciresan, U. Meier, and J. Schmidhuber. Multi-column deep neural networks for image classification. In Proc. CVPR, 2012.
[12] A. Coates and A. Y. Ng. Selecting receptive fields in deep networks. In Adv. NIPS, 2011.
[13] A. Coates, A. Y. Ng, and H. Lee. An analysis of single-layer networks in unsupervised feature learning. In Proc. AISTATS, 2011.
[14] D. Decoste and B. Schölkopf. Training invariant support vector machines. Mach. Learn., 46(1-3):161–190, 2002.
[15] J. Donahue, Y. Jia, O. Vinyals, J. Hoffman, N. Zhang, E. Tzeng, and T. Darrell. DeCAF: A deep convolutional activation feature for generic visual recognition. preprint arXiv:1310.1531, 2013.
[16] R.-E. Fan, K.-W. Chang, C.-J. Hsieh, X.-R. Wang, and C.-J. Lin. LIBLINEAR: A library for large linear classification. J. Mach. Learn. Res., 9:1871–1874, 2008.
[17] R. Gens and P. Domingos. Discriminative learning of sum-product networks. In Adv. NIPS, 2012.
[18] I. J. Goodfellow, D. Warde-Farley, M. Mirza, A. Courville, and Y. Bengio. Maxout networks. In Proc. ICML, 2013.
[19] K. Jarrett, K. Kavukcuoglu, M. Ranzato, and Y. LeCun. What is the best multi-stage architecture for object recognition? In Proc. ICCV, 2009.
[20] A. Krizhevsky and G. Hinton. Learning multiple layers of features from tiny images. Tech. Rep., 2009.
[21] A. Krizhevsky, I. Sutskever, and G. E. Hinton. ImageNet classification with deep convolutional neural networks. In Adv. NIPS, 2012.
[22] Y. LeCun, L. Bottou, Y. Bengio, and P. Haffner. Gradient-based learning applied to document recognition. P. IEEE, 86(11):2278–2324, 1998.
[23] B. A. Olshausen and D. J. Field. Emergence of simple-cell receptive field properties by learning a sparse code for natural images. Nature, 381(6583):607–609, 1996.
[24] A. Rahimi and B. Recht. Random features for large-scale kernel machines. In Adv. NIPS, 2007.
[25] M. Ranzato, F.-J. Huang, Y.-L. Boureau, and Y. LeCun. Unsupervised learning of invariant feature hierarchies with applications to object recognition. In Proc. CVPR, 2007.
[26] J. Shawe-Taylor and N. Cristianini. Kernel methods for pattern analysis. Cambridge University Press, 2004.
[27] K. Sohn and H. Lee. Learning invariant representations with local transformations. In Proc. ICML, 2012.
[28] K. Swersky, J. Snoek, and R. P. Adams. Multi-task Bayesian optimization. In Adv. NIPS, 2013.
[29] G. Wahba. Spline models for observational data. SIAM, 1990.
[30] L. Wan, M. D. Zeiler, S. Zhang, Y. LeCun, and R. Fergus. Regularization of neural networks using DropConnect. In Proc. ICML, 2013.
[31] C. Williams and M. Seeger. Using the Nyström method to speed up kernel machines. In Adv. NIPS, 2001.
[32] M. D. Zeiler and R. Fergus. Stochastic pooling for regularization of deep convolutional neural networks. In Proc. ICLR, 2013.
[33] M. D. Zeiler and R. Fergus. Visualizing and understanding convolutional networks. In Proc. ECCV, 2014.