Maximum Entropy Fine-Grained Classification

Abhimanyu Dubey, Otkrist Gupta, Ramesh Raskar, Nikhil Naik
Massachusetts Institute of Technology, Cambridge, MA, USA
{dubeya, otkrist, raskar, naik}@mit.edu

Fine-Grained Visual Classification (FGVC) is an important computer vision problem that involves small diversity within the different classes, and often requires expert annotators to collect data. Utilizing this notion of small visual diversity, we revisit Maximum-Entropy learning in the context of fine-grained classification, and provide a training routine that maximizes the entropy of the output probability distribution for training convolutional neural networks on FGVC tasks. We provide a theoretical as well as empirical justification of our approach, and achieve state-of-the-art performance across a variety of classification tasks in FGVC, which can potentially be extended to any fine-tuning task. Our method is robust to different hyperparameter values, the amount of training data, and the amount of training label noise, and can hence be a valuable tool in many similar problems.

1 Introduction

For ImageNet [7] classification and similar large-scale classification tasks that span numerous diverse classes and millions of images, strongly discriminative learning by minimizing the cross-entropy from the labels improves performance for convolutional neural networks (CNNs). Fine-grained visual classification problems differ from such large-scale classification in two ways: (i) the classes are visually very similar to each other and are harder to distinguish between (see Figure 1a), and (ii) there are fewer training samples, and therefore the training dataset might not be representative of the application scenario. Consider a technique that penalizes strongly discriminative learning by preventing a CNN from memorizing specific artifacts present in the training images in order to minimize the cross-entropy loss on the training set. This is helpful in fine-grained classification: for instance, if a certain species of bird is mostly photographed against a different background compared to other species, memorizing the background will lower generalization performance while lowering the training cross-entropy error, since the CNN will associate the background with the bird itself. In this paper, we formalize this intuition and revisit the classical Maximum-Entropy regime, based on the following underlying idea: the entropy of the output probability vector produced by the CNN is a measure of the peakiness or confidence of the CNN. Learning CNN models that have a higher value of output entropy will reduce the confidence of the classifier, leading to better generalization when training with limited, fine-grained training data.
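To make this notion concrete, the sketch below (a minimal PyTorch illustration with made-up logits, not taken from the paper) shows how entropy separates a confident prediction from an indecisive one: a peaked softmax output has entropy near 0, while a uniform one attains the maximum value log(C).

```python
import torch
import torch.nn.functional as F

def prediction_entropy(logits):
    """Entropy of the softmax distribution induced by a vector of logits."""
    log_p = F.log_softmax(logits, dim=-1)
    return -(log_p.exp() * log_p).sum(dim=-1)

confident = torch.tensor([10.0, 0.0, 0.0, 0.0])  # peaked: entropy close to 0 nats
uniform = torch.tensor([1.0, 1.0, 1.0, 1.0])     # flat: entropy = log(4) ~ 1.386 nats
print(prediction_entropy(confident).item(), prediction_entropy(uniform).item())
```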
Our contributions can be listed as follows: (i) we formalize the notion of fine-grained vs. large-scale image classification based on a measure of the diversity of the features, (ii) we derive bounds on the ℓ2 norm of the classifier weights based on this diversity and the entropy of the classifier, (iii) we provide uniform convergence bounds on estimating the entropy from samples in terms of the feature diversity, (iv) we formulate a fine-tuning objective function that obtains state-of-the-art performance on the five most commonly used FGVC datasets across six widely-used CNN architectures, and (v) we analyze the effect of Maximum-Entropy training over different hyperparameter values, amounts of training data, and amounts of training label noise, demonstrating that our method is consistently robust to all of the above.

[Figure 1 panel titles: fine-grained classification samples (Stanford Dogs) with typically low visual diversity; large-scale classification samples (ImageNet LSVRC12) with very high visual diversity.]
Figure 1: (a) Samples from the CUB-200-2011 FGVC (top) and ImageNet (bottom) datasets. (b) Plot of the top 2 principal components (obtained from the ILSVRC training set on GoogLeNet pool5 features) on the ImageNet (red) and CUB-200-2011 (blue) validation sets. CUB-200-2011 data is concentrated with less diversity, as hypothesized.

2 Related Work

Maximum-Entropy Learning: The principle of Maximum-Entropy, proposed by Jaynes [16], is a classic idea in Bayesian statistics, and states that the probability distribution best representing the current state of knowledge is the one with the largest entropy, in the context of testable information (such as accuracy). This idea has been explored in different domains of science, from statistical mechanics [1] and Bayesian inference [12] to unsupervised learning [8] and reinforcement learning [29, 27]. Regularization methods that penalize minimum-entropy predictions have been explored in the context of semi-supervised learning [11], and in deterministic entropy annealing [36] for vector quantization. In machine learning, the regularization of the entropy of classifier weights has been used empirically [4, 42] and studied theoretically [37, 49]. In most treatments of the Maximum-Entropy principle in classification, emphasis has been placed on the entropy of the weights of the classifiers themselves [37]. In our formulation, we focus instead on the Maximum-Entropy principle applied to the prediction vectors. This formulation has been explored experimentally in the work of Pereyra et al. [33] for generic image classification. Our work builds on their analysis by providing a theoretical treatment of fine-grained classification problems, and justifies the application of Maximum-Entropy to target scenarios with limited diversity between classes and limited training data. Additionally, we obtain large improvements in fine-grained classification, which motivates the usage of the Maximum-Entropy training principle in the fine-tuning setting, opening up this idea to a much broader range of applied computer vision problems. We also note the related idea of label smoothing regularization [41], which tries to prevent the largest logit from becoming much larger than the rest and shows improved generalization in large-scale image classification problems.

Fine-Grained Classification: Fine-Grained Visual Classification (FGVC) has been an active area of interest in the computer vision community.
Typical fine-grained problems involve differentiating between closely related categories, such as animal and plant species or types of food. Since background context can act as a distraction in most cases of FGVC, there has been research on improving the attentional and localization capabilities of CNN-based algorithms. Bilinear pooling [25] is an instrumental method that combines pairwise local features to improve spatial invariance. This has been extended by Kernel Pooling [6], which uses higher-order interactions instead of the originally proposed dot products, and Compact Bilinear Pooling [9], which speeds up the bilinear pooling operation. Another approach to localization is the prediction of an affine transformation of the original image, as proposed by Spatial Transformer Networks [15]. Part-based Region CNNs [35] use region-wise attention to improve local features. Leveraging additional information such as pose and regions has also been explored [3, 46], along with robust image representations such as CNN filter banks [5], VLAD [17], and Fisher vectors [34]. Supplementing training data [21] and model averaging [30] have also yielded significant improvements. The central theme among current approaches is to increase the diversity of the relevant features used in classification, either by removing irrelevant information (such as background) through better localization or pooling, or by supplementing features with part and pose information or more training data. Our method focuses on the classification task after features have been obtained (and is hence compatible with existing approaches), by selecting the classifier that assumes the minimum information about the task according to the principle of Maximum-Entropy. This approach is very useful in the context of fine-grained tasks, especially when fine-tuning from ImageNet CNN models that are already over-parameterized.

3 Method

In the case of Maximum-Entropy fine-tuning, we optimize the following objective:

$$\theta^* = \arg\min_{\theta} \; \hat{\mathbb{E}}_{x \sim \mathcal{D}} \big[ D_{\mathrm{KL}}\big(\bar{y}(x) \,\|\, p(y \mid x;\theta)\big) - \gamma H\big[p(y \mid x;\theta)\big] \big] \qquad (1)$$

where θ represents the model parameters, initialized from a model pretrained on a large dataset such as ImageNet [7], and γ is a hyperparameter. The entropy can be understood as a measure of the peakiness or indecisiveness of the classifier in its prediction for the given input. For instance, if the classifier is strongly confident in its belief of a particular class k, all the mass will be concentrated at class k, giving an entropy of 0. Conversely, if the classifier is equally confused between all C classes, we obtain an entropy of log(C), the maximum value it can take. In problems such as fine-grained classification, where samples that belong to different classes can be visually very similar, it is reasonable to prevent the classifier from being too confident in its outputs (i.e., from having low entropy), since the classes themselves are so similar.

3.1 Preliminaries

Consider the multi-class classification problem over C classes. The input domain is given by $\mathcal{X} \subseteq \mathbb{R}^Z$, with an accompanying probability density $p_x(\cdot)$ defined over $\mathcal{X}$. The training data is given by N i.i.d. samples $\mathcal{D} = \{x_1, \ldots, x_N\}$ drawn from $\mathcal{X}$. Each point $x \in \mathcal{X}$ has an associated one-hot label $\bar{y}(x) = [0, \ldots, 1, \ldots, 0] \in \mathbb{R}^C$. We learn a CNN such that for each point in $\mathcal{X}$, the CNN induces a conditional probability distribution over the C classes whose mode matches the label $\bar{y}(x)$.
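A minimal PyTorch-style sketch of the objective in Equation 1 is given below, assuming one-hot labels so that the KL term reduces to the standard cross-entropy; the function name and the default value of γ are illustrative rather than taken from a released implementation.

```python
import torch.nn.functional as F

def max_ent_loss(logits, targets, gamma=1.0):
    """Cross-entropy minus gamma times the prediction entropy (Equation 1).

    logits:  (batch, C) unnormalized class scores from the CNN
    targets: (batch,)   integer class labels
    gamma:   weight of the entropy term
    """
    log_probs = F.log_softmax(logits, dim=1)
    # With one-hot labels, KL(y-bar || p) equals the cross-entropy up to a constant.
    cross_entropy = F.nll_loss(log_probs, targets)
    # H[p(.|x)] = -sum_i p(y_i|x) log p(y_i|x), averaged over the batch.
    entropy = -(log_probs.exp() * log_probs).sum(dim=1).mean()
    return cross_entropy - gamma * entropy
```

During fine-tuning, this loss would simply replace the standard cross-entropy in the training loop.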
A CNN architecture consists of a series of convolutional and subsampling layers that culminate in an activation $\Phi(\cdot)$, which is fed to a C-way classifier with weights $\mathbf{w} = \{w_1, \ldots, w_C\}$ such that:

$$p(y_i \mid x; \mathbf{w}, \Phi) = \frac{\exp\big(w_i^\top \Phi(x)\big)}{\sum_{j=1}^{C} \exp\big(w_j^\top \Phi(x)\big)} \qquad (2)$$

During training, we learn the parameters $\mathbf{w}$ and the feature extractor $\Phi(\cdot)$ (collectively referred to as θ) by minimizing the expected KL (Kullback-Leibler) divergence of the CNN conditional probability distribution from the true label vector over the training set $\mathcal{D}$:

$$\theta^* = \arg\min_{\theta} \; \hat{\mathbb{E}}_{x \sim \mathcal{D}} \big[ D_{\mathrm{KL}}\big(\bar{y}(x) \,\|\, p(y \mid x;\theta)\big) \big] \qquad (3)$$

During fine-tuning, we learn a feature map $\Phi(\cdot)$ from a large training set (such as ImageNet), discard the original classifier $\mathbf{w}$ (referred to from here on as $\mathbf{w}_S$), and learn new weights $\mathbf{w}$ on the smaller dataset (note that the number of classes, and hence the shape of $\mathbf{w}$, may also change for the new task). The entropy of the conditional probability distribution in Equation 2 is given by:

$$H\big[p(\cdot \mid x;\theta)\big] = -\sum_{i=1}^{C} p(y_i \mid x;\theta) \log p(y_i \mid x;\theta) \qquad (4)$$

To maximize the overall entropy of the classifier over a data distribution $x \sim p_x(\cdot)$, we are interested in the expected value of the entropy over the distribution:

$$\mathbb{E}_{x \sim p_x}\big[H[p(\cdot \mid x;\theta)]\big] = \int_{\mathcal{X}} H\big[p(\cdot \mid x;\theta)\big]\, p_x(x)\, dx \qquad (5)$$

Similarly, the empirical average of the conditional entropy over the training set $\mathcal{D}$ is:

$$\hat{\mathbb{E}}_{x \sim \mathcal{D}}\big[H[p(\cdot \mid x;\theta)]\big] = \frac{1}{N} \sum_{i=1}^{N} H\big[p(\cdot \mid x_i;\theta)\big] \qquad (6)$$

To have high training accuracy, we do not need to learn a model that gives zero cross-entropy loss. Instead, we only require the classifier to output a conditional probability distribution whose arg max coincides with the correct class. Next, we show that for problems with low diversity, higher validation accuracy can be obtained with a higher entropy (and higher training cross-entropy). We now formalize the notion of diversity in feature vectors over a data distribution.

3.2 Diversity and Fine-Grained Visual Classification

We assume the pretrained n-dimensional feature map $\Phi(\cdot)$ to follow a multivariate mixture of m Gaussians, where m is unknown (and may be very large). Using an overall mean subtraction, we can re-center the mixture to be zero-mean. $\Phi(x)$ for $x \sim p_x$ is then distributed as:

$$\Phi(x) \sim \sum_{i=1}^{m} \alpha_i \, \mathcal{N}(\mu_i, \Sigma_i), \quad \text{where } \alpha_i > 0 \;\; \forall i \;\;\text{and}\;\; \mathbb{E}_{x \sim p_x}[\Phi(x)] = 0, \qquad (7)$$

where the $\Sigma_i$ are n-dimensional covariance matrices and $\mu_i$ is the mean feature vector of mixture component i. The zero mean implies that $\mu = \sum_{i=1}^{m} \alpha_i \mu_i = 0$. For this distribution, the overall covariance matrix is given by:

$$\mathrm{Var}[\Phi(x)] = \sum_{i=1}^{m} \alpha_i \Sigma_i + \sum_{i=1}^{m} \alpha_i (\mu_i - \mu)(\mu_i - \mu)^\top = \sum_{i=1}^{m} \alpha_i \big(\Sigma_i + \mu_i \mu_i^\top\big) \triangleq \Sigma \qquad (8)$$

Now, the eigenvalues $\lambda_1, \ldots, \lambda_n$ of the overall covariance matrix Σ characterize the variance of the distribution across the n dimensions. Since Σ is positive-definite, all eigenvalues are positive (this can be shown using the fact that each covariance matrix $\Sigma_i$ is itself positive-definite, and $\mathrm{diag}(\mu_i \mu_i^\top)_k = (\mu_i^k)^2 \geq 0 \;\; \forall i, k$). Thus, to describe the variance of the feature distribution, we define Diversity.

Definition 1. Let the data distribution be $p_x$ over the space $\mathcal{X}$, and the feature extractor be given by $\Phi(\cdot)$. Then, the Diversity $\nu(\Phi, p_x)$ of the features is defined as:

$$\nu(\Phi, p_x) \triangleq \sum_{i=1}^{n} \lambda_i, \quad \text{where } \{\lambda_1, \ldots, \lambda_n\} \text{ satisfy } \det(\Sigma - \lambda_i I_n) = 0$$

This definition of diversity is consistent with multivariate analysis, and is a common measure of the total variance of a data distribution [18]. Now, let $p^L_x(\cdot)$ denote the data distribution under a large-scale image classification task such as ImageNet, and let $p^F_x(\cdot)$ denote the data distribution under a fine-grained image classification task.
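Before comparing the two settings, here is a short sketch of how the Diversity of Definition 1 can be estimated from a matrix of pooled CNN features; the helper name is illustrative, and it relies on the fact that the sum of the eigenvalues of a covariance matrix equals its trace.

```python
import torch

def diversity(features):
    """Estimate nu(Phi, p_x) from Definition 1: the sum of the eigenvalues
    of the feature covariance matrix, i.e. its trace.

    features: (N, n) tensor of features Phi(x_i) for N samples.
    """
    centered = features - features.mean(dim=0, keepdim=True)
    cov = centered.t() @ centered / (features.shape[0] - 1)
    return torch.trace(cov)
```

Comparing this estimate on ImageNet features against CUB-200-2011 features quantifies the gap asserted below in Equation 9 and visualized in Figure 1b.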
We can then characterize fine-grained problems as data distributions $p^F_x(\cdot)$ that, for any feature extractor $\Phi(\cdot)$, have the property:

$$\nu(\Phi, p^F_x) \ll \nu(\Phi, p^L_x) \qquad (9)$$

On plotting pretrained $\Phi(\cdot)$ features for both the ImageNet validation set and the validation set of CUB-200-2011 (a fine-grained dataset), we see that the CUB-200-2011 features are concentrated with a lower variance compared to the ImageNet features (see Figure 1b), consistent with Equation 9. In the next section, we describe the connections of Maximum-Entropy with model selection in fine-grained classification.

3.3 Maximum-Entropy and Model Selection

By the Tikhonov regularization of a linear classifier [10], we would want to select $\mathbf{w}$ such that $\sum_j \|w_j\|_2^2$ is small (ℓ2 regularization), in order to obtain higher generalization performance. This technique is also implemented in neural networks trained with stochastic gradient descent (SGD) through weight decay. Several recent works on spectrally-normalized risk bounds for neural networks have demonstrated that the excess risk scales with the Frobenius norm of the weights [31, 2]. Our next result provides some insight into how fine-grained problems can potentially limit model selection, by analyzing the best-case generalization gap (the difference between training and expected risk). We use the following result to lower-bound the norm of the weights $\|\mathbf{w}\|_2 = \sqrt{\sum_{i=1}^{C} \|w_i\|_2^2}$ in terms of the expected entropy and the feature diversity:

Theorem 1. Let the final layer weights be denoted by $\mathbf{w} = \{w_1, \ldots, w_C\}$, the data distribution be $p_x$ over $\mathcal{X}$, and the feature extractor be given by $\Phi(\cdot)$. Then, for the expected conditional entropy, the following holds true:

$$\|\mathbf{w}\|_2 \;\geq\; \frac{\log(C) - \mathbb{E}_{x \sim p_x}\big[H[p(\cdot \mid x;\theta)]\big]}{2\sqrt{\nu(\Phi, p_x)}}$$

A full proof of Theorem 1 is included in the supplement. Let us consider the case when ν(Φ, p_x) is large (ImageNet classification). In this case, the lower bound is very weak and inconsequential. However, in the case of small ν(Φ, p_x) (fine-grained classification), the denominator is small, and this lower bound can subsequently limit the space of model selection by only allowing models with large weight norms, leading to a larger best-case generalization gap (that is, when Theorem 1 holds with equality). We see that if the numerator is small, the diversity of the features has a smaller impact on limiting model selection, and hence it can be advantageous to maximize the prediction entropy. We note that since this is a lower bound, the proof is primarily expository and we can only comment on the best-case generalization performance. More intuitively, however, fine-grained problems will often require more information to distinguish between classes, and regularizing the prediction entropy prevents models from memorizing a lot of information about the training data, which can potentially benefit generalization. In this sense, using a Maximum-Entropy objective function is similar to an online calibration of neural network predictions [13], adapted to fine-grained problems. Now, Theorem 1 involves the expected conditional entropy over the data distribution. However, during training we only have sample access to the data distribution, which we use as a surrogate. It is then essential to ensure that the empirical estimate of the conditional entropy (from N training samples) is an accurate estimate of the true expected conditional entropy.
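The quantity in question is exactly the sample estimate of Equation 6; a sketch of computing it over a dataset is shown below, where `model` and `loader` are placeholders for any classifier and data loader.

```python
import torch
import torch.nn.functional as F

@torch.no_grad()
def empirical_conditional_entropy(model, loader, device="cpu"):
    """Empirical mean conditional entropy over a dataset (Equation 6)."""
    total, count = 0.0, 0
    for images, _ in loader:
        log_probs = F.log_softmax(model(images.to(device)), dim=1)
        entropy = -(log_probs.exp() * log_probs).sum(dim=1)  # H[p(.|x_i)] per sample
        total += entropy.sum().item()
        count += entropy.numel()
    return total / count
```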
The next result ensures that for large N, in a fine-grained classification problem, the sample estimate of the average conditional entropy is close to the expected conditional entropy.

Theorem 2. Let the final layer weights be denoted by $\mathbf{w} = \{w_1, \ldots, w_C\}$, the data distribution be $p_x$ over $\mathcal{X}$, and the feature extractor be given by $\Phi(\cdot)$. With probability at least $1 - \delta > \frac{1}{2}$ and $w^* = \max(\|w_1\|_2, \ldots, \|w_C\|_2)$, we have:

$$\Big| \hat{\mathbb{E}}_{\mathcal{D}}\big[H[p(\cdot \mid x;\theta)]\big] - \mathbb{E}_{x \sim p_x}\big[H[p(\cdot \mid x;\theta)]\big] \Big| \;\leq\; w^* \sqrt{\frac{2}{N}\, \nu(\Phi, p_x)\, \log\Big(\frac{4}{\delta}\Big)} + \tilde{O}\big(N^{-0.75}\big)$$

A full proof of Theorem 2 is included in the supplement. We see that as long as the diversity of the features is small and N is large, our estimate of the entropy will be close to its expected value. Using this result, we can express Theorem 1 in terms of the empirical mean conditional entropy.

Corollary 1. With probability at least $1 - \delta > \frac{1}{2}$, the empirical mean conditional entropy satisfies:

$$\|\mathbf{w}\|_2 \;\geq\; \frac{\log(C) - \hat{\mathbb{E}}_{x \sim \mathcal{D}}\big[H[p(\cdot \mid x;\theta)]\big]}{2\sqrt{\nu(\Phi, p_x)}} - \tilde{O}\big(N^{-0.75}\big)$$

A full proof of Corollary 1 is included in the supplement. We recover the result of Theorem 1 as $N \to \infty$. Corollary 1 shows that as long as the diversity of the features is small and N is large, the same conclusions drawn from Theorem 1 apply to the empirical mean entropy as well. We now proceed to describe the results obtained with Maximum-Entropy fine-grained classification.

4 Experiments

We perform all experiments using the PyTorch [32] framework on a cluster of NVIDIA Titan X GPUs. We now describe our results on benchmark datasets in fine-grained recognition, followed by ablation studies.

4.1 Fine-Grained Visual Classification

Maximum-Entropy training improves performance across five standard fine-grained datasets, with substantial gains for low-performing models. We obtain state-of-the-art results on all five datasets (Table 1 (A-E)). Since all these datasets are small, we report numbers averaged over 6 trials.

Table 1 (A): CUB-200-2011 [44], Top-1 accuracy
Prior work: STN [15] 84.10; Zhang et al. [47] 84.50; Lin et al. [24] 85.80; Cui et al. [6] 86.20
Our results: GoogLeNet 68.19 → MaxEnt-GoogLeNet 74.37 (+6.18); ResNet-50 75.15 → MaxEnt-ResNet-50 80.37 (+5.22); VGGNet-16 73.28 → MaxEnt-VGGNet-16 77.02 (+3.74); Bilinear CNN [25] 84.10 → MaxEnt-Bilinear CNN 85.27 (+1.17); DenseNet-161 84.21 → MaxEnt-DenseNet-161 86.54 (+2.33)

Table 1 (B): Cars [22], Top-1 accuracy
Prior work: Wang et al. [45] 85.70; Liu et al. [26] 86.80; Lin et al. [24] 92.00; Cui et al. [6] 92.40
Our results: GoogLeNet 84.85 → MaxEnt-GoogLeNet 87.02 (+2.17); ResNet-50 91.52 → MaxEnt-ResNet-50 93.85 (+2.33); VGGNet-16 80.60 → MaxEnt-VGGNet-16 83.88 (+3.28); Bilinear CNN [25] 91.20 → MaxEnt-Bilinear CNN 92.81 (+1.61); DenseNet-161 91.83 → MaxEnt-DenseNet-161 93.01 (+1.18)

Table 1 (C): Aircrafts [28], Top-1 accuracy
Prior work: Simon et al. [38] 85.50; Cui et al. [6] 86.90; LRBP [20] 87.30; Lin et al. [24] 88.50
Our results: GoogLeNet 74.04 → MaxEnt-GoogLeNet 79.16 (+5.12); ResNet-50 81.19 → MaxEnt-ResNet-50 83.86 (+2.67); VGGNet-16 74.17 → MaxEnt-VGGNet-16 78.08 (+3.91); Bilinear CNN [25] 84.10 → MaxEnt-Bilinear CNN 86.12 (+2.02); DenseNet-161 86.30 → MaxEnt-DenseNet-161 89.76 (+3.46)

Table 1 (D): NABirds [43], Top-1 accuracy
Prior work: Branson et al. [3] 35.70; Van Horn et al. [43] 75.00
Our results: GoogLeNet 70.66 → MaxEnt-GoogLeNet 73.04 (+2.38); ResNet-50 63.55 → MaxEnt-ResNet-50 69.21 (+5.66); VGGNet-16 68.34 → MaxEnt-VGGNet-16 72.62 (+4.28); Bilinear CNN [25] 80.90 → MaxEnt-Bilinear CNN 82.66 (+1.76); DenseNet-161 79.35 → MaxEnt-DenseNet-161 83.02 (+3.67)
Table 1 (E): Stanford Dogs [19], Top-1 accuracy
Prior work: Zhang et al. [48] 80.43; Krause et al. [21] 80.60
Our results: GoogLeNet 55.76 → MaxEnt-GoogLeNet 62.01 (+6.25); ResNet-50 69.92 → MaxEnt-ResNet-50 73.56 (+3.64); VGGNet-16 61.92 → MaxEnt-VGGNet-16 65.44 (+3.52); Bilinear CNN [25] 82.13 → MaxEnt-Bilinear CNN 83.18 (+1.05); DenseNet-161 81.18 → MaxEnt-DenseNet-161 83.63 (+2.45)

Table 1: Maximum-Entropy training (MaxEnt) obtains state-of-the-art performance on five widely-used fine-grained visual classification datasets (A-E). The improvement over the corresponding baseline model is shown in parentheses. All results are averaged over 6 trials.

Method                 CIFAR-10        CIFAR-100
GoogLeNet              84.16           70.24
MaxEnt + GoogLeNet     84.10 (-0.06)   73.50 (+3.26)
DenseNet-121           92.19           75.01
MaxEnt + DenseNet-121  92.22 (+0.03)   76.22 (+1.21)

Table 2: Maximum-Entropy obtains larger gains on the finer CIFAR-100 dataset as compared to CIFAR-10. The improvement over the baseline model is shown in parentheses.

Classification Accuracy: First, we observe that Maximum-Entropy training obtains significant performance gains when fine-tuning from models trained on the ImageNet dataset (e.g., GoogLeNet [40], ResNet-50 [14]). For example, on the CUB-200-2011 dataset, standard fine-tuning of GoogLeNet gives an accuracy of 68.19%, whereas fine-tuning with Maximum-Entropy gives 74.37%, a large improvement that persists across datasets. Since many fine-tuning tasks use general base models such as GoogLeNet and ResNet, this result is relevant to the large number of applications that involve fine-tuning on specialized datasets. Maximum-Entropy classification also improves prediction performance for CNN architectures specifically designed for fine-grained visual classification. For instance, it improves the performance of the Bilinear CNN [25] on all 5 datasets and obtains state-of-the-art results, to the best of our knowledge. The gains are smaller, since these architectures already improve the diversity of the features through localization, and hence maximizing entropy is less crucial in this case. However, it is important to note that most pooling architectures [25] use a large base model (such as VGGNet [39]) and an expensive pooling operation. They are thus computationally very expensive, and infeasible for tasks with resource constraints on data and computation time.

Increase in Generality of Features: We hypothesize that Maximum-Entropy training will encourage the classifier to reduce the specificity of the features. To evaluate this hypothesis, we perform an eigendecomposition of the covariance matrix of the pool5-layer features of GoogLeNet trained on CUB-200-2011, and analyze the trend of the sorted eigenvalues (Figure 2a). We examine the features from CNNs with (i) no fine-tuning ("Basic"), (ii) regular fine-tuning, and (iii) fine-tuning with Maximum-Entropy. For a feature matrix with large covariance between the features of different classes, we would expect the first few eigenvalues to be large and the rest to diminish quickly, since fewer orthogonal components can summarize the data. Conversely, for a completely uncorrelated feature matrix, we would see a longer tail in the decreasing magnitudes of the eigenvalues. Figure 2a shows that for the Basic features (with no fine-tuning), there is a fat tail on both the training and test sets, due to the presence of a large number of uncorrelated features. After fine-tuning on the training data, we observe a reduction in the tail of the curve, implying that some generality has been introduced into the features through fine-tuning. The test curve follows a similar decrease, justifying the increase in test accuracy. Finally, for Maximum-Entropy, we observe a substantial decrease in the width of the tail of eigenvalue magnitudes, suggesting a larger increase in the generality of the features on both the training and test sets, which confirms our hypothesis.
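The diagnostic behind Figure 2a can be sketched as follows (a simplified version; the exact preprocessing of the pool5 activations is omitted here):

```python
import torch

def log_eigenvalue_spectrum(features):
    """Sorted log-eigenvalue spectrum of the feature covariance matrix,
    as used for the generality analysis in Figure 2a (sketch).

    features: (N, n) tensor of pool5 activations.
    """
    centered = features - features.mean(dim=0, keepdim=True)
    cov = centered.t() @ centered / (features.shape[0] - 1)
    eigvals = torch.linalg.eigvalsh(cov)                 # real eigenvalues, ascending
    return torch.log(eigvals.flip(0).clamp_min(1e-12))   # log(lambda_i), descending
```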
Method              Random-ImageNet   Dogs-ImageNet
GoogLeNet           71.85             62.28
MaxEnt + GoogLeNet  72.20 (+0.35)     64.91 (+2.63)
ResNet-50           82.01             73.81
MaxEnt + ResNet-50  82.29 (+0.28)     75.66 (+1.86)

Table 3: Maximum-Entropy obtains larger gains on a subset of ImageNet containing dog sub-classes than on a randomly chosen subset of the same size that has higher visual diversity. The improvement over the baseline model (in cross-validation) is shown in parentheses.

Effect on Prediction Probabilities: With Maximum-Entropy training, the predicted probability vector is smoother, leading to a higher cross-entropy during both training and validation. We observe that the average probability of the top predicted class decreases significantly with Maximum-Entropy, as predicted by the mathematical formulation (for γ = 1). On the CUB-200-2011 dataset with the GoogLeNet architecture, the mean probability of the top class is 0.34 with Maximum-Entropy, compared to 0.77 without it. Moreover, the tail of the probability values is fatter with Maximum-Entropy, as depicted in Figure 2b.

[Figure 2 axes and legends: (a) log eigenvalue vs. index i (up to 1000), for Train/Test versions of Basic, Fine-Tuned, and Fine-Tuned with Entropy features; (b) mean logit value vs. top-k index (1-20), for Standard SGD and SGD + Maximum-Entropy.]
Figure 2: (a) Maximum-Entropy training encourages the network to reduce the specificity of the features, which is reflected in the longer tail of eigenvalues of the covariance matrix of pool5 GoogLeNet features for both the training and test sets of CUB-200-2011. We plot log(λ_i) for the i-th eigenvalue λ_i obtained after decomposition of the test set (dashed) and training set (solid) features (for γ = 1). (b) With Maximum-Entropy training, the predicted logit vector is smoother with a fatter tail (GoogLeNet on CUB-200-2011).

4.2 Ablation Studies

CIFAR-10 and CIFAR-100: We evaluate Maximum-Entropy on the CIFAR-10 and CIFAR-100 datasets [23]. CIFAR-100 has the same set of images as CIFAR-10 but with finer category distinctions in the labels: each of the 20 superclasses contains five finer divisions, giving 100 categories in total. Therefore, we expect (and observe) that Maximum-Entropy training provides stronger gains on CIFAR-100 than on CIFAR-10 across models (Table 2).

Method (Top-1)        CUB-200-2011   Cars    Aircrafts   NABirds   Stanford Dogs
VGGNet-16    MaxEnt   77.02          83.88   78.08       72.62     65.44
             LSR      70.03          81.45   75.06       69.28     63.06
ResNet-50    MaxEnt   80.37          93.85   83.86       69.21     73.56
             LSR      78.20          92.04   81.26       64.02     70.03
DenseNet-161 MaxEnt   86.54          93.01   89.76       83.02     83.63
             LSR      84.86          91.96   87.05       80.11     82.98

Table 4: Maximum-Entropy training obtains much larger gains on fine-grained visual classification than Label-Smoothing Regularization (LSR) [41].
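For comparison, a sketch of the Label-Smoothing Regularization baseline reported in Table 4 is given below: cross-entropy against targets smoothed toward the uniform distribution. The smoothing factor eps=0.1 is a common default and is not necessarily the value used in these experiments.

```python
import torch
import torch.nn.functional as F

def label_smoothing_loss(logits, targets, eps=0.1):
    """Cross-entropy against label-smoothed targets (LSR baseline, sketch)."""
    num_classes = logits.shape[1]
    log_probs = F.log_softmax(logits, dim=1)
    # Smoothed target: (1 - eps) on the true class, eps spread uniformly over classes.
    smooth = torch.full_like(log_probs, eps / num_classes)
    smooth.scatter_(1, targets.unsqueeze(1), 1.0 - eps + eps / num_classes)
    return -(smooth * log_probs).sum(dim=1).mean()
```

Unlike the entropy term in Equation 1, LSR fixes the target distribution in advance rather than adapting to the classifier's own predictions.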
[Figure 3 axes and legends: (a) test accuracy vs. γ on a log scale from 10^-4 to 10^5; (b) test accuracy vs. percentage of label noise (0.0-0.9) for SGD and SGD + Maximum-Entropy; (c) training cross-entropy vs. training epoch for γ = 0, 0.1, 1, 10.]
Figure 3: (a) Classification performance is robust to the choice of γ over a large region, as shown here for CUB-200-2011 with the VGGNet-16 and Bilinear CNN models. (b) Maximum-Entropy is more robust to increasing amounts of label noise (CUB-200-2011 on GoogLeNet with γ = 1). (c) Maximum-Entropy obtains higher validation performance despite a higher training cross-entropy loss.

ImageNet Ablation Experiment: To understand the effect of Maximum-Entropy training on datasets with more samples than the small fine-grained datasets, we create two synthetic datasets: (i) Random-ImageNet, formed by selecting 116K images from a random subset of 117 classes of ImageNet [7], and (ii) Dogs-ImageNet, formed by selecting all classes from ImageNet that have dogs as labels, which has the same number of images and classes as Random-ImageNet. Dogs-ImageNet has less diversity than Random-ImageNet, and thus we expect the gains from Maximum-Entropy to be higher. In a 5-way cross-validation on both datasets, we observe higher gains on the Dogs-ImageNet dataset for two CNN models (Table 3).

Choice of Hyperparameter γ: An integral component of regularization is the choice of the weighting parameter. We find that performance is fairly robust to the choice of γ (Figure 3a). Please see the supplement for experiment-wise details.

Robustness to Label Noise: In this experiment, we gradually introduce label noise by randomly permuting the labels of an increasing fraction of the training data. We follow an evaluation protocol identical to the previous experiment, and observe that Maximum-Entropy is more robust to label noise (Figure 3b).

Training Cross-Entropy and Validation Accuracy: We expect Maximum-Entropy training to provide higher accuracy at the cost of higher training cross-entropy. In Figure 3c, we show that we achieve a higher validation accuracy when training with Maximum-Entropy, despite the training cross-entropy loss converging to a higher value.

Comparison with Label-Smoothing Regularization: Label-Smoothing Regularization [41] penalizes the KL-divergence of the classifier output from the uniform distribution, and is also a method to prevent peaky distributions. Comparing against Label-Smoothing Regularization, we find that Maximum-Entropy provides much larger gains on fine-grained recognition (see Table 4).

5 Discussion and Conclusion

Many real-world applications of computer vision models involve extensive fine-tuning on small, relatively imbalanced datasets with much smaller diversity in the training set than the large-scale datasets their base models were trained on; a notable example is fine-grained recognition. In this domain, Maximum-Entropy training provides an easy-to-implement and simple-to-understand training procedure that consistently improves performance. There are several extensions that can be explored: explicitly enforcing a large diversity in the features through a different regularizer would be an interesting follow-up to this study, as would extensions to large-scale problems that tackle clusters of diverse objects separately. We leave these as future work, with our results as a starting point.

Acknowledgements: We thank Ryan Farrell, Pei Guo, Xavier Boix, Dhaval Adjodah, Spandan Madan, and Ishaan Grover for their feedback on the project, and Google's TensorFlow Research Cloud Program for providing TPU computing resources.

References
[1] Sumiyoshi Abe and Yuko Okamoto. Nonextensive statistical mechanics and its applications, volume 560. Springer Science & Business Media, 2001.
[2] Peter L. Bartlett, Dylan J. Foster, and Matus J. Telgarsky. Spectrally-normalized margin bounds for neural networks. In Advances in Neural Information Processing Systems, pages 6240-6249, 2017.
[3] Steve Branson, Grant Van Horn, Serge Belongie, and Pietro Perona. Bird species categorization using pose normalized deep convolutional nets. arXiv preprint arXiv:1406.2952, 2014.
[4] Yihua Chen, Eric K. Garcia, Maya R. Gupta, Ali Rahimi, and Luca Cazzanti. Similarity-based classification: Concepts and algorithms. Journal of Machine Learning Research, 10(Mar):747-776, 2009.
[5] Mircea Cimpoi, Subhransu Maji, and Andrea Vedaldi. Deep filter banks for texture recognition and segmentation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 3828-3836, 2015.
[6] Yin Cui, Feng Zhou, Jiang Wang, Xiao Liu, Yuanqing Lin, and Serge Belongie. Kernel pooling for convolutional neural networks. IEEE Conference on Computer Vision and Pattern Recognition, 2017.
[7] J. Deng, W. Dong, R. Socher, L.-J. Li, K. Li, and L. Fei-Fei. ImageNet: A large-scale hierarchical image database. In CVPR09, 2009.
[8] Mario A. T. Figueiredo and Anil K. Jain. Unsupervised learning of finite mixture models. IEEE Transactions on Pattern Analysis and Machine Intelligence, 24(3):381-396, 2002.
[9] Yang Gao, Oscar Beijbom, Ning Zhang, and Trevor Darrell. Compact bilinear pooling. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 317-326, 2016.
[10] Gene H. Golub, Per Christian Hansen, and Dianne P. O'Leary. Tikhonov regularization and total least squares. SIAM Journal on Matrix Analysis and Applications, 21(1):185-194, 1999.
[11] Yves Grandvalet and Yoshua Bengio. Entropy regularization.
[12] Stephen F. Gull. Bayesian inductive inference and maximum entropy. In Maximum-Entropy and Bayesian Methods in Science and Engineering, pages 53-74. Springer, 1988.
[13] Chuan Guo, Geoff Pleiss, Yu Sun, and Kilian Q. Weinberger. On calibration of modern neural networks. arXiv preprint arXiv:1706.04599, 2017.
[14] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 770-778, 2016.
[15] Max Jaderberg, Karen Simonyan, Andrew Zisserman, and Koray Kavukcuoglu. Spatial transformer networks. In Advances in Neural Information Processing Systems, pages 2017-2025, 2015.
[16] Edwin T. Jaynes. Information theory and statistical mechanics. Physical Review, 106(4):620, 1957.
[17] Herve Jegou, Florent Perronnin, Matthijs Douze, Jorge Sánchez, Patrick Perez, and Cordelia Schmid. Aggregating local image descriptors into compact codes. IEEE Transactions on Pattern Analysis and Machine Intelligence, 34(9):1704-1716, 2012.
[18] Dag Jonsson. Some limit theorems for the eigenvalues of a sample covariance matrix. Journal of Multivariate Analysis, 12(1):1-38, 1982.
[19] Aditya Khosla, Nityananda Jayadevaprakash, Bangpeng Yao, and Fei-Fei Li. Novel dataset for fine-grained image categorization: Stanford Dogs.
[20] Shu Kong and Charless Fowlkes. Low-rank bilinear pooling for fine-grained classification. IEEE Conference on Computer Vision and Pattern Recognition, pages 7025-7034, 2017.
[21] Jonathan Krause, Benjamin Sapp, Andrew Howard, Howard Zhou, Alexander Toshev, Tom Duerig, James Philbin, and Li Fei-Fei. The unreasonable effectiveness of noisy data for fine-grained recognition. In European Conference on Computer Vision, pages 301-320. Springer, 2016.
[22] Jonathan Krause, Michael Stark, Jia Deng, and Li Fei-Fei. 3D object representations for fine-grained categorization. In Proceedings of the IEEE International Conference on Computer Vision Workshops, pages 554-561, 2013.
[23] Alex Krizhevsky, Vinod Nair, and Geoffrey Hinton. The CIFAR-10 dataset, 2014.
[24] Tsung-Yu Lin and Subhransu Maji. Improved bilinear pooling with CNNs. arXiv preprint arXiv:1707.06772, 2017.
[25] Tsung-Yu Lin, Aruni RoyChowdhury, and Subhransu Maji. Bilinear CNN models for fine-grained visual recognition. In Proceedings of the IEEE International Conference on Computer Vision, pages 1449-1457, 2015.
[26] Maolin Liu, Chengyue Yu, Hefei Ling, and Jie Lei. Hierarchical joint CNN-based models for fine-grained cars recognition. In International Conference on Cloud Computing and Security, pages 337-347. Springer, 2016.
[27] Yuping Luo, Chung-Cheng Chiu, Navdeep Jaitly, and Ilya Sutskever. Learning online alignments with continuous rewards policy gradient. arXiv preprint arXiv:1608.01281, 2016.
[28] Subhransu Maji, Esa Rahtu, Juho Kannala, Matthew Blaschko, and Andrea Vedaldi. Fine-grained visual classification of aircraft. arXiv preprint arXiv:1306.5151, 2013.
[29] Volodymyr Mnih, Adria Puigdomenech Badia, Mehdi Mirza, Alex Graves, Timothy Lillicrap, Tim Harley, David Silver, and Koray Kavukcuoglu. Asynchronous methods for deep reinforcement learning. In International Conference on Machine Learning, pages 1928-1937, 2016.
[30] Mohammad Moghimi, Mohammad Saberian, Jian Yang, Li-Jia Li, Nuno Vasconcelos, and Serge Belongie. Boosted convolutional neural networks. In British Machine Vision Conference (BMVC), York, UK, 2016.
[31] Behnam Neyshabur, Srinadh Bhojanapalli, David McAllester, and Nathan Srebro. A PAC-Bayesian approach to spectrally-normalized margin bounds for neural networks. arXiv preprint arXiv:1707.09564, 2017.
[32] Adam Paszke and Soumith Chintala. Tensors and dynamic neural networks in Python with strong GPU acceleration. https://github.com/pytorch. Accessed: January 1, 2017.
[33] Gabriel Pereyra, George Tucker, Jan Chorowski, Łukasz Kaiser, and Geoffrey Hinton. Regularizing neural networks by penalizing confident output distributions. arXiv preprint arXiv:1701.06548, 2017.
[34] Florent Perronnin, Jorge Sánchez, and Thomas Mensink. Improving the Fisher kernel for large-scale image classification. Computer Vision - ECCV 2010, pages 143-156, 2010.
[35] Shaoqing Ren, Kaiming He, Ross Girshick, and Jian Sun. Faster R-CNN: Towards real-time object detection with region proposal networks. In Advances in Neural Information Processing Systems, pages 91-99, 2015.
[36] Kenneth Rose. Deterministic annealing for clustering, compression, classification, regression, and related optimization problems. Proceedings of the IEEE, 86(11):2210-2239, 1998.
[37] John Shawe-Taylor and David Hardoon. PAC-Bayes analysis of maximum entropy classification. In Artificial Intelligence and Statistics, pages 480-487, 2009.
[38] Marcel Simon, Erik Rodner, Yang Gao, Trevor Darrell, and Joachim Denzler. Generalized orderless pooling performs implicit salient matching. arXiv preprint arXiv:1705.00487, 2017.
[39] Karen Simonyan and Andrew Zisserman. Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556, 2014.
[40] Christian Szegedy, Wei Liu, Yangqing Jia, Pierre Sermanet, Scott Reed, Dragomir Anguelov, Dumitru Erhan, Vincent Vanhoucke, and Andrew Rabinovich. Going deeper with convolutions. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 1-9, 2015.
[41] Christian Szegedy, Vincent Vanhoucke, Sergey Ioffe, Jon Shlens, and Zbigniew Wojna. Rethinking the inception architecture for computer vision. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 2818-2826, 2016.
[42] Martin Szummer and Tommi Jaakkola. Partially labeled classification with Markov random walks. In Advances in Neural Information Processing Systems, pages 945-952, 2002.
[43] Grant Van Horn, Steve Branson, Ryan Farrell, Scott Haber, Jessie Barry, Panos Ipeirotis, Pietro Perona, and Serge Belongie. Building a bird recognition app and large scale dataset with citizen scientists: The fine print in fine-grained dataset collection. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 595-604, 2015.
[44] Catherine Wah, Steve Branson, Peter Welinder, Pietro Perona, and Serge Belongie. The Caltech-UCSD Birds-200-2011 dataset. 2011.
[45] Yaming Wang, Jonghyun Choi, Vlad Morariu, and Larry S. Davis. Mining discriminative triplets of patches for fine-grained classification. In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), June 2016.
[46] Ning Zhang, Ryan Farrell, and Trevor Darrell. Pose pooling kernels for sub-category recognition. In Computer Vision and Pattern Recognition (CVPR), 2012 IEEE Conference on, pages 3665-3672. IEEE, 2012.
[47] Xiaopeng Zhang, Hongkai Xiong, Wengang Zhou, Weiyao Lin, and Qi Tian. Picking deep filter responses for fine-grained image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 1134-1142, 2016.
[48] Yu Zhang, Xiu-Shen Wei, Jianxin Wu, Jianfei Cai, Jiangbo Lu, Viet-Anh Nguyen, and Minh N. Do. Weakly supervised fine-grained categorization with part-based image representation. IEEE Transactions on Image Processing, 25(4):1713-1725, 2016.
[49] Jun Zhu and Eric P. Xing. Maximum entropy discrimination Markov networks. Journal of Machine Learning Research, 10(Nov):2531-2569, 2009.