# HOPFIELD NETWORKS IS ALL YOU NEED

Hubert Ramsauer, Bernhard Schäfl, Johannes Lehner, Philipp Seidl, Michael Widrich, Thomas Adler, Lukas Gruber, Markus Holzleitner, David Kreil, Michael Kopp, Günter Klambauer, Johannes Brandstetter, Sepp Hochreiter

ELLIS Unit Linz, LIT AI Lab, Institute for Machine Learning, Johannes Kepler University Linz, Austria; Institute of Advanced Research in Artificial Intelligence (IARAI)
Email: {ramsauer,schaefl,brandstetter,hochreit}@ml.jku.at

We introduce a modern Hopfield network with continuous states and a corresponding update rule. The new Hopfield network can store exponentially (with the dimension of the associative space) many patterns, retrieves the pattern with one update, and has exponentially small retrieval errors. It has three types of energy minima (fixed points of the update): (1) global fixed point averaging over all patterns, (2) metastable states averaging over a subset of patterns, and (3) fixed points which store a single pattern. The new update rule is equivalent to the attention mechanism used in transformers. This equivalence enables a characterization of the heads of transformer models. These heads perform in the first layers preferably global averaging and in higher layers partial averaging via metastable states. The new modern Hopfield network can be integrated into deep learning architectures as layers to allow the storage of and access to raw input data, intermediate results, or learned prototypes. These Hopfield layers enable new ways of deep learning, beyond fully-connected, convolutional, or recurrent networks, and provide pooling, memory, association, and attention mechanisms. We demonstrate the broad applicability of the Hopfield layers across various domains. Hopfield layers improved state-of-the-art on three out of four considered multiple instance learning problems as well as on immune repertoire classification with several hundreds of thousands of instances. On the UCI benchmark collections of small classification tasks, where deep learning methods typically struggle, Hopfield layers yielded a new state-of-the-art when compared to different machine learning methods. Finally, Hopfield layers achieved state-of-the-art on two drug design datasets. The implementation is available at: https://github.com/ml-jku/hopfield-layers

1 INTRODUCTION

The deep learning community has been looking for alternatives to recurrent neural networks (RNNs) for storing information. For example, linear memory networks use a linear autoencoder for sequences as a memory (Carta et al., 2020). Additional memories for RNNs like holographic reduced representations (Danihelka et al., 2016), tensor product representations (Schlag & Schmidhuber, 2018; Schlag et al., 2019) and classical associative memories (extended to fast weight approaches) (Schmidhuber, 1992; Ba et al., 2016a;b; Zhang & Zhou, 2017; Schlag et al., 2021) have been suggested. Most approaches to new memories are based on attention. The neural Turing machine (NTM) is equipped with an external memory and an attention process (Graves et al., 2014). Memory networks (Weston et al., 2014) use an arg max attention by first mapping a query and patterns into a space and then retrieving the pattern with the largest dot product. End-to-end memory networks (EMN) make this attention scheme differentiable by replacing arg max with a softmax (Sukhbaatar et al., 2015a;b).
EMN with dot products became very popular and implement a key-value attention (Daniluk et al., 2017) for self-attention. An enhancement of EMN is the transformer (Vaswani et al., 2017a;b) and its extensions (Dehghani et al., 2018). The transformer has had a great impact on the natural language processing (NLP) community, in particular via the BERT models (Devlin et al., 2018; 2019).

Contribution of this work: (i) introducing novel deep learning layers that are equipped with a memory via modern Hopfield networks, (ii) introducing a novel energy function and a novel update rule for continuous modern Hopfield networks that are differentiable and typically retrieve patterns after one update. Differentiability is required for gradient descent parameter updates, and retrieval with one update is compatible with activating the layers of deep networks.

We suggest using modern Hopfield networks to store information or learned prototypes in different layers of neural networks. Binary Hopfield networks were introduced as associative memories that can store and retrieve patterns (Hopfield, 1982). A query pattern can retrieve the pattern to which it is most similar or an average over similar patterns. Hopfield networks may seem to be an ancient technique; however, new energy functions have improved their properties. The stability of spurious states or metastable states was appreciably reduced (Barra et al., 2018). The largest and most impactful successes are reported on increasing the storage capacity of Hopfield networks. In a $d$-dimensional space, the standard Hopfield model can store $d$ uncorrelated patterns without errors but only $C d / \log(d)$ random patterns with $C < 1/2$ for a fixed stable pattern or $C < 1/4$ if all patterns are stable (McEliece et al., 1987). The same bound holds for nonlinear learning rules (Mazza, 1997). Using tricks-of-trade and allowing small retrieval errors, the storage capacity is about $0.138\, d$ (Crisanti et al., 1986; Hertz et al., 1991; Torres et al., 2002). If the learning rule is not related to the Hebb rule, then up to $d$ patterns can be stored (Abu-Mostafa & St Jacques, 1985). For Hopfield networks with non-zero diagonal matrices, the storage can be increased to $C d \log(d)$ (Folli et al., 2017). In contrast to the storage capacity, the number of energy minima (spurious states, stable states) of Hopfield networks is exponential in $d$ (Tanaka & Edwards, 1980; Bruck & Roychowdhury, 1990; Wainrib & Touboul, 2013).

The standard binary Hopfield network has an energy function that can be expressed as the sum of interaction functions $F$ with $F(x) = x^2$. Modern Hopfield networks, also called dense associative memory (DAM) models, use an energy function with interaction functions of the form $F(x) = x^n$ and, thereby, achieve a storage capacity proportional to $d^{n-1}$ (Krotov & Hopfield, 2016; 2018). The energy function of modern Hopfield networks makes them robust against adversarial attacks (Krotov & Hopfield, 2018). Modern binary Hopfield networks with energy functions based on interaction functions of the form $F(x) = \exp(x)$ even lead to a storage capacity of $2^{d/2}$, where all stored binary patterns are fixed points but the radius of attraction vanishes (Demircigil et al., 2017). However, in order to integrate Hopfield networks into deep learning architectures, it is necessary to make them differentiable, that is, we require continuous Hopfield networks (Hopfield, 1984; Koiran, 1994). Therefore, we generalize the energy function of Demircigil et al.
(2017), which builds on exponential interaction functions, to continuous patterns and states, and obtain a new modern Hopfield network. We also propose a new update rule which ensures global convergence to stationary points of the energy (local minima or saddle points). We prove that our new modern Hopfield network typically retrieves patterns in one update step ($\epsilon$-close to the fixed point) with an exponentially low error and has a storage capacity proportional to $c^{\frac{d-1}{4}}$ (reasonable settings for $c = 1.37$ and $c = 3.15$ are given in Theorem 3). The retrieval of patterns with one update is important to integrate Hopfield networks in deep learning architectures, where layers are activated only once. Surprisingly, our new update rule is also the key-value attention as used in transformer and BERT models (see Fig. 1). Our modern Hopfield networks can be integrated as a new layer in deep learning architectures for pooling, memory, prototype learning, and attention. We test these new layers on different benchmark datasets and tasks like immune repertoire classification.

Figure 1: We generalize the energy of binary modern Hopfield networks to continuous states while keeping fast convergence and storage capacity properties. We also propose a new update rule that minimizes the energy. The new update rule is the attention mechanism of the transformer. Formulae are modified to express the softmax as a row vector. The "="-sign means "keeps the properties".

2 MODERN HOPFIELD NETS WITH CONTINUOUS STATES

New energy function for continuous state Hopfield networks. In order to integrate modern Hopfield networks into deep learning architectures, we have to make them continuous. To allow for continuous states, we propose a new energy function that is a modification of the energy of modern Hopfield networks (Demircigil et al., 2017). We also propose a new update rule which can be proven to converge to stationary points of the energy (local minima or saddle points).

We have $N$ stored (key) patterns $x_i \in \mathbb{R}^d$, represented by the matrix $X = (x_1, \ldots, x_N)$, with the largest pattern norm $M = \max_i \lVert x_i \rVert$. The state (query) pattern is $\xi \in \mathbb{R}^d$. For exponential interaction functions, we need the log-sum-exp function (lse) for $0 < \beta$:

$$\mathrm{lse}(\beta, x) \;=\; \beta^{-1} \log\left( \sum_{i=1}^N \exp(\beta x_i) \right) , \qquad (1)$$

which is convex (see appendix Eq. (461) and Lemma A22). The energy function E of the modern Hopfield networks for binary patterns $x_i$ and a binary state pattern $\xi$ is $\mathrm{E} = -\sum_{i=1}^N F(\xi^T x_i)$ (Krotov & Hopfield, 2016). Here, $F(x) = x^n$ is the interaction function, where $n = 2$ gives the classical Hopfield network. The storage capacity is proportional to $d^{n-1}$ (Krotov & Hopfield, 2016). This model was generalized by Demircigil et al. (2017) to exponential interaction functions $F(x) = \exp(x)$, which gives the energy $\mathrm{E} = -\exp(\mathrm{lse}(1, X^T \xi))$. This energy leads to an exponential storage capacity of $N = 2^{d/2}$ for binary patterns. Furthermore, with a single update, the fixed point is recovered with high probability for random patterns. However, still this modern Hopfield network has binary states.

We generalize this energy function to continuous-valued patterns while keeping the properties of the modern Hopfield networks like the exponential storage capacity and the extremely fast convergence (see Fig. 1). For the new energy we take the logarithm of the negative energy of modern Hopfield networks and add a quadratic term of the current state. The quadratic term ensures that the norm of the state vector $\xi$ remains finite and the energy is bounded.
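To make this construction explicit, the following short derivation sketch (not verbatim from the paper) spells out the step from the binary modern Hopfield energy with exponential interaction function to the continuous energy defined in the next paragraph; the constants $\beta^{-1}\log N$ and $\tfrac{1}{2}M^2$ only shift the energy so that it is bounded and non-negative (see Lemma A1 in the appendix).

```latex
% From the binary modern Hopfield energy with exponential interaction function
%   E_binary = - \sum_{i=1}^N \exp(\beta x_i^T \xi)
% take beta^{-1} times the logarithm of its negative and add the quadratic term:
\begin{align}
  \beta^{-1} \log\bigl(-\mathrm{E}_{\text{binary}}\bigr)
    &= \beta^{-1} \log \sum_{i=1}^N \exp\bigl(\beta\, x_i^T \xi\bigr)
     = \operatorname{lse}\bigl(\beta, X^T \xi\bigr) , \\
  \mathrm{E}
    &= -\operatorname{lse}\bigl(\beta, X^T \xi\bigr)
       + \tfrac{1}{2}\, \xi^T \xi + \beta^{-1} \log N + \tfrac{1}{2}\, M^2 .
\end{align}
```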
Classical Hopfield networks do not require the norm of their state vector to be bounded, since it is binary and has a fixed length. We define the novel energy function E as

$$\mathrm{E} \;=\; -\,\mathrm{lse}(\beta, X^T \xi) \;+\; \frac{1}{2}\, \xi^T \xi \;+\; \beta^{-1} \log N \;+\; \frac{1}{2}\, M^2 . \qquad (2)$$

We have $0 \leq \mathrm{E} \leq 2 M^2$ (see appendix Lemma A1). Using $p = \mathrm{softmax}(\beta X^T \xi)$, we define a novel update rule (see Fig. 1):

$$\xi^{\mathrm{new}} \;=\; f(\xi) \;=\; X p \;=\; X\, \mathrm{softmax}(\beta X^T \xi) . \qquad (3)$$

The next theorem states that the update rule Eq. (3) converges globally. The proof uses the Concave Convex Procedure (CCCP) (Yuille & Rangarajan, 2002; 2003), which is equivalent to Legendre minimization (Rangarajan et al., 1996; 1999) algorithms (Yuille & Rangarajan, 2003).

Theorem 1. The update rule Eq. (3) converges globally: For $\xi^{t+1} = f(\xi^t)$, the energy $\mathrm{E}(\xi^t) \to \mathrm{E}(\xi^*)$ for $t \to \infty$ and a fixed point $\xi^*$.

Proof. The update rule in Eq. (3) is the CCCP for minimizing the energy E, which is the sum of the convex function $\frac{1}{2}\xi^T\xi$ and the concave function $-\mathrm{lse}$ (see details in appendix Theorem 1). Theorem 2 in Yuille & Rangarajan (2002) yields the global convergence property. Also, in Theorem 2 in Sriperumbudur & Lanckriet (2009) the global convergence of CCCP is proven via a rigorous analysis using Zangwill's global convergence theory of iterative algorithms.

The global convergence theorem only assures that the energy converges, $\mathrm{E}(\xi^t) \to \mathrm{E}(\xi^*)$ for $t \to \infty$, but not that $\xi^t \to \xi^*$. The next theorem strengthens Zangwill's global convergence theorem (Meyer, 1976) and gives convergence results similar to those known for expectation maximization (Wu, 1983).

Theorem 2. For the iteration Eq. (3) we have $\mathrm{E}(\xi^t) \to \mathrm{E}(\xi^*) = \mathrm{E}^*$ as $t \to \infty$, for some stationary point $\xi^*$. Furthermore, $\lVert \xi^{t+1} - \xi^t \rVert \to 0$ and either $\{\xi^t\}_{t=0}^{\infty}$ converges or, in the other case, the set of limit points of $\{\xi^t\}_{t=0}^{\infty}$ is a connected and compact subset of $\mathcal{L}(\mathrm{E}^*)$, where $\mathcal{L}(a) = \{\xi \in \mathcal{L} \mid \mathrm{E}(\xi) = a\}$ and $\mathcal{L}$ is the set of stationary points of the iteration Eq. (3). If $\mathcal{L}(\mathrm{E}^*)$ is finite, then any sequence $\{\xi^t\}_{t=0}^{\infty}$ generated by the iteration Eq. (3) converges to some $\xi^* \in \mathcal{L}(\mathrm{E}^*)$.

For a proof, see appendix Theorem 2. Therefore, all the limit points of any sequence generated by the iteration Eq. (3) are stationary points (local minima or saddle points) of the energy function E. Either the iteration converges or, otherwise, the set of limit points is a connected and compact set. The next theorem gives the results on the storage capacity of our new continuous state modern Hopfield network. We first define what we mean by storing and retrieving patterns using a modern Hopfield network with continuous states.

Definition 1 (Pattern Stored and Retrieved). We assume that around every pattern $x_i$ a sphere $S_i$ is given. We say $x_i$ is stored if there is a single fixed point $x_i^* \in S_i$ to which all points $\xi \in S_i$ converge, and $S_i \cap S_j = \emptyset$ for $i \neq j$. We say $x_i$ is retrieved for a given $\epsilon$ if iteration (update rule) Eq. (3) gives a point $\tilde{x}_i$ that is at least $\epsilon$-close to the single fixed point $x_i^* \in S_i$. The retrieval error is $\lVert \tilde{x}_i - x_i \rVert$.

As with classical Hopfield networks, we consider patterns on the sphere, i.e. patterns with a fixed norm. For randomly chosen patterns, the number of patterns that can be stored is exponential in the dimension $d$ of the space of the patterns ($x_i \in \mathbb{R}^d$).

Theorem 3. We assume a failure probability $0 < p \leq 1$ and randomly chosen patterns on the sphere with radius $M := K \sqrt{d-1}$. We define $a := \frac{2}{d-1}\left(1 + \ln(2 \beta K^2 p (d-1))\right)$, $b := \frac{2 K^2 \beta}{5}$, and $c := \frac{b}{W_0(\exp(a + \ln(b)))}$, where $W_0$ is the upper branch of the Lambert $W$ function (Olver et al., 2010, (4.13)), and ensure $c \geq \left(\frac{2}{\sqrt{p}}\right)^{\frac{4}{d-1}}$.
Then with probability $1 - p$, the number of random patterns that can be stored is

$$N \;\geq\; \sqrt{p}\; c^{\frac{d-1}{4}} . \qquad (4)$$

Therefore, it is proven for $c \geq 3.1546$ with $\beta = 1$, $K = 3$, $d = 20$ and $p = 0.001$ ($a + \ln(b) > 1.27$) and proven for $c \geq 1.3718$ with $\beta = 1$, $K = 1$, $d = 75$, and $p = 0.001$ ($a + \ln(b) < -0.94$). For a proof, see appendix Theorem A5.

The next theorem states that the update rule typically retrieves patterns after one update. Retrieval of a pattern $x_i$ for fixed point $x_i^*$ and query $\xi$ is defined via an $\epsilon$ by $\lVert f(\xi) - x_i^* \rVert < \epsilon$, that is, the update is $\epsilon$-close to the fixed point. Retrieval with one update is crucial to integrate modern Hopfield networks into deep learning architectures, where layers are activated only once. First we need the concept of separation of a pattern. For pattern $x_i$ we define its separation $\Delta_i$ to other patterns by:

$$\Delta_i \;:=\; \min_{j, j \neq i} \left( x_i^T x_i - x_i^T x_j \right) \;=\; x_i^T x_i \;-\; \max_{j, j \neq i} x_i^T x_j . \qquad (5)$$

The update rule retrieves patterns with one update for well separated patterns, that is, patterns with large $\Delta_i$.

Theorem 4. With query $\xi$, after one update the distance of the new point $f(\xi)$ to the fixed point $x_i^*$ is exponentially small in the separation $\Delta_i$. The precise bounds using the Jacobian $J = \frac{\partial f(\xi)}{\partial \xi}$ and its value $J^m$ in the mean value theorem are:

$$\lVert f(\xi) - x_i^* \rVert \;\leq\; \lVert J^m \rVert_2\, \lVert \xi - x_i^* \rVert , \qquad (6)$$

$$\lVert J^m \rVert_2 \;\leq\; 2\, \beta\, N\, M^2\, (N-1)\, \exp\!\left(-\,\beta \left( \Delta_i - 2 \max\{ \lVert \xi - x_i \rVert, \lVert x_i^* - x_i \rVert \}\, M \right) \right) . \qquad (7)$$

For given $\epsilon$ and sufficiently large $\Delta_i$, we have $\lVert f(\xi) - x_i^* \rVert < \epsilon$, that is, retrieval with one update. See proof in appendix Theorem A8.

At the same time, the retrieval error decreases exponentially with the separation $\Delta_i$.

Theorem 5 (Exponentially Small Retrieval Error). The retrieval error $\lVert f(\xi) - x_i \rVert$ of pattern $x_i$ is bounded by

$$\lVert f(\xi) - x_i \rVert \;\leq\; 2\, (N-1)\, \exp\!\left(-\,\beta \left( \Delta_i - 2 \max\{ \lVert \xi - x_i \rVert, \lVert x_i^* - x_i \rVert \}\, M \right) \right) M \qquad (8)$$

and for $\lVert x_i - x_i^* \rVert \leq \frac{1}{2 \beta M}$ together with $\lVert x_i - \xi \rVert \leq \frac{1}{2 \beta M}$ by

$$\lVert x_i - x_i^* \rVert \;\leq\; 2\, e\, (N-1)\, M\, \exp(-\,\beta\, \Delta_i) . \qquad (9)$$

See proof in appendix Theorem A9.

Metastable states and one global fixed point. So far, we considered patterns $x_i$ that are well separated and the iteration converges to a fixed point which is near a pattern $x_i$. If no pattern $x_i$ is well separated from the others, then the iteration converges to a global fixed point close to the arithmetic mean of the vectors. In this case the softmax vector $p$ is close to uniform, that is, $p_i = 1/N$. If some vectors are similar to each other and well separated from all other vectors, then a metastable state near the similar vectors exists. Iterations that start near the metastable state converge to this metastable state, also if initialized by one of the similar patterns. For convergence proofs to one global fixed point and to metastable states see appendix Lemma A7 and Lemma A12, respectively.

Hopfield update rule is attention of the transformer. The Hopfield network update rule is the attention mechanism used in transformer and BERT models (see Fig. 1). To see this, we assume $N$ stored (key) patterns $y_i$ and $S$ state (query) patterns $r_i$ that are mapped to the Hopfield space of dimension $d_k$. We set $x_i = W_K^T y_i$, $\xi_i = W_Q^T r_i$, and multiply the result of our update rule with $W_V$. The matrices $Y = (y_1, \ldots, y_N)^T$ and $R = (r_1, \ldots, r_S)^T$ combine the $y_i$ and $r_i$ as row vectors. We define the matrices $X^T = K = Y W_K$, $\Xi^T = Q = R W_Q$, and $V = Y W_K W_V = X^T W_V$, where $W_K \in \mathbb{R}^{d_y \times d_k}$, $W_Q \in \mathbb{R}^{d_r \times d_k}$, $W_V \in \mathbb{R}^{d_k \times d_v}$. If $\beta = 1/\sqrt{d_k}$ and the softmax $\in \mathbb{R}^N$ is changed to a row vector, we obtain for the update rule Eq. (3) multiplied by $W_V$:

$$Z \;=\; \mathrm{softmax}\!\left( \frac{1}{\sqrt{d_k}}\, Q K^T \right) V \;=\; \mathrm{softmax}\!\left( \beta\, R\, W_Q W_K^T Y^T \right) Y\, W_K W_V . \qquad (10)$$

The left part of Eq. (10) is the transformer attention. In the transformer self-attention $R = Y$, and $W_K W_V$ is replaced by just $W_V$.
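To make the correspondence in Eq. (10) concrete, the following is a minimal NumPy sketch (not from the paper's codebase; all variable names and sizes are illustrative) that applies the update rule Eq. (3) in the associative space and checks that, after projection with $W_V$, it coincides with transformer-style attention:

```python
import numpy as np

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)  # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

rng = np.random.default_rng(0)
d_y, d_r, d_k, d_v, N, S = 16, 16, 8, 8, 10, 3   # illustrative sizes
Y = rng.normal(size=(N, d_y))      # N stored (key) patterns as rows
R = rng.normal(size=(S, d_r))      # S state (query) patterns as rows
W_K = rng.normal(size=(d_y, d_k))  # key projection
W_Q = rng.normal(size=(d_r, d_k))  # query projection
W_V = rng.normal(size=(d_k, d_v))  # value projection
beta = 1.0 / np.sqrt(d_k)

# Hopfield view: columns of X are the stored patterns in the associative space,
# each column of Xi is one query; one update of Eq. (3) is xi_new = X softmax(beta X^T xi).
X = (Y @ W_K).T                    # X = K^T, shape (d_k, N)
Xi = (R @ W_Q).T                   # shape (d_k, S), one column per query
Xi_new = X @ softmax(beta * X.T @ Xi, axis=0)   # update rule applied to all queries at once
Z_hopfield = Xi_new.T @ W_V        # project the retrieved patterns with W_V

# Transformer view of the same computation (left part of Eq. (10)).
Q, K, V = R @ W_Q, Y @ W_K, Y @ W_K @ W_V
Z_attention = softmax(Q @ K.T / np.sqrt(d_k), axis=1) @ V

print(np.allclose(Z_hopfield, Z_attention))   # True
```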
Besides the attention mechanism, Hopfield networks allow for other functionalities in deep network architectures, which we introduce via specific layers in the next section. The right part of Eq. (10) serves to explain these specific layers.

3 NEW HOPFIELD LAYERS FOR DEEP LEARNING

Modern Hopfield networks with continuous states can be integrated into deep learning architectures, because they are continuous and differentiable with respect to their parameters. Furthermore, they typically retrieve patterns with one update, which conforms to deep learning layers that are activated only once. For these two reasons, modern Hopfield networks can serve as specialized layers in deep networks to equip them with memories. Below, we introduce three types of Hopfield layers: Hopfield, HopfieldPooling, and HopfieldLayer.

Possible applications of Hopfield layers in deep network architectures comprise: multiple instance learning (MIL) (Dietterich et al., 1997), processing of and learning with point sets (Qi et al., 2017a;b; Xu et al., 2018), set-based and permutation invariant learning (Guttenberg et al., 2016; Ravanbakhsh et al., 2016; Zaheer et al., 2017; Korshunova et al., 2018; Ilse et al., 2018; Zhai et al., 2020), attention-based learning (Vaswani et al., 2017a), deep learning with associative memories (Graves et al., 2014; Weston et al., 2014; Ba et al., 2016a;b; Schlag & Schmidhuber, 2018; Schlag et al., 2019), natural language processing (Devlin et al., 2018; 2019), sequence analysis and time series prediction (Hochreiter, 1991; Hochreiter & Schmidhuber, 1997; Cho et al., 2014), and storing and retrieving reference data, e.g. the training data, outliers, high error data points, prototypes or cluster centers, support vectors & border cases. Hopfield network layers can substitute existing layers like pooling layers, permutation equivariant layers (Guttenberg et al., 2016; Ravanbakhsh et al., 2016), GRU (Cho et al., 2014) & LSTM (Hochreiter, 1991; Hochreiter & Schmidhuber, 1997) layers, and attention layers (Vaswani et al., 2017a;b; Bahdanau et al., 2014).

Figure 2: Left: A standard deep network with layers propagates either a vector or a set of vectors from the input to the output. Right: A deep network whose layers are equipped with associative memories via Hopfield layers.

Types of neural networks. We consider two types of feedforward neural networks: (I) Neural networks that propagate an activation vector from the input layer to the output layer. Examples are fully-connected or convolutional neural networks. (II) Neural networks that propagate a set of vectors from the input layer to the output layer, where each layer applies the same operation to each element of the set and the output layer may summarize the set via a vector. An example is the transformer. Recurrent neural networks are networks of type (I), which are iteratively applied to a set or a sequence, where intermediate results are stored in a memory and can be reused. Modern Hopfield networks can be integrated into both types of neural network architectures and make it possible to equip each of their layers with associative memories. See Fig. 2.

Types of new Hopfield layers. We introduce three types of Hopfield layers: Hopfield, HopfieldPooling, and HopfieldLayer.
The continuous modern Hopfield network results in a plethora of new deep learning architectures, since we can (a) propagate sets or single vectors, (b) propagate queries, stored patterns, or both, (c) learn static queries or stored patterns, (d) fill the memory with training sets, prototypes, or external data. Next, we provide three useful types of Hopfield layers. The implementation is available at: https://github.com/ml-jku/hopfield-layers

(1) Layer Hopfield for networks that propagate sets of vectors via state (query) patterns $R$ and stored (key) patterns $Y$. The layer Hopfield is the realization of formula (10). The memory of the Hopfield layer can be filled with sets from the input or previous layers, see Fig. 3. The memory may be filled with a reference set, which is covered by providing the reference set as additional input. Thus, the layer Hopfield allows the association of two sets. A prominent example of a layer that performs such an association is the transformer attention mechanism, which associates keys and queries, e.g. two point sets that have to be compared. This layer allows for different kinds of sequence-to-sequence learning, point set operations, and retrieval-based methods. The layer Hopfield with skip connections in a ResNet architecture is identical to the popular transformer and BERT models. In the experiments, we analyzed these Hopfield layers in transformer architectures. The layer Hopfield is also used in our experiments in which we compare machine learning methods on small datasets of the UCI benchmark collection.

Figure 3: The layer Hopfield allows the association of two sets $R$ and $Y$. It can be integrated into deep networks that propagate sets of vectors. The Hopfield memory is filled with a set from either the input or previous layers. The output is a set of vectors $Z$.

(2) Layer HopfieldPooling for networks that propagate patterns via the stored (key) patterns $Y$. This layer performs a pooling or summarization of sets $Y$ obtained from queries in previous layers or the input. The memory of the HopfieldPooling layer is filled with sets from the input or previous layers. The HopfieldPooling layer uses the queries to search for patterns in the memory, the stored set. If more patterns are similar to a particular search pattern (query), then the result is an average over these patterns. The state (query) patterns of each layer are static and can be learned. Multiple queries supply a set to the next layer, where each query corresponds to one element of the set. Thus, the layer HopfieldPooling enables fixed pattern search, pooling operations, and memories like LSTMs or GRUs. The static pattern functionality is typically needed if particular patterns must be identified in the data. A single HopfieldPooling layer allows for multiple instance learning. Static state (query) patterns together with position encoding in the keys allow for performing pooling operations. The position encoding can be two-dimensional, where standard convolutional filters can be constructed as in convolutional neural networks (CNNs). The HopfieldPooling layer can substitute pooling, averaging, LSTM, and permutation equivariant layers. See Fig. 4. The layer HopfieldPooling is used for experiments with multiple instance learning tasks, e.g. for immune repertoire classification in the experiments.

Figure 4: The layer HopfieldPooling enables pooling or summarization of sets, which are obtained from the input or from previous layers.
The input $Y$ can be either a set or a sequence. The query patterns of each layer are static and can be learned. The output is a set of vectors $Z$, where the number of vectors equals the number of query patterns. The layer HopfieldPooling can realize multiple instance learning.

(3) Layer HopfieldLayer for networks that propagate a vector or a set of vectors via state (query) patterns $R$. The queries $R$ can be input vectors or queries that are computed from the output of previous layers. The memory of the HopfieldLayer layer is filled with a fixed set, which can be the training set, a reference set, a prototype set, or a learned set (a learned matrix). The stored (key) patterns are static and can be learned. If the training set is stored in the memory, then each layer constructs a new set of queries based on the query results of previous layers. The stored patterns can be initialized by the training set or a reference set and then learned, in which case they deviate from the training set. The stored patterns can be interpreted as weights from the state (query) to hidden neurons that have a softmax activation function (Krotov & Hopfield, 2020). The layer HopfieldLayer can substitute a fully connected layer, see Fig. 5. A single HopfieldLayer layer also allows for approaches similar to support vector machines (SVMs), approaches similar to k-nearest neighbor, approaches similar to learning vector quantization, and pattern search. For classification, the raw data $y_i = (z_i, t_i)$ can be the concatenation of input $z_i$ and target $t_i$. In this case, the matrices $W_K$ and $W_V$ can be designed such that inside the softmax the input $z_i$ is used and outside the softmax the target $t_i$. Thus, the softmax provides a weighted average of the target vectors based on the similarity between the query and the inputs. Also SVM models, k-nearest neighbor, and learning vector quantization can be considered as weighted averages of the targets. The encoder-decoder attention layers of transformers are HopfieldLayer layers, where the memory is filled with the encoder output set. In our experiments with the drug design benchmark datasets, the layer HopfieldLayer has been applied and compared to other machine learning methods.

Figure 5: The layer HopfieldLayer enables multiple queries of the training set, a reference set, a prototype set, or a learned set (a learned matrix). The queries for each layer are computed from the results of previous layers. The input is a set of vectors $R$. The output is also a set of vectors $Z$, where the number of output vectors equals the number of input vectors. The layer HopfieldLayer can realize SVM models, k-nearest neighbor, and LVQ.

Additional functionality of new Hopfield layers. The insights about energy, convergence, and storage properties provide all new Hopfield layers with additional functionalities: i) multiple updates to control how precisely fixed points are found, without additional parameters needed. ii) variable $\beta$ to determine the kind of fixed points, such as the size of metastable states. The variable $\beta$ controls over how many patterns the update averages (a small numerical sketch of this effect follows after this list). As observed in the experiments, the variable is relevant in combination with the learning rate to steer the learning dynamics. The parameter $\beta$ governs the fixed point dynamics and can be learned, too. iii) controlling the storage capacity via the dimension of the associative space. The storage capacity can be relevant for tasks with a huge number of instances, as in the immune repertoire classification experiment. iv) pattern normalization controls, like the layernorm, the fixed point dynamics by the norm and shift of the patterns. For more details see appendix, Section A.6.
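The following toy sketch (assumed, illustrative data and $\beta$ values; not code from the paper) illustrates point ii): how $\beta$ controls over how many patterns the update effectively averages, i.e. the size of the resulting metastable state.

```python
import numpy as np

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def hopfield_update(X, xi, beta, n_steps=1):
    """Apply xi_new = X softmax(beta X^T xi) n_steps times; X has patterns as columns."""
    for _ in range(n_steps):
        xi = X @ softmax(beta * X.T @ xi)
    return xi

rng = np.random.default_rng(1)
d, N = 32, 6
X = rng.normal(size=(d, N))
X /= np.linalg.norm(X, axis=0)                   # patterns on the unit sphere
X[:, 1] = X[:, 0] + 0.1 * rng.normal(size=d)     # make pattern 1 similar to pattern 0
X[:, 1] /= np.linalg.norm(X[:, 1])
xi = X[:, 0] + 0.1 * rng.normal(size=d)          # query near pattern 0

for beta in (0.1, 4.0, 50.0):
    p = softmax(beta * X.T @ hopfield_update(X, xi, beta, n_steps=3))
    # inverse participation ratio: effective number of patterns averaged over
    print(f"beta={beta:5.1f}  effective #patterns averaged: {1.0 / np.sum(p**2):.2f}")
# small beta:  p stays close to uniform -> global fixed point, averages roughly all N patterns
# medium beta: metastable state averaging roughly the two similar patterns
# large beta:  p concentrates, the state retrieves essentially a single pattern
```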
4 EXPERIMENTS

We show that our proposed Hopfield layers can be applied successfully to a wide range of tasks. The tasks stem from natural language processing and comprise multiple instance learning problems, a collection of small classification tasks, and drug design problems.

Analysis of transformer and BERT models. Transformer and BERT models can be implemented by the layer Hopfield. The kind of fixed point of the Hopfield net is determined by how the pattern $x_i$ is separated from other patterns: (a) a global fixed point: no separation of a pattern from the others, (b) a fixed point close to a single pattern: the pattern is separated from other patterns, (c) metastable state: some patterns are similar to each other and well separated from all other vectors. We observed that the attention heads of transformer and BERT models are predominantly in metastable states, which are categorized into four classes: (I) averaging over a very large number of patterns (very large metastable state or fixed point (a)), (II) averaging over a large number of patterns (large metastable state), (III) averaging over a medium number of patterns (medium metastable state), (IV) averaging over a small number of patterns (small metastable state or fixed point (c)). For analyzing the metastable states, we calculated the minimal number $k$ of softmax values required to sum up to 0.90 (a minimal sketch of this computation is given below). Hence, $k$ indicates the size of a metastable state. To determine in which of the four classes a head is mainly operating, we computed the distribution of $k$ across sequences. Concretely, for $N$ tokens and for $k$ as the median of the distribution, a head is classified as operating in class (I) if $N/2 < k$, as operating in class (II) if $N/8 < k \leq N/2$, as operating in class (III) if $N/32 < k \leq N/8$, and as operating in class (IV) if $k \leq N/32$. We analyzed pre-trained BERT models from Hugging Face Inc. (Wolf et al., 2019) according to these operating classes. In Fig. A.3 in the appendix the distribution of the pre-trained bert-base-cased model is depicted (for other models see appendix Section A.5.1.4). Operating classes (II) (large metastable states) and (IV) (small metastable states) are often observed in the middle layers. Operating class (I) (averaging over a very large number of patterns) is abundant in lower layers. Similar observations have been reported in other studies (Toneva & Wehbe, 2019a;b; Tay et al., 2020). Operating class (III) (medium metastable states) is predominant in the last layers.
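As a concrete illustration of the head analysis above, the following sketch (hypothetical variable names and toy data; thresholds $N/2$, $N/8$, $N/32$ as described in the text) computes the metastable-state size $k$ of attention distributions and maps the median $k$ of a head to the operating classes (I)-(IV):

```python
import numpy as np

def metastable_size(p, mass=0.90):
    """Minimal number k of the largest softmax values needed to reach `mass`."""
    p_sorted = np.sort(p)[::-1]
    return int(np.searchsorted(np.cumsum(p_sorted), mass) + 1)

def head_class(k_median, n_tokens):
    """Map the median k of a head to the operating classes (I)-(IV)."""
    if k_median > n_tokens / 2:
        return "I"
    if k_median > n_tokens / 8:
        return "II"
    if k_median > n_tokens / 32:
        return "III"
    return "IV"

# toy example: attention weights of one head over N = 64 tokens for 100 sequences
rng = np.random.default_rng(0)
N = 64
attn = rng.dirichlet(alpha=np.full(N, 0.3), size=100)   # 100 rows of softmax values
ks = [metastable_size(row) for row in attn]
print(head_class(np.median(ks), N))
```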
Multiple Instance Learning Datasets. For multiple instance learning (MIL) (Dietterich et al., 1997), we integrate our new Hopfield network via the layer HopfieldPooling into deep learning architectures. Recently, deep learning methods have been applied to MIL problems (Ilse et al., 2018), but the performance on many datasets still leaves room for improvement. Thus, MIL datasets still pose an interesting challenge, for which Hopfield layers equipped with memory are a promising approach.

Immune Repertoire Classification. The first MIL task is immune repertoire classification, where a deep learning architecture with HopfieldPooling (DeepRC) was used (Widrich et al., 2020a;b). Immune repertoire classification (Emerson et al., 2017) typically requires extracting a few patterns from a large set of sequences, the repertoire, that are indicative of the respective immune status. The datasets contain 300,000 instances per immune repertoire, which represents one of the largest multiple instance learning experiments ever conducted (Carbonneau et al., 2018). Most MIL methods fail due to the large number of instances. This experiment comprises real-world and simulated datasets. Simulated datasets are generated by implanting sequence motifs (Akbar et al., 2019; Weber et al., 2020) with low frequency into simulated or experimentally-observed immune receptor sequences. The performance of DeepRC was compared with other machine learning methods: (i) known motif, (ii) SVM using k-mers and MinMax or Jaccard kernel, (iii) k-nearest neighbor (KNN) with k-mers, (iv) logistic regression with k-mers, (v) burden test with k-mers, and (vi) logistic multiple instance learning (lMIL). On the real-world dataset DeepRC achieved an AUC of 0.832 ± 0.022, followed by the SVM with MinMax kernel (AUC 0.825 ± 0.022) and the burden test with an AUC of 0.699 ± 0.041. Across datasets, DeepRC outperformed all competing methods with respect to average AUC (Widrich et al., 2020a;b).

| Method | Tiger | Fox | Elephant | UCSB |
| --- | --- | --- | --- | --- |
| Hopfield (ours) | 91.3 ± 0.5 | 64.05 ± 0.4 | 94.9 ± 0.3 | 89.5 ± 0.8 |
| Path encoding (Küçükaşcı & Baydoğan, 2018) | 91.0 ± 1.0 (a) | 71.2 ± 1.4 (a) | 94.4 ± 0.7 (a) | 88.0 ± 2.2 (a) |
| MInD (Cheplygina et al., 2016) | 85.3 ± 1.1 (a) | 70.4 ± 1.6 (a) | 93.6 ± 0.9 (a) | 83.1 ± 2.7 (a) |
| MILES (Chen et al., 2006) | 87.2 ± 1.7 (b) | 73.8 ± 1.6 (a) | 92.7 ± 0.7 (a) | 83.3 ± 2.6 (a) |
| APR (Dietterich et al., 1997) | 77.8 ± 0.7 (b) | 54.1 ± 0.9 (b) | 55.0 ± 1.0 (b) | — |
| Citation-kNN (Wang, 2000) | 85.5 ± 0.9 (b) | 63.5 ± 1.5 (b) | 89.6 ± 0.9 (b) | 70.6 ± 3.2 (a) |
| DD (Maron & Lozano-Pérez, 1998) | 84.1 (b) | 63.1 (b) | 90.7 (b) | — |

Table 1: Results for MIL datasets Tiger, Fox, Elephant, and UCSB Breast Cancer in terms of AUC. Results for all methods except the first are taken from either (a) (Küçükaşcı & Baydoğan, 2018) or (b) (Carbonneau et al., 2016), depending on which reports the higher AUC.

MIL benchmark datasets. We apply Hopfield layers to further MIL datasets (Ilse et al., 2018; Küçükaşcı & Baydoğan, 2018; Cheplygina et al., 2016): Elephant, Fox and Tiger for image annotation (Andrews et al., 2003). These datasets consist of color images from the Corel dataset that have been preprocessed and segmented. An image consists of a set of segments (or blobs), each characterized by color, texture and shape descriptors. The datasets have 100 positive and 100 negative example images. The latter have been randomly drawn from a pool of photos of other animals. Elephant comprises 1,391 instances and 230 features, Fox 1,320 instances and 230 features, and Tiger has 1,220 instances and 230 features. Furthermore, we use the UCSB breast cancer classification (Kandemir et al., 2014) dataset, which consists of 2,002 instances across 58 input objects. An instance represents a patch of a histopathological image of cancerous or normal tissue. The layer HopfieldPooling is used, which allows for computing a per-input-object representation by extracting an average of instances that are indicative for one of the two classes. The input to the layer HopfieldPooling is a set of embedded instances $Y$. A trainable but fixed state (query) pattern $Q$ is used for averaging over class-indicative instances. This averaging enables a compression of variable-sized bags to a fixed-sized representation to discriminate the bags. More details are given in appendix Sec. A.5.2.
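The following is a from-scratch sketch of the HopfieldPooling idea described above (a trainable but fixed query pattern attending over a bag of instance embeddings). It is not the interface of the published hopfield-layers package; all names, dimensions, and the masking scheme are assumptions for illustration.

```python
import torch
import torch.nn as nn

class HopfieldPoolingSketch(nn.Module):
    """Minimal sketch: learned static (query) patterns attend over a bag of
    instance embeddings Y and return one pooled vector per query pattern."""

    def __init__(self, d_input, d_hopfield=64, n_queries=1, beta=None):
        super().__init__()
        self.W_K = nn.Linear(d_input, d_hopfield, bias=False)       # key projection
        self.W_V = nn.Linear(d_input, d_hopfield, bias=False)       # value projection
        self.Q = nn.Parameter(torch.randn(n_queries, d_hopfield))   # static, learnable queries
        self.beta = beta if beta is not None else 1.0 / d_hopfield ** 0.5

    def forward(self, Y, mask=None):
        # Y: (batch, bag_size, d_input); mask: (batch, bag_size), True for real instances
        K, V = self.W_K(Y), self.W_V(Y)                              # (batch, bag, d_hopfield)
        scores = self.beta * self.Q @ K.transpose(1, 2)              # (batch, n_queries, bag)
        if mask is not None:
            scores = scores.masked_fill(~mask[:, None, :], float("-inf"))
        p = torch.softmax(scores, dim=-1)                            # average over indicative instances
        return p @ V                                                 # (batch, n_queries, d_hopfield)

# usage: pool a bag of 30 instance embeddings into a single representation
pool = HopfieldPoolingSketch(d_input=128, d_hopfield=64, n_queries=1)
bag = torch.randn(4, 30, 128)
pooled = pool(bag).squeeze(1)      # (4, 64); a separate classifier head would map this to a bag label
```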
Our approach has set a new state-of-the-art and has outperformed other methods (Küçükaşcı & Baydoğan, 2018; Carbonneau et al., 2016) on the datasets Tiger, Elephant and UCSB Breast Cancer (see Table 1).

UCI Benchmark Collection. So far, deep learning has struggled with small datasets. However, Hopfield networks are promising for handling small datasets, since they can store the training data points or their representations to perform similarity-based, nearest neighbor, or learning vector quantization methods. Therefore, we test the Hopfield layer Hopfield on the small datasets of the UC Irvine (UCI) Machine Learning Repository that have been used to benchmark supervised learning methods (Fernández-Delgado et al., 2014; Wainberg et al., 2016; Khan et al., 2018) and also feed-forward neural networks (Klambauer et al., 2017a; Wu et al., 2018), where our Hopfield networks could exploit their memory. The 121 datasets in the collection vary strongly with respect to their size, number of features, and difficulty (Fernández-Delgado et al., 2014), such that they have been divided into 75 small datasets with fewer than 1,000 samples and 45 large datasets with 1,000 or more samples in Klambauer et al. (2017a).

| Method | avg. rank diff. | p-value |
| --- | --- | --- |
| Hopfield (ours) | 3.92 | — |
| SVM | 3.23 | 0.15 |
| SNN | 2.85 | 0.10 |
| Random Forest | 2.79 | 0.05 |
| ... | ... | ... |
| Stacking | 8.73 | 1.2e-11 |

Table 2: Results on 75 small datasets of the UCI benchmarks given as difference to average rank.

On the 75 small datasets, Random Forests (RFs) and Support Vector Machines (SVMs) are highly accurate, whereas on the large datasets, deep learning methods and neural networks are in the lead (Klambauer et al., 2017a;b; Wu et al., 2018). We applied a modern Hopfield network via the layer HopfieldLayer, where a self-normalizing net (SNN) maps the input vector to $Y$ and $R$. The output $Z$ of HopfieldLayer enters a softmax output. We compared our modern Hopfield networks against deep learning methods (e.g. SNNs, ResNet), RFs, SVMs, boosting, bagging, and many other machine learning methods of Fernández-Delgado et al. (2014). Since for each method multiple variants and implementations had been included, we used method groups and representatives as defined by Klambauer et al. (2017a). For each dataset, a ranking of the methods was calculated, which is summarized in Table 2. We found that Hopfield networks outperform all other methods on the small datasets, setting a new state-of-the-art for 10 datasets. The difference is significant except for the first three runner-up methods (Wilcoxon signed rank test). See appendix Section A.5.3 for details.

Drug Design Benchmark Datasets. We test the Hopfield layer HopfieldLayer on four drug design datasets. These datasets represent four main areas of modeling tasks in drug design, concretely to develop accurate models for predicting a) new anti-virals (HIV) by the Drug Therapeutics Program (DTP) AIDS Antiviral Screen, b) new protein inhibitors, concretely human β-secretase (BACE) inhibitors by Subramanian et al. (2016), c) metabolic effects such as blood-brain barrier permeability (BBBP) (Martins et al., 2012), and d) side effects of a chemical compound from the Side Effect Resource (SIDER) (Kuhn et al., 2016). We applied the Hopfield layer HopfieldLayer, where the training data is used as stored patterns $Y$, the input vector as state pattern $R$, and the corresponding training labels are used to project the output of the Hopfield layer via $Y W_V$ (a minimal sketch of this construction follows below).
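As a sketch of the HopfieldLayer construction just described (training inputs inside the softmax, training targets outside of it), the following minimal example implements the resulting softmax-weighted average of targets, i.e. a differentiable nearest-neighbour-style predictor; all names and the one-hot target encoding are illustrative assumptions.

```python
import numpy as np

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def hopfield_layer_predict(Z_train, T_train, R, beta=8.0):
    """Sketch of a HopfieldLayer used as a similarity-based classifier:
    training inputs Z_train act as stored (key) patterns inside the softmax,
    training targets T_train as values outside of it (a soft nearest neighbour)."""
    scores = beta * R @ Z_train.T                # (n_queries, n_train) similarities
    return softmax(scores, axis=-1) @ T_train    # softmax-weighted average of the targets

# toy usage with one-hot targets: each row of the prediction sums to one
rng = np.random.default_rng(0)
Z_train = rng.normal(size=(100, 16))                      # stored training inputs
T_train = np.eye(3)[rng.integers(0, 3, size=100)]         # one-hot training labels
R = rng.normal(size=(5, 16))                              # query inputs
print(hopfield_layer_predict(Z_train, T_train, R).round(2))
```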
Our architecture with HopfieldLayer has reached state-of-the-art for predicting side effects on SIDER (0.672 ± 0.019) as well as for predicting β-secretase inhibition on BACE (0.902 ± 0.023). For details, see Table A.5 in the appendix.

Conclusion. We have introduced a modern Hopfield network with continuous states and the corresponding new update rule. This network can store exponentially many patterns, retrieves patterns with one update, and has exponentially small retrieval errors. We analyzed the attention heads of BERT models. The new modern Hopfield networks have been integrated into deep learning architectures as layers to allow the storage of and access to raw input data, intermediate results, or learned prototypes. These Hopfield layers enable new ways of deep learning, beyond fully-connected, convolutional, or recurrent networks, and provide pooling, memory, association, and attention mechanisms. Hopfield layers that equip neural network layers with memories improved state-of-the-art in three out of four considered multiple instance learning problems and on immune repertoire classification, and on two drug design datasets. They yielded the best results among different machine learning methods on the UCI benchmark collections of small classification tasks.

ACKNOWLEDGMENTS

The ELLIS Unit Linz, the LIT AI Lab and the Institute for Machine Learning are supported by the Land Oberösterreich, the LIT grants DeepToxGen (LIT-2017-3-YOU-003) and AI-SNN (LIT-2018-6-YOU-214), the Medical Cognitive Computing Center (MC3), Janssen Pharmaceutica, UCB Biopharma, Merck Group, Audi.JKU Deep Learning Center, Audi Electronic Venture GmbH, TGW, Primal, S3AI (FFG-872172), Silicon Austria Labs (SAL), Anyline, FILL, Enlite AI, Google Brain, ZF Friedrichshafen AG, Robert Bosch GmbH, TÜV Austria, DCS, and the NVIDIA Corporation. IARAI is supported by Here Technologies.

This appendix consists of six sections (A.1–A.6). Section A.1 introduces the new modern Hopfield network with continuous states and its update rule. Furthermore, Section A.1 provides a thorough and profound theoretical analysis of this new Hopfield network. Section A.2 provides the mathematical background for Section A.1. Section A.3 reviews the binary modern Hopfield networks of Krotov & Hopfield. Section A.4 shows that the Hopfield update rule is the attention mechanism of the transformer. Section A.5 gives details on the experiments. Section A.6 describes the PyTorch implementation of layers based on the new Hopfield networks and how to use them.

CONTENTS OF THE APPENDIX

A.1 Continuous State Modern Hopfield Networks (A New Concept)
  A.1.1 Introduction
  A.1.2 New Energy Function
  A.1.3 New Update Rule
  A.1.4 Global Convergence of the Update Rule
  A.1.5 Local Convergence of the Update Rule: Fixed Point Iteration
  A.1.6 Properties of Fixed Points Near Stored Pattern
  A.1.7 Learning Associations
  A.1.8 Infinite Many Patterns and Forgetting Patterns
  A.1.9 Number of Spurious States
A.2 Properties of Softmax, Log-Sum-Exponential, Legendre Transform, Lambert W Function
A.3 Modern Hopfield Networks: Binary States (Krotov and Hopfield)
  A.3.1 Modern Hopfield Networks: Introduction
  A.3.2 Energy and Update Rule for Binary Modern Hopfield Networks
A.4 Hopfield Update Rule is Attention of The Transformer
A.5 Experiments
  A.5.1 Experiment 1: Attention in Transformers described by Hopfield dynamics
  A.5.2 Experiment 2: Multiple Instance Learning Datasets
  A.5.3 Experiment 3: Classification on Small UCI Benchmark Datasets
  A.5.4 Experiment 4: Drug Design Benchmark Datasets
A.6 PyTorch Implementation of Hopfield Layers
  A.6.1 Introduction
  A.6.2 Functionality
  A.6.3 Usage

LIST OF THEOREMS

A1 Theorem (Global Convergence (Zangwill): Energy)
A2 Theorem (Global Convergence: Stationary Points)
A3 Theorem (Storage Capacity (M=2): Placed Patterns)
A4 Theorem (Storage Capacity (M=5): Placed Patterns)
A5 Theorem (Storage Capacity (Main): Random Patterns)
A6 Theorem (Storage Capacity (d computed): Random Patterns)
A7 Theorem (Storage Capacity (expected separation): Random Patterns)
A8 Theorem (Pattern Retrieval with One Update)
A9 Theorem (Exponentially Small Retrieval Error)
A10 Theorem (Storage Capacity for Binary Modern Hopfield Nets (Demircigil et al. 2017))

LIST OF DEFINITIONS

A1 Definition (Softmax)
A2 Definition (Log-Sum-Exp Function)
A3 Definition (Convex Conjugate)
A4 Definition (Legendre Transform)
A5 Definition (Epi-Sum)
A6 Definition (Lambert Function)

LIST OF FIGURES

A.1 The three cases of fixed points
A.2 From binary Hopfield network to transformer
A.4 Ridge plots of the distribution of counts
A.5 Change of count density during training
A.6 Attentions of Gaussian averaging heads
A.7 A flowchart of the Hopfield layer

LIST OF TABLES

A.1 Results of immune repertoire classification across all datasets
A.2 Hyperparameter selection for MIL datasets
A.3 Hyperparameter selection for small UCI benchmark datasets
A.4 Hyperparameter selection for drug design datasets
A.5 Results on drug design benchmark datasets

A.1 CONTINUOUS STATE MODERN HOPFIELD NETWORKS (A NEW CONCEPT)

A.1.1 INTRODUCTION

In Section A.1 our new modern Hopfield network is introduced. In Subsection A.1.2 we present the new energy function. Then, in Subsection A.1.3, our new update rule is introduced. In Subsection A.1.4, we show that this update rule ensures global convergence. We show that all the limit points of any sequence generated by the update rule are the stationary points (local minima or saddle points) of the energy function. In Section A.1.5, we consider the local convergence of the update rule and see that patterns are retrieved with one update. In Subsection A.1.6, we consider the properties of the fixed points that are associated with the stored patterns. In Subsection A.1.6.1, we show that exponentially many patterns can be stored. The main result is given in Theorem A5: For random patterns on a sphere we can store and retrieve exponentially (in the dimension of the Hopfield space) many patterns. Subsection A.1.6.2 reports that patterns are typically retrieved with one update step and that the retrieval error is exponentially small.

In Subsection A.1.7, we consider how associations for the new Hopfield networks can be learned. In Subsection A.1.7.2, we analyze whether the association is learned directly by a bilinear form. In Subsection A.1.7.3, we analyze whether stored patterns and query patterns are mapped to the space of the Hopfield network. Therefore, we treat the architecture of the transformer and BERT. In Subsection A.1.8, we introduce a temporal component into the new Hopfield network that leads to a forgetting behavior. The forgetting allows us to treat infinite memory capacity in Subsection A.1.8.1. In Subsection A.1.8.2, we consider the controlled forgetting behavior.

In Section A.2, we provide the mathematical background that is needed for our proofs. In particular, we give lemmas on properties of the softmax, the log-sum-exponential, the Legendre transform, and the Lambert W function. In Section A.3, we review the new Hopfield network as introduced by Krotov and Hopfield in 2016. However, in contrast to our new Hopfield network, the Hopfield network of Krotov and Hopfield is binary, that is, a network with binary states. In Subsection A.3.1, we give an introduction to neural networks equipped with associative memories and new Hopfield networks. In Subsection A.3.1.1, we discuss neural networks that are enhanced by an additional external memory and by attention mechanisms. In Subsection A.3.1.2, we give an overview of the modern Hopfield networks. Finally, in Subsection A.3.2, we present the energy function and the update rule for the modern, binary Hopfield networks.

A.1.2 NEW ENERGY FUNCTION

We have patterns $x_1, \ldots, x_N$ that are represented by the matrix

$$X \;=\; (x_1, \ldots, x_N) . \qquad (11)$$

The largest norm of a pattern is

$$M \;=\; \max_i\, \lVert x_i \rVert . \qquad (12)$$

The query or state of the Hopfield network is $\xi$. The energy function E in the new type of Hopfield models of Krotov and Hopfield is $\mathrm{E} = -\sum_{i=1}^N F(\xi^T x_i)$ for binary patterns $x_i$ and binary state $\xi$ with interaction function $F(x) = x^n$, where $n = 2$ gives the classical Hopfield model (Krotov & Hopfield, 2016). The storage capacity is proportional to $d^{n-1}$ (Krotov & Hopfield, 2016). This model was generalized by Demircigil et al. (Demircigil et al., 2017) to exponential interaction functions $F(x) = \exp(x)$, which gives the energy $\mathrm{E} = -\exp(\mathrm{lse}(1, X^T \xi))$.
This energy leads to an exponential storage capacity of $N = 2^{d/2}$ for binary patterns. Furthermore, with a single update the fixed point is recovered with high probability. See more details in Section A.3.

In contrast to these binary modern Hopfield networks, we focus on modern Hopfield networks with continuous states that can store continuous patterns. We generalize the energy of Demircigil et al. (Demircigil et al., 2017) to continuous states while keeping the lse properties which ensure high storage capacity and fast convergence. Our new energy E for a continuous query or state $\xi$ is defined as

$$\mathrm{E} \;=\; -\,\mathrm{lse}(\beta, X^T \xi) \;+\; \frac{1}{2}\, \xi^T \xi \;+\; \beta^{-1} \ln N \;+\; \frac{1}{2}\, M^2 \qquad (13)$$

$$\;=\; -\,\beta^{-1} \ln\!\left( \sum_{i=1}^N \exp(\beta\, x_i^T \xi) \right) \;+\; \beta^{-1} \ln N \;+\; \frac{1}{2}\, \xi^T \xi \;+\; \frac{1}{2}\, M^2 \qquad (14)$$

$$\;=\; -\,\beta^{-1} \ln\!\left( \frac{1}{N} \sum_{i=1}^N \exp\!\left( -\frac{1}{2}\,\beta\,\left(M^2 - \lVert x_i \rVert^2\right) \right) \exp\!\left( -\frac{1}{2}\,\beta\, \lVert x_i - \xi \rVert^2 \right) \right) . \qquad (15)$$

First let us collect and prove some properties of E. The next lemma gives bounds on the energy E.

Lemma A1. The energy E is larger than or equal to zero:

$$0 \;\leq\; \mathrm{E} . \qquad (16)$$

For $\xi$ in the simplex defined by the patterns, the energy E is upper bounded by:

$$\mathrm{E} \;\leq\; \beta^{-1} \ln N \;+\; \frac{1}{2}\, M^2 , \qquad (17)$$

$$\mathrm{E} \;\leq\; 2\, M^2 . \qquad (18)$$

Proof. We start by deriving the lower bound of zero. The pattern most similar to the query or state $\xi$ is $x_\xi$:

$$x_\xi \;=\; x_k , \qquad k \;=\; \arg\max_i\, \xi^T x_i . \qquad (19)$$

Then

$$\mathrm{E} \;=\; -\,\beta^{-1} \ln\!\left( \sum_{i=1}^N \exp(\beta x_i^T \xi) \right) + \beta^{-1} \ln N + \frac{1}{2}\, \xi^T \xi + \frac{1}{2}\, M^2 \;\geq\; -\,\beta^{-1} \ln\!\left( N \exp(\beta x_\xi^T \xi) \right) + \beta^{-1} \ln N + \frac{1}{2}\, \xi^T \xi + \frac{1}{2}\, M^2$$

$$\;=\; -\,x_\xi^T \xi + \frac{1}{2}\, \xi^T \xi + \frac{1}{2}\, M^2 \;\geq\; -\,x_\xi^T \xi + \frac{1}{2}\, \xi^T \xi + \frac{1}{2}\, x_\xi^T x_\xi \;=\; \frac{1}{2}\, (\xi - x_\xi)^T (\xi - x_\xi) \;=\; \frac{1}{2}\, \lVert \xi - x_\xi \rVert^2 \;\geq\; 0 . \qquad (20)$$

The energy is zero, and therefore the bound attained, if all $x_i$ are equal, that is, $x_i = x$ for all $i$ and $\xi = x$. For deriving upper bounds on the energy E, we require the query $\xi$ to be in the simplex defined by the patterns, that is,

$$\xi \;=\; \sum_{i=1}^N p_i\, x_i , \qquad \sum_{i=1}^N p_i = 1 , \qquad \forall i:\; 0 \leq p_i . \qquad (21)$$

The first upper bound is

$$\mathrm{E} \;=\; -\,\mathrm{lse}(\beta, X^T \xi) + \frac{1}{2}\, \xi^T \xi + \beta^{-1} \ln N + \frac{1}{2}\, M^2 \;\leq\; -\sum_{i=1}^N p_i\, (x_i^T \xi) + \frac{1}{2}\, \xi^T \xi + \beta^{-1} \ln N + \frac{1}{2}\, M^2 \;=\; -\,\frac{1}{2}\, \xi^T \xi + \beta^{-1} \ln N + \frac{1}{2}\, M^2 \;\leq\; \beta^{-1} \ln N + \frac{1}{2}\, M^2 . \qquad (22)$$

For the first inequality we applied Lemma A19 to $\mathrm{lse}(\beta, X^T \xi)$ with $z = p$, giving

$$\mathrm{lse}(\beta, X^T \xi) \;\geq\; \sum_{i=1}^N p_i\, (x_i^T \xi) \;-\; \beta^{-1} \sum_{i=1}^N p_i \ln p_i \;\geq\; \sum_{i=1}^N p_i\, (x_i^T \xi) , \qquad (23)$$

as the sum involving the logarithm is non-positive. Next we derive the second upper bound, for which we need the mean $m_x$ of the patterns

$$m_x \;=\; \frac{1}{N} \sum_{i=1}^N x_i . \qquad (24)$$

We obtain

$$\mathrm{E} \;\leq\; -\,\frac{1}{N} \sum_{i=1}^N x_i^T \xi + \frac{1}{2}\, \xi^T \xi + \frac{1}{2}\, M^2 \;=\; -\,m_x^T \xi + \frac{1}{2}\, \xi^T \xi + \frac{1}{2}\, M^2 \;\leq\; \lVert m_x \rVert\, \lVert \xi \rVert + \frac{1}{2}\, M^2 + \frac{1}{2}\, M^2 \;\leq\; 2\, M^2 , \qquad (25)$$

where for the first inequality we again applied Lemma A19 with $z = (1/N, \ldots, 1/N)$ and $-\,\beta^{-1} \sum_i (1/N) \ln(1/N) = \beta^{-1} \ln(N)$. This inequality also follows from Jensen's inequality. The second inequality uses the Cauchy-Schwarz inequality. The last inequality uses

$$\lVert \xi \rVert \;\leq\; \sum_i p_i\, \lVert x_i \rVert \;\leq\; \sum_i p_i\, M \;=\; M , \qquad (26)$$

$$\lVert m_x \rVert \;\leq\; \sum_i (1/N)\, \lVert x_i \rVert \;\leq\; \sum_i (1/N)\, M \;=\; M . \qquad (27)$$

A.1.3 NEW UPDATE RULE

We now introduce an update rule for minimizing the energy function E. The new update rule is

$$\xi^{\mathrm{new}} \;=\; X p \;=\; X\, \mathrm{softmax}(\beta X^T \xi) , \qquad (28)$$

where we used

$$p \;=\; \mathrm{softmax}(\beta X^T \xi) . \qquad (29)$$

The new state $\xi^{\mathrm{new}}$ is in the simplex defined by the patterns, no matter what the previous state $\xi$ was. For comparison, the synchronous update rule for the classical Hopfield network with threshold zero is

$$\xi^{\mathrm{new}} \;=\; \mathrm{sgn}(X X^T \xi) . \qquad (30)$$

Therefore, instead of using the vector $X^T \xi$ as in the classical Hopfield network, its softmax version $\mathrm{softmax}(\beta X^T \xi)$ is used. In the next section (Section A.1.4) we show that the update rule Eq. (28) ensures global convergence. We show that all the limit points of any sequence generated by the update rule are the stationary points (local minima or saddle points) of the energy function E. In Section A.1.5 we consider the local convergence of the update rule Eq. (28) and see that patterns are retrieved with one update.
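Before turning to the global convergence proof, the following minimal numerical sketch (assumed toy data; not code from the paper) evaluates the energy of Eq. (13) along the iteration Eq. (28) and illustrates that the update never increases the energy, which is exactly what Theorem A1 below establishes.

```python
import numpy as np

def lse(beta, z):
    return np.log(np.sum(np.exp(beta * z))) / beta

def energy(X, xi, beta):
    """Energy of Eq. (13): E = -lse(beta, X^T xi) + 0.5 xi^T xi + beta^{-1} ln N + 0.5 M^2."""
    N = X.shape[1]
    M = np.max(np.linalg.norm(X, axis=0))
    return -lse(beta, X.T @ xi) + 0.5 * xi @ xi + np.log(N) / beta + 0.5 * M ** 2

def update(X, xi, beta):
    z = beta * X.T @ xi
    p = np.exp(z - z.max()); p /= p.sum()
    return X @ p

# numerical illustration of Theorem A1: the update never increases the energy
rng = np.random.default_rng(0)
d, N, beta = 8, 12, 1.0
X = rng.normal(size=(d, N))
xi = rng.normal(size=d)
energies = []
for _ in range(10):
    energies.append(energy(X, xi, beta))
    xi = update(X, xi, beta)
print(np.all(np.diff(energies) <= 1e-12))   # True: monotonically non-increasing
```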
A.1.4 GLOBAL CONVERGENCE OF THE UPDATE RULE

We are interested in the global convergence, that is, convergence from each initial point, of the iteration

$$\xi^{\mathrm{new}} \;=\; f(\xi) \;=\; X p \;=\; X\, \mathrm{softmax}(\beta X^T \xi) , \qquad (31)$$

where we used

$$p \;=\; \mathrm{softmax}(\beta X^T \xi) . \qquad (32)$$

We defined the energy function

$$\mathrm{E} \;=\; -\,\mathrm{lse}(\beta, X^T \xi) + \frac{1}{2}\, \xi^T \xi + \beta^{-1} \ln N + \frac{1}{2}\, M^2 \qquad (33)$$

$$\;=\; -\,\beta^{-1} \ln\!\left( \sum_{i=1}^N \exp(\beta\, x_i^T \xi) \right) + \beta^{-1} \ln N + \frac{1}{2}\, \xi^T \xi + \frac{1}{2}\, M^2 . \qquad (34)$$

We will show that the update rule in Eq. (31) is the Concave-Convex Procedure (CCCP) for minimizing the energy E. The CCCP is proven to converge globally.

Theorem A1 (Global Convergence (Zangwill): Energy). The update rule Eq. (31) converges globally: For $\xi^{t+1} = f(\xi^t)$, the energy $\mathrm{E}(\xi^t) \to \mathrm{E}(\xi^*)$ for $t \to \infty$ and a fixed point $\xi^*$.

Proof. The Concave-Convex Procedure (CCCP) (Yuille & Rangarajan, 2002; 2003) minimizes a function that is the sum of a concave function and a convex function. CCCP is equivalent to Legendre minimization (Rangarajan et al., 1996; 1999) algorithms (Yuille & Rangarajan, 2003). The Jacobian of the softmax is positive semi-definite according to Lemma A22. The Jacobian of the softmax is the Hessian of the lse, therefore lse is a convex and $-\mathrm{lse}$ a concave function. Therefore, the energy function $\mathrm{E}(\xi)$ is the sum of the convex function $E_1(\xi) = \frac{1}{2}\xi^T\xi + C_1$ and the concave function $E_2(\xi) = -\mathrm{lse}$:

$$\mathrm{E}(\xi) \;=\; E_1(\xi) + E_2(\xi) , \qquad (35)$$

$$E_1(\xi) \;=\; \frac{1}{2}\, \xi^T \xi + \beta^{-1} \ln N + \frac{1}{2}\, M^2 \;=\; \frac{1}{2}\, \xi^T \xi + C_1 , \qquad (36)$$

$$E_2(\xi) \;=\; -\,\mathrm{lse}(\beta, X^T \xi) , \qquad (37)$$

where $C_1$ does not depend on $\xi$. The Concave-Convex Procedure (CCCP) (Yuille & Rangarajan, 2002; 2003) applied to E is

$$\nabla_\xi E_1\!\left(\xi^{t+1}\right) \;=\; -\,\nabla_\xi E_2\!\left(\xi^t\right) , \qquad (38)$$

$$\xi^{t+1} \;=\; \nabla_\xi\, \mathrm{lse}(\beta, X^T \xi^t) . \qquad (39)$$

The resulting update rule is

$$\xi^{t+1} \;=\; X p^t \;=\; X\, \mathrm{softmax}(\beta X^T \xi^t) , \qquad (40)$$

$$p^t \;=\; \mathrm{softmax}(\beta X^T \xi^t) . \qquad (41)$$

This is the update rule in Eq. (31). Theorem 2 in Yuille & Rangarajan (2002) and Theorem 2 in Yuille & Rangarajan (2003) state that the update rule Eq. (31) is guaranteed to monotonically decrease the energy E as a function of time. See also Theorem 2 in Sriperumbudur & Lanckriet (2009). Although the objective converges in all cases, it does not necessarily converge to a local minimum (Lipp & Boyd, 2016).

However, the convergence proof of CCCP in Yuille & Rangarajan (2002; 2003) was not as rigorous as required. In Sriperumbudur & Lanckriet (2009) a rigorous analysis of the convergence of CCCP is performed using Zangwill's global convergence theory of iterative algorithms. In Sriperumbudur & Lanckriet (2009) the minimization problem

$$\min_\xi\; E_1 + E_2 \qquad \text{s.t.}\quad c(\xi) \leq 0 , \;\; d(\xi) = 0 \qquad (42)$$

is considered, with $E_1$ convex, $-E_2$ convex, $c$ a component-wise convex function, and $d$ an affine function. The CCCP algorithm solves this minimization problem by linearization of the concave part and is defined in Sriperumbudur & Lanckriet (2009) as

$$\xi^{t+1} \;\in\; \arg\min_\xi\; E_1(\xi) + \xi^T \nabla_\xi E_2\!\left(\xi^t\right) \qquad \text{s.t.}\quad c(\xi) \leq 0 , \;\; d(\xi) = 0 . \qquad (43)$$

We define the upper bound $E_C$ on the energy:

$$E_C\!\left(\xi, \xi^t\right) \;:=\; E_1(\xi) + E_2\!\left(\xi^t\right) + \left(\xi - \xi^t\right)^T \nabla_\xi E_2\!\left(\xi^t\right) . \qquad (44)$$

$E_C$ is equal to the energy $\mathrm{E}(\xi^t)$ for $\xi = \xi^t$:

$$E_C\!\left(\xi^t, \xi^t\right) \;=\; E_1\!\left(\xi^t\right) + E_2\!\left(\xi^t\right) \;=\; \mathrm{E}\!\left(\xi^t\right) . \qquad (45)$$

Since $E_2$ is concave ($-E_2$ is convex), the first order characterization of convexity holds (Eq. 3.2 in Boyd & Vandenberghe (2009)):

$$E_2(\xi) - E_2\!\left(\xi^t\right) \;\leq\; \left(\xi - \xi^t\right)^T \nabla_\xi E_2\!\left(\xi^t\right) , \qquad (46)$$

$$E_2(\xi) \;\leq\; E_2\!\left(\xi^t\right) + \left(\xi - \xi^t\right)^T \nabla_\xi E_2\!\left(\xi^t\right) . \qquad (47)$$

Therefore, for every $\xi$ the function $E_C$ is an upper bound on the energy:

$$\mathrm{E}(\xi) \;\leq\; E_C\!\left(\xi, \xi^t\right) \;=\; E_1(\xi) + E_2\!\left(\xi^t\right) + \left(\xi - \xi^t\right)^T \nabla_\xi E_2\!\left(\xi^t\right) \;=\; E_1(\xi) + \xi^T \nabla_\xi E_2\!\left(\xi^t\right) + C_2 , \qquad (48)$$

where $C_2$ does not depend on $\xi$. Since we do not have constraints, $\xi^{t+1}$ is defined as

$$\xi^{t+1} \;\in\; \arg\min_\xi\; E_C\!\left(\xi, \xi^t\right) , \qquad (49)$$

hence $E_C\!\left(\xi^{t+1}, \xi^t\right) \leq E_C\!\left(\xi^t, \xi^t\right)$. Combining the inequalities gives:

$$\mathrm{E}\!\left(\xi^{t+1}\right) \;\leq\; E_C\!\left(\xi^{t+1}, \xi^t\right) \;\leq\; E_C\!\left(\xi^t, \xi^t\right) \;=\; \mathrm{E}\!\left(\xi^t\right) . \qquad (50)$$
Since we do not have constraints, $\xi^{t+1}$ is the minimum of

$$E_C\!\left(\xi, \xi^t\right) \;=\; E_1(\xi) + \xi^T \nabla_\xi E_2\!\left(\xi^t\right) + C_2 \qquad (51)$$

as a function of $\xi$. For a minimum not at the border, the derivative has to be the zero vector

$$\frac{\partial E_C(\xi, \xi^t)}{\partial \xi} \;=\; \xi + \nabla_\xi E_2\!\left(\xi^t\right) \;=\; \xi - X\, \mathrm{softmax}(\beta X^T \xi^t) \;=\; 0 \qquad (52)$$

and the Hessian must be positive semi-definite:

$$\frac{\partial^2 E_C(\xi, \xi^t)}{\partial \xi^2} \;=\; I . \qquad (53)$$

The Hessian is strictly positive definite everywhere, therefore the optimization problem is strictly convex (if the domain is convex) and there exists only one minimum, which is a global minimum. $E_C$ can even be written as a quadratic form:

$$E_C\!\left(\xi, \xi^t\right) \;=\; \frac{1}{2}\left(\xi + \nabla_\xi E_2\!\left(\xi^t\right)\right)^T \left(\xi + \nabla_\xi E_2\!\left(\xi^t\right)\right) + C_3 , \qquad (54)$$

where $C_3$ does not depend on $\xi$. Therefore, the minimum is

$$\xi^{t+1} \;=\; -\,\nabla_\xi E_2\!\left(\xi^t\right) \;=\; X\, \mathrm{softmax}(\beta X^T \xi^t) \qquad (55)$$

if it is in the domain, as we assume. Using $M = \max_i \lVert x_i \rVert$, $\xi^{t+1}$ is in the sphere $S = \{x \mid \lVert x \rVert \leq M\}$, which is a convex and compact set. Hence, if $\xi^0 \in S$, then the iteration is a mapping from $S$ to $S$. Therefore, the point-set-map defined by the iteration Eq. (55) is uniformly compact on $S$ according to Remark 7 in Sriperumbudur & Lanckriet (2009). Theorem 2 and Theorem 4 in (Sriperumbudur & Lanckriet, 2009) state that all the limit points of the iteration Eq. (55) are stationary points. These theorems follow from Zangwill's global convergence theorem: Convergence Theorem A, page 91 in Zangwill (1969) and page 3 in Wu (1983).

The global convergence theorem only assures that for the sequence $\xi^{t+1} = f(\xi^t)$ and a function $\Phi$ we have $\Phi(\xi^t) \to \Phi(\xi^*)$ for $t \to \infty$, but not $\xi^t \to \xi^*$. However, if $f$ is strictly monotone with respect to $\Phi$, then we can strengthen Zangwill's global convergence theorem (Meyer, 1976). We set $\Phi = \mathrm{E}$ and show $\mathrm{E}(\xi^{t+1}) < \mathrm{E}(\xi^t)$ if $\xi^t$ is not a stationary point of E, that is, $f$ is strictly monotone with respect to E. The following theorem is similar to the convergence results for the expectation maximization (EM) algorithm in Wu (1983), which are given in Theorems 1 to 6 in Wu (1983). The following theorem is also very similar to Theorem 8 in Sriperumbudur & Lanckriet (2009).

Theorem A2 (Global Convergence: Stationary Points). For the iteration Eq. (55) we have $\mathrm{E}(\xi^t) \to \mathrm{E}(\xi^*) = \mathrm{E}^*$ as $t \to \infty$, for some stationary point $\xi^*$. Furthermore, $\lVert \xi^{t+1} - \xi^t \rVert \to 0$ and either $\{\xi^t\}_{t=0}^{\infty}$ converges or, in the other case, the set of limit points of $\{\xi^t\}_{t=0}^{\infty}$ is a connected and compact subset of $\mathcal{L}(\mathrm{E}^*)$, where $\mathcal{L}(a) = \{\xi \in \mathcal{L} \mid \mathrm{E}(\xi) = a\}$ and $\mathcal{L}$ is the set of stationary points of the iteration Eq. (55). If $\mathcal{L}(\mathrm{E}^*)$ is finite, then any sequence $\{\xi^t\}_{t=0}^{\infty}$ generated by the iteration Eq. (55) converges to some $\xi^* \in \mathcal{L}(\mathrm{E}^*)$.

Proof. We have $\mathrm{E}(\xi^t) = E_1(\xi^t) + E_2(\xi^t)$. The gradient $\nabla_\xi E_2(\xi^t) = -\nabla_\xi \mathrm{lse}(\beta, X^T \xi^t)$ is continuous. Therefore, Eq. (51) has a minimum in the sphere $S$, which is a convex and compact set. If $\xi^{t+1} \neq \xi^t$, then $\xi^t$ was not the minimum of Eq. (48), as the derivative at $\xi^t$ is not equal to zero. Eq. (53) shows that the optimization problem Eq. (48) is strictly convex, hence it has only one minimum, which is a global minimum. Eq. (54) shows that the optimization problem Eq. (48) is even a quadratic form. Therefore, we have

$$\mathrm{E}\!\left(\xi^{t+1}\right) \;\leq\; E_C\!\left(\xi^{t+1}, \xi^t\right) \;<\; E_C\!\left(\xi^t, \xi^t\right) \;=\; \mathrm{E}\!\left(\xi^t\right) . \qquad (56)$$

Therefore, the point-set-map defined by the iteration Eq. (55) (for definitions see (Sriperumbudur & Lanckriet, 2009)) is strictly monotonic with respect to E. Therefore, we can apply Theorem 3 in Sriperumbudur & Lanckriet (2009) or Theorem 3.1 and Corollary 3.2 in Meyer (1976), which give the statements of the theorem.

We showed global convergence of the iteration Eq. (31). We have shown that all the limit points of any sequence generated by the iteration Eq. (31) are the stationary points (critical points; local minima or saddle points) of the energy function E.
(31) are the stationary points (critical points; local minima or saddle points) of the energy function E. Local maxima as stationary points are only possible if the iterations exactly hits a local maximum. However, convergence to a local maximum without being there is not possible because Eq. (56) ensures a strict decrease of the energy E. Therefore, almost sure local maxima are not obtained as stationary points. Either the iteration converges or, in the second case, the set of limit points is a connected and compact set. But what happens if ξ0 is in an ϵ-neighborhood around a local minimum ξ ? Will the iteration Eq. (31) converge to ξ ? What is the rate of convergence? These questions are about local convergence which will be treated in detail in next section. A.1.5 LOCAL CONVERGENCE OF THE UPDATE RULE: FIXED POINT ITERATION For the proof of local convergence to a fixed point we will apply Banach fixed point theorem. For the rate of convergence we will rely on properties of a contraction mapping. A.1.5.1 General Bound on the Jacobian of the Iteration. We consider the iteration ξnew = f(ξ) = Xp = Xsoftmax(βXT ξ) (57) p = softmax(βXT ξ) . (58) The Jacobian J is symmetric and has the following form: ξ = β X diag(p) pp T XT = XJs XT , (59) where Js is Jacobian of the softmax. To analyze the local convergence of the iteration, we distinguish between the following three cases (see also Fig. A.1). Here we only provide an informal discussion to give the reader some intuition. A rigorous formulation of the results can be found in the corresponding subsections. a) If the patterns xi are not well separated, the iteration goes to a fixed point close to the arithmetic mean of the vectors. In this case p is close to pi = 1/N. b) If the patterns xi are well separated, then the iteration goes to the pattern to which the initial ξ is similar. If the initial ξ is similar to a vector xi then it will converge to a vector close to xi and p will converge to a vector close to ei. c) If some vectors are similar to each other but well separated from all other vectors, then a so called metastable state between the similar vectors exists. Iterations that start near the metastable state converge to this metastable state. fixed point average pattern Figure A.1: The three cases of fixed points. a) Stored patterns (fixed point is single pattern): patterns are stored if they are well separated. Each pattern xi has a single fixed point x i close to it. In the sphere Si, pattern xi is the only pattern and x i the only fixed point. b) Metastable state (fixed point is average of similar patterns): xi and xj are similar to each other and not well separated. The fixed point m x is a metastable state that is close to the mean mx of the similar patterns. c) Global fixed point (fixed point is average of all patterns): no pattern is well separated from the others. A single global fixed point m x exists that is close to the arithmetic mean mx of all patterns. We begin with a bound on the Jacobian of the iteration, thereby heavily relying on the Jacobian of the softmax from Lemma A24. Lemma A2. For N patterns X = (x1, . . . , x N), p = softmax(βXT ξ), M = maxi xi , and m = maxi pi(1 pi), the spectral norm of the Jacobian J of the fixed point iteration is bounded: J 2 2 β X 2 2 m 2 β N M 2 m . (60) If pmax = maxi pi 1 ϵ, then for the spectral norm of the Jacobian holds J 2 2 β N M 2 ϵ 2 ϵ2 β N M 2 < 2 β N M 2 ϵ . (61) Proof. 
With p = softmax(βXT ξ) , (62) the symmetric Jacobian J is ξ = β X diag(p) pp T XT = XJs XT , (63) where Js is Jacobian of the softmax. With m = maxi pi(1 pi), Eq. (476) from Lemma A24 is Js 2 = β diag(p) pp T 2 2 m β . (64) Using this bound on Js 2, we obtain J 2 β XT 2 Js 2 X 2 2 m β X 2 2 . (65) The spectral norm . 2 is bounded by the Frobenius norm . F which can be expressed by the norm squared of its column vectors: X 2 X F = s X i xi 2 . (66) Therefore, we obtain the first statement of the lemma: J 2 2 β X 2 2 m 2 β N M 2 m . (67) With pmax = maxi pi 1 ϵ Eq. (480) in Lemma A24 is Js 2 2 β ϵ 2 ϵ2 β < 2 β ϵ . (68) Using this inequality, we obtain the second statement of the lemma: J 2 2 β N M 2 ϵ 2 ϵ2 β N M 2 < 2 β N M 2 ϵ . (69) We now define the separation i of a pattern xi from data X = (x1, . . . , x N) here, since it has an important role for the convergence properties of the iteration. Definition 2 (Separation of Patterns). We define i, i.e. the separation of pattern xi from data X = (x1, . . . , x N) as: i = min j,j =i x T i xi x T i xj = x T i xi max j,j =i x T i xj . (70) The pattern is separated from the other data if 0 < i. Using the parallelogram identity, i can also be expressed as i = min j,j =i 1 2 xi 2 xj 2 + xi xj 2 (71) 2 max j,j =i xj 2 xi xj 2 . For xi = xj we have i = 1/2 minj,j =i xi xj 2. Analog we say for a query ξ and data X = (x1, . . . , x N), that xi is least separated from ξ while being separated from other xj with j = i if i = arg max k min j,j =k ξT xk ξT xj = arg max k ξT xk max j,j =k ξT xj 0 c = max k min j,j =k ξT xk ξT xj = max k ξT xk max j,j =k ξT xj Next we consider the case where the iteration has only one stable fixed point. A.1.5.2 One Stable State: Fixed Point Near the Mean of the Patterns. We start with the case where no pattern is well separated from the others. Global fixed point near the global mean: Analysis using the data center. We revisit the bound on the Jacobian of the iteration by utilizing properties of pattern distributions. We begin with a probabilistic interpretation where we consider pi as the probability of selecting the vector xi. Consequently, we define expectations as Ep[f(x)] = PN i=1 pif(xi). In this setting the matrix X diag(p) pp T XT (74) is the covariance matrix of data X when its vectors are selected according to the probability p: X diag(p) pp T XT = Xdiag(p)XT Xpp T XT (75) i=1 pi xi x T i = Ep[x x T ] Ep[x] Ep[x]T = Varp[x] , (77) therefore we have J = β Varp[x] . (78) The largest eigenvalue of the covariance matrix (equal to the largest singular value) is the variance in the direction of the eigenvector associated with the largest eigenvalue. i=1 xi , (79) mmax = max 1 i N xi mx 2 . (80) mx is the arithmetic mean (the center) of the patterns. mmax is the maximal distance of the patterns to the center mx . The variance of the patterns is i=1 pi xi x T i The maximal distance to the center mmax allows the derivation of a bound on the norm of the Jacobian. Next lemma gives a condition for a global fixed point. Lemma A3. The following bound on the norm J 2 of the Jacobian of the fixed point iteration f holds independent of p or the query ξ. J 2 β m2 max . (82) For β m2 max < 1 there exists a unique fixed point (global fixed point) of iteration f in each compact set. Proof. In order to bound the variance we compute the vector a that minimizes i=1 pi xi a 2 = i=1 pi(xi a)T (xi a) . (83) The solution to i=1 pi(a xi) = 0 (84) i=1 pixi . 
(85) The Hessian of f is positive definite since i=1 pi I = 2 I (86) and f is a convex function. Hence, the mean i=1 pi xi (87) minimizes PN i=1 pi xi a 2. Therefore, we have i=1 pi xi x 2 i=1 pi xi mx 2 m2 max . (88) Let us quickly recall that the spectral norm of an outer product of two vectors is the product of the Euclidean norms of the vectors: ab T 2 = q λmax(ba T ab T ) = a q λmax(bb T ) = a b , (89) since bb T has eigenvector b/ b with eigenvalue b 2 and otherwise zero eigenvalues. We now bound the variance of the patterns: i=1 pi (xi x) (xi x)T 2 (90) i=1 pi xi x 2 i=1 pi xi mx 2 m2 max . The bound of the lemma on J 2 follows from Eq. (78). For J 2 β m2 max < 1 we have a contraction mapping on each compact set. Banach fixed point theorem says there is a unique fixed point in the compact set. Now let us further investigate the tightness of the bound on Varp[x] 2 via xi x 2: we consider the trace, which is the sum Pd k=1 ek of the w.l.o.g. ordered nonnegative eigenvalues ek of Varp[x] The spectral norm is equal to the largest eigenvalue e1, which is equal to the largest singular value, as we have positive semidefinite matrices. We obtain: Varp[x] 2 = Tr i=1 pi (xi x) (xi x)T ! k=2 ek (91) i=1 pi Tr (xi x) (xi x)T i=1 pi xi x 2 Therefore, the tightness of the bound depends on eigenvalues which are not the largest. Hence variations which are not along the largest variation weaken the bound. Next we investigate the location of fixed points which existence is ensured by the global convergence stated in Theorem A2. For N patterns X = (x1, . . . , x N), we consider the iteration ξnew = f(ξ) = Xp = Xsoftmax(βXT ξ) (92) p = softmax(βXT ξ) . (93) ξnew is in the simplex of the patterns, that is, ξnew = P i pixi with P i pi = 1 and 0 pi. Hence, after one update ξ is in the simplex of the pattern and stays there. If the center mx is the zero vector mx = 0, that is, the data is centered, then the mean is a fixed point of the iteration. For ξ = mx = 0 we have p = 1/N 1 (94) ξnew = 1/N X 1 = mx = ξ . (95) In particular normalization methods like batch normalization would promote the mean as a fixed point. We consider the differences of dot products for xi: x T i xi x T i xj = x T i (xi xj), for fixed point m x: (m x)T xi (m x)T xj = (m x)T (xi xj), and for the center mx: m T xxi m T xxj = m T x(xi xj). Using the Cauchy-Schwarz inequality, we get ξT (xi xj) ξ xi xj ξ ( xi mx + xj mx ) (96) This inequality gives: ξT (xi xj) 2 mmax (mmax + mx ) , (97) ξT (xi xj) 2 mmax M , where we used ξ 0 ξ mx + mx 0 , ξ mx = P i pixi mx P i pi xi mx mmax, and M = maxi xi . In particular β m T x(xi xj) 2 β mmax mx , (98) β (m x)T (xi xj) 2 β mmax m x 2 β mmax (mmax + mx ) , (99) β x T i (xi xj) 2 β mmax xi 2 β mmax (mmax + mx ) . (100) Let i = arg maxj ξT xj, therefore the maximal softmax component is i. For the maximal softmax component i we have: [softmax(β XT ξ)]i = 1 1 + P j =i exp( β (ξT xi ξT xj)) (101) j =i exp( 2 β mmax (mmax + mx )) = 1 1 + (N 1) exp( 2 β mmax (mmax + mx )) = exp(2 β mmax (mmax + mx )) exp(2 β mmax (mmax + mx )) + (N 1) 1/N exp(2 β mmax (mmax + mx )) . Analogously we obtain for i = arg maxj m T xxj, a bound on the maximal softmax component i if the center is put into the iteration: [softmax(β XT mx)]i 1/N exp(2 β mmax mx ) . (102) Analog we obtain a bound for i = arg maxj(m x)T xj on the maximal softmax component i of the fixed point: [softmax(β XT m x)]i 1/N exp(2 β mmax m x ) (103) 1/N exp(2 β mmax (mmax + mx )) . 
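To make the global-fixed-point case tangible, here is a minimal NumPy sketch, not the paper's reference implementation; the dimensions, the spread 0.1, and β = 1 are illustrative choices of ours. It builds weakly separated patterns, checks the contraction condition β m_max² < 1 of Lemma A3, runs the update rule Eq. (57), and confirms that the iteration settles near the arithmetic mean m_x with nearly uniform softmax weights.

```python
import numpy as np

rng = np.random.default_rng(0)
d, N, beta = 8, 20, 1.0

# Weakly separated patterns: small spread around a common (non-zero) center.
center = 0.3 * rng.normal(size=d)
X = center[:, None] + 0.1 * rng.normal(size=(d, N))       # columns are the patterns x_i
m_x = X.mean(axis=1)                                      # arithmetic mean of the patterns
m_max = np.max(np.linalg.norm(X - m_x[:, None], axis=0))  # maximal distance to the center

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def update(xi):
    # xi_new = X softmax(beta X^T xi), the iteration of Eq. (57)
    return X @ softmax(beta * X.T @ xi)

def jacobian(xi):
    # J = beta X (diag(p) - p p^T) X^T, Eq. (59)
    p = softmax(beta * X.T @ xi)
    return beta * X @ (np.diag(p) - np.outer(p, p)) @ X.T

assert beta * m_max**2 < 1.0          # contraction condition of Lemma A3

xi = rng.normal(size=d)               # arbitrary query
for _ in range(100):
    xi = update(xi)

p = softmax(beta * X.T @ xi)
print("||J(fixed point)||_2 :", np.linalg.norm(jacobian(xi), 2))  # <= beta * m_max^2
print("distance to mean     :", np.linalg.norm(xi - m_x))         # small: fixed point near m_x
print("max softmax component:", p.max(), "vs 1/N =", 1.0 / N)     # p close to uniform
```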
The two important terms are mmax, the variance or spread of the data and mx , which tells how well the data is centered. For a contraction mapping we already required βm2 max < 1, therefore the first term in the exponent is 2βm2 max < 2. The second term 2βmmax mx is small if the data is centered. Global fixed point near the global mean: Analysis using softmax values. If ξT xi ξT xj for all i and j, then pi 1/N and we have m = maxi pi(1 pi) < 1/N. For M 1/ 2β we obtain from Lemma A2: J 2 < 1 . (104) The local fixed point is m x mx = (1/N) PN i=1 xi with pi 1/N. We now treat this case more formally. First we discuss conditions that ensure that the iteration is a contraction mapping. We consider the iteration Eq. (57) in the variable p: pnew = g(p) = softmax(βXT Xp) . (105) The Jacobian is J(p) = g(p) p = XT X Js (106) Js(pnew) = β diag(pnew) pnew(pnew)T . (107) The version of the mean value theorem in Lemma A32 states for Jm = R 1 0 J(λp) dλ = XT XJm s with the symmetric matrix Jm s = R 1 0 Js(λp) dλ: pnew = g(p) = g(0) + (Jm)T p = g(0) + Jm s XT X p = 1/N 1 + Jm s XT X p . (108) With m = maxi pi(1 pi), Eq. (476) from Lemma A24 is Js(p) 2 = β diag(p) pp T 2 2 m β . (109) First observe that λpi(1 λpi) pi(1 pi) for pi 0.5 and λ [0, 1], since pi(1 pi) λpi(1 λpi) = (1 λ)pi(1 (1 + λ)pi) 0. For maxi pi 0.5 this observation leads to the following bound for Jm s : Jm s 2 2 m β . (110) Eq. (479) in Lemma A24 states that every Js is bounded by 1/2β, therefore also the mean: Jm s 2 0.5 β . (111) Since m = maxi pi(1 pi) < maxi pi = pmax, the previous bounds can be combined as follows: Jm s 2 2 min{0.25, pmax} β . (112) Consequently, Jm 2 N M 2 2 min{0.25, pmax} β , (113) where we used Eq. (170). XT X 2 = XXT 2, therefore XT X 2 is N times the maximal second moment of the data squared. Obviously, g(p) is a contraction mapping in compact sets, where N M 2 2 min{0.25, pmax} β < 1 . (114) S is the sphere around the origin 0 with radius one. For pnew = g(p) = 1/N 1 + Jm p , (115) we have p p 1 = 1 and pnew pnew 1 = 1. Therefore, g maps points from S into S. g is a contraction mapping for Jm 2 N M 2 2 min{0.25, pmax} β = c < 1 . (116) According to Banach fixed point theorem g has a fixed point in the sphere S. Hölder s inequality gives: p 2 = p T p p 1 p = p = pmax . (117) Alternatively: i p2 i = pmax X pi pmax pi pmax X i pi = pmax . (118) Let now S be the sphere around the origin 0 with radius 1/ N + pmax and let Jm(p) 2 c < 1 for p S. The old p is in the sphere S (p S) since pmax < pmax for pmax < 1. We have N + Jm 2 p 1/ N + pmax . (119) Therefore, g is a mapping from S into S and a contraction mapping. According to Banach fixed point theorem, a fixed point exists in S. For the 1-norm, we use Lemma A24 and p 1 = 1 to obtain from Eq. (115): pnew 1/N 1 1 Jm 1 2 β m X M1 , (120) pnew 1/N 1 1 Jm 1 2 β m N M M1 , (121) pnew 1/N 1 1 Jm 1 2 β m N M 2 , (122) where m = maxi pi(1 pi), M1 = X 1 = maxi xi 1, M = maxi xi , X = XT 1 = maxi [XT ]i 1 (maximal absolute row sum norm), and M = maxi xi . Let us quickly mention some auxiliary estimates related to XT X: XT X 1 = max i x T i xj max i j=1 xi xj 1 (123) j=1 M1 = N M M1 , where the first inequaltiy is from Hölder s inequality. We used XT X 1 = max i x T i xj max i j=1 xi xj (124) j=1 M = N M 2 , where the first inequality is from Hölder s inequality (here the same as the Cauchy-Schwarz inequality). See proof of Lemma A24 for the 1-norm bound on Js. Everything else follows from the fact that the 1-norm is sub-multiplicative as induced matrix norm. 
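The contraction condition of Eq. (116) can also be checked directly on the iteration in the probability variable. The sketch below uses illustrative constants of our own choosing; `g` is our name for the map of Eq. (105). It replaces min{0.25, p_max} by its worst case 0.25 and verifies that the empirical Lipschitz ratio of g stays below the bound 2 β · 0.25 · N M².

```python
import numpy as np

rng = np.random.default_rng(1)
d, N, beta = 8, 10, 1.0

X = rng.normal(size=(d, N))
X = 0.3 * X / np.linalg.norm(X, axis=0)   # every pattern has norm exactly 0.3
M = np.max(np.linalg.norm(X, axis=0))     # M = max_i ||x_i|| = 0.3

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def g(p):
    # iteration in the probability variable p, Eq. (105)
    return softmax(beta * X.T @ (X @ p))

# Two points on the simplex.
p = softmax(rng.normal(size=N))
q = softmax(rng.normal(size=N))

# Worst-case contraction bound from Eq. (116), with min{0.25, p_max} replaced by 0.25.
bound = 2.0 * beta * 0.25 * N * M**2
ratio = np.linalg.norm(g(p) - g(q)) / np.linalg.norm(p - q)

print("contraction bound:", bound)   # 0.45 < 1 for this configuration
print("empirical ratio  :", ratio)
assert ratio <= bound + 1e-12
```

With the bound below one, Banach's theorem applies as in the text and the iteration in p converges toward a point close to (1/N) 1.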
We consider the minimal p . min p p 2 (125) The solution to this minimization problem is p = (1/N)1. Therefore, we have 1/ N p and 1/N p 2 Using Eq. (119) we obtain N + pmax . (126) Moreover pnew 2 = (pnew)T pnew = 1/N + (pnew)T Jm p 1/N + Jm 2 p (127) 1/N + Jm 2 , since pnew S and p S. For the fixed point, we have p 2 = (p )T p = 1/N + (p )T Jm p 1/N + Jm 2 p 2 , (128) and hence 1/N p 2 1/N 1 1 Jm 2 = 1/N (1 + Jm 2 1 Jm 2 ) . (129) Therefore, for small Jm 2 we have p (1/N)1. A.1.5.3 Many Stable States: Fixed Points Near Stored Patterns. We move on to the next case, where the patterns xi are well separated. In this case the iteration goes to the pattern to which the initial ξ is most similar. If the initial ξ is similar to a vector xi then it will converge to xi and p will be ei. The main ingredients are again Banach s Theorem and estimates on the Jacobian norm. Proof of a fixed point by Banach Fixed Point Theorem. Mapped Vectors Stay in a Compact Environment. We show that if xi is sufficient dissimilar to other xj then there is an compact environment of xi (a sphere) where the fixed point iteration maps this environment into itself. The idea of the proof is to define a sphere around xi for which points from the sphere are mapped by f into the sphere. We first need following lemma which bounds the distance xi f(ξ) , where xi is the pattern that is least separated from ξ but separated from other patterns. Lemma A4. For a query ξ and data X = (x1, . . . , x N), there exists a xi that is least separated from ξ while being separated from other xj with j = i: i = arg max k min j,j =k ξT xk ξT xj = arg max k ξT xk max j,j =k ξT xj 0 c = max k min j,j =k ξT xk ξT xj = max k ξT xk max j,j =k ξT xj For xi, the following holds: xi f(ξ) 2 ϵ M , (132) where M = max i xi , (133) ϵ = (N 1) exp( β c) . (134) Proof. For the softmax component i we have: [softmax(β XT ξ)]i = 1 1 + P j =i exp(β (ξT xj ξT xi)) 1 1 + P j =i exp( β c) (135) = 1 1 + (N 1) exp( β c) = 1 (N 1) exp( β c) 1 + (N 1) exp( β c) 1 (N 1) exp( β c) = 1 ϵ For softmax components k = i we have [softmax(βXT ξ)]k = exp(β (ξT xk ξT xi)) 1 + P j =i exp(β (ξT xj ξT xi)) exp( β c) = ϵ N 1 . The iteration f can be written as f(ξ) = Xsoftmax(βXT ξ) = j=1 xj [softmax(βXT ξ)]j . (137) We now can bound xi f(ξ) : j=1 [softmax(βXT ξ)]j xj (1 [softmax(βXT ξ)]i) xi j=1,j =i [softmax(βXT ξ)]j xj ϵ xi + ϵ N 1 j=1,j =i xj ϵ M + ϵ N 1 j=1,j =i M = 2 ϵ M . We define i, i.e. the separation of pattern xi from data X = (x1, . . . , x N) as: i = min j,j =i x T i xi x T i xj = x T i xi max j,j =i x T i xj . (139) The pattern is separated from the other data if 0 < i. Using the parallelogram identity, i can also be expressed as i = min j,j =i 1 2 xi 2 xj 2 + xi xj 2 (140) 2 max j,j =i xj 2 xi xj 2 . For xi = xj we have i = 1/2 minj,j =i xi xj 2. Next we define the sphere where we want to apply Banach fixed point theorem. Definition 3 (Sphere Si). The sphere Si is defined as Si := ξ | ξ xi 1 β N M Lemma A5. With ξ given, if the assumptions A1: ξ is inside sphere: ξ Si, A2: data point xi is well separated from the other data: i 2 β N + 1 β ln 2 (N 1) N β M 2 (142) hold, then f(ξ) is inside the sphere: f(ξ) Si. Therefore, with assumption (A2), f is a mapping from Si into Si. Proof. We need the separation i of ξ from the data. i = min j,j =i ξT xi ξT xj . (143) Using the Cauchy-Schwarz inequality, we obtain for 1 j N: ξT xj x T i xj ξ xi xj ξ xi M . 
(144) We have the lower bound i min j,j =i x T i xi ξ xi M x T i xj + ξ xi M (145) = 2 ξ xi M + min j,j =i x T i xi x T i xj = i 2 ξ xi M where we used the assumption (A1) of the lemma. From the proof in Lemma A4 we have pmax = [softmax(βXT ξ)]i 1 (N 1) exp( β i) = 1 ϵ . (146) Lemma A4 states that xi f(ξ) 2 ϵ M = 2 (N 1) exp( β i) M (147) 2 (N 1) exp( β ( i 2 β N )) M . xi f(ξ) (148) 2 (N 1) exp( β ( 2 β ln 2 (N 1) N β M 2 2 β N )) M = 2 (N 1) exp( ln 2 (N 1) N β M 2 ) M = 1 N β M , where we used assumption (A2) of the lemma. Therefore, f(ξ) is a mapping from the sphere Si into the sphere Si: If ξ Si then f(ξ) Si. Contraction mapping. For applying Banach fixed point theorem we need to show that f is contraction in the compact environment Si. Lemma A6. Assume that i 2 β N + 1 β ln 2 (N 1) N β M 2 , (149) then f is a contraction mapping in Si. Proof. The version of the mean value theorem Lemma A32 states for Jm = R 1 0 J(λξ+(1 λ)xi) dλ: f(ξ) = f(xi) + Jm (ξ xi) . (150) f(ξ) f(xi) Jm 2 ξ xi . (151) We define ξ = λξ + (1 λ)xi for some λ [0, 1]. From the proof in Lemma A4 we have pmax( ξ) = [softmax(β XT ξ)]i 1 (N 1) exp( β i) = 1 ϵ , (152) ϵ = (N 1) exp( β i) , (153) i = min j,j =i ξT xi ξT xj . (154) First we compute an upper bound on ϵ. We need the separation i of ξ from the data. Using the Cauchy-Schwarz inequality, we obtain for 1 j N: ξT xj x T i xj ξ xi xj ξ xi M . (155) We have the lower bound on i: i min j,j =i x T i xi ξ xi M x T i xj + ξ xi M (156) = 2 ξ xi M + min j,j =i x T i xi x T i xj = i 2 ξ xi M i 2 ξ xi M , where we used ξ xi = λ ξ xi ξ xi . From the definition of ϵ in Eq. (152) we have ϵ = (N 1) exp( β i) (157) (N 1) exp ( β ( i 2 ξ xi M)) (N 1) exp β i 2 β N where we used ξ Si, therefore ξ xi 1 β N M . Next we compute an lower bound on ϵ. We start with an upper on i: i min j,j =i x T i xi + ξ xi M x T i xj ξ xi M (158) = 2 ξ xi M + min j,j =i x T i xi x T i xj = i + 2 ξ xi M i + 2 ξ xi M , where we used ξ xi = λ ξ xi ξ xi . From the definition of ϵ in Eq. (152) we have ϵ = (N 1) exp( β i) (159) (N 1) exp ( β ( i + 2 ξ xi M)) (N 1) exp β i + 2 β N where we used ξ Si, therefore ξ xi 1 β N M . Now we bound the Jacobian. We can assume ϵ 0.5 otherwise (1 ϵ) 0.5 in the following. From the proof of Lemma A24 we know for pmax( ξ) 1 ϵ, then pi( ξ) ϵ for pi( ξ) = pmax( ξ). Therefore, pi( ξ)(1 pi( ξ)) m ϵ(1 ϵ) for all i. Next we use the derived upper and lower bound on ϵ in previous Eq. (61) in Lemma A2: J( ξ) 2 2 β N M 2 ϵ 2 ϵ2 β N M 2 (160) 2 β N M 2 (N 1) exp β i 2 β N 2 (N 1)2 exp 2 β i + 2 β N The bound Eq. (160) holds for the mean Jm, too, since it averages over J( ξ): Jm 2 2 β N M 2 (N 1) exp β i 2 β N 2 (N 1)2 exp 2 β i + 2 β N The assumption of the lemma is i 2 β N + 1 β ln 2 (N 1) N β M 2 , (162) β ln 2 (N 1) N β M 2 , (163) Therefore, the spectral norm J 2 can be bounded by: Jm 2 2 β (N 1) exp β 1 β ln 2 (N 1) N β M 2 N M 2 (164) 2 (N 1)2 exp 2 β i + 2 β N = 2 β (N 1) 1 2 (N 1) N β M 2 N M 2 2 (N 1)2 exp 2 β i + 2 β N = 1 2 (N 1)2 exp 2 β i + 2 β N β N M 2 < 1 . Therefore, f is a contraction mapping in Si. Banach Fixed Point Theorem. Now we have all ingredients to apply Banach fixed point theorem. Lemma A7. Assume that i 2 β N + 1 β ln 2 (N 1) N β M 2 , (165) then f has a fixed point in Si. Proof. We use Banach fixed point theorem: Lemma A5 says that f maps from Si into Si. Lemma A6 says that f is a contraction mapping in Si. Contraction mapping with a fixed point. We have shown that a fixed point exists. We want to know how fast the iteration converges to the fixed point. 
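As a small numerical companion to Lemmas A5–A7, here is a sketch under simplifying assumptions (orthogonal patterns of norm M, all constants chosen by us, not the paper's code). Orthogonality makes the separation Δ_i = M² easy to check against the condition of Lemma A5, and a query inside the sphere S_i of radius 1/(β N M) is mapped by a single update far inside S_i, i.e. very close to x_i.

```python
import numpy as np

d, N, beta, M = 16, 10, 1.0, 4.0
X = M * np.eye(d)[:, :N]                 # orthogonal, well separated patterns of norm M

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def f(xi):
    return X @ softmax(beta * X.T @ xi)

i = 0
x_i = X[:, i]
# Separation Delta_i = x_i^T x_i - max_{j != i} x_i^T x_j  (Definition 2)
delta_i = x_i @ x_i - max(x_i @ X[:, j] for j in range(N) if j != i)
# Separation condition of Lemma A5 / Lemma A7.
threshold = 2.0 / (beta * N) + np.log(2 * (N - 1) * N * beta * M**2) / beta
assert delta_i >= threshold

# Sphere S_i around x_i with radius 1 / (beta * N * M), Definition 3.
radius = 1.0 / (beta * N * M)
rng = np.random.default_rng(2)
perturb = rng.normal(size=d)
xi = x_i + 0.9 * radius * perturb / np.linalg.norm(perturb)   # query inside S_i

xi_new = f(xi)
print("||f(xi) - x_i|| =", np.linalg.norm(xi_new - x_i))      # tiny after one update
assert np.linalg.norm(xi_new - x_i) <= radius                 # f maps S_i into S_i
```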
Let x i be the fixed point of the iteration f in the sphere Si. Using the mean value theorem Lemma A32, we have with Jm = R 1 0 J(λξ + (1 λ)x i ) dλ: f(ξ) x i = f(ξ) f(x i ) Jm 2 ξ x i (166) According to Lemma A24, if pmax = maxi pi 1 ϵ for all x = λξ + (1 λ)x i , then the spectral norm of the Jacobian is bounded by Js( x) 2 < 2 ϵ β . (167) The norm of Jacobian at x is bounded J( x) 2 2 β X 2 2 ϵ 2 β NM 2 ϵ . (168) We used that the spectral norm . 2 is bounded by the Frobenius norm . F which can be expressed by the norm squared of its column vectors: X 2 X F = s X i xi 2 . (169) Therefore X 2 2 N M 2 . (170) The norm of Jacobian of the fixed point iteration is bounded Jm 2 2 β X 2 2 ϵ 2 β NM 2 ϵ . (171) The separation of pattern xi from data X = (x1, . . . , x N) is i = min j,j =i x T i xi x T i xj = x T i xi max j,j =i x T i xj . (172) We need the separation i of x = λξ + (1 λ)x i from the data: i = min j,j =i x T xi x T xj . (173) We compute a lower bound on i. Using the Cauchy-Schwarz inequality, we obtain for 1 j N: x T xj x T i xj x xi xj x xi M . (174) We have the lower bound i min j,j =i x T i xi x xi M x T i xj + x xi M (175) = 2 x xi M + min j,j =i x T i xi x T i xj = i 2 x xi M . Since x xi = λξ + (1 λ)x i xi (176) λ ξ xi + (1 λ) x i xi max{ ξ xi , x i xi } , we have i i 2 max{ ξ xi , x i xi } M . (177) For the softmax component i we have: [softmax(β XT ξ)]i = 1 1 + P j =i exp(β ( ξT xj ξT xi)) (178) j =i exp( β ( i 2 max{ ξ xi , x i xi } M)) = 1 1 + (N 1) exp( β ( i 2 max{ ξ xi , x i xi } M)) = 1 (N 1) exp( β ( i 2 max{ ξ xi , x i xi } M)) 1 + (N 1) exp( β ( i 2 max{ ξ xi , x i xi } M)) 1 (N 1) exp( β ( i 2 max{ ξ xi , x i xi } M)) = 1 ϵ . ϵ = (N 1) exp( β ( i 2 max{ ξ xi , x i xi } M)) . (179) We can bound the spectral norm of the Jacobian, which upper bounds the Lipschitz constant: Jm 2 2 β N M 2 (N 1) exp( β ( i 2 max{ ξ xi , x i xi } M)) . (180) For a contraction mapping we require Jm 2 < 1 , (181) which can be ensured by 2 β NM 2 (N 1) exp( β ( i 2 max{ ξ xi , x i xi } M)) < 1 . (182) Solving this inequality for i gives i > 2 max{ ξ xi , x i xi } M + 1 β ln 2 (N 1) N β M 2 . (183) In an environment around x i in which Eq. (183) holds, f is a contraction mapping and every point converges under the iteration f to x i when the iteration stays in the environment. After every iteration the mapped point f(ξ) is closer to the fixed point x i than the original point xi: f(ξ) x i Jm 2 ξ x i < ξ x i . (184) f(ξ) x i Jm 2 ξ x i Jm 2 ξ f(ξ) + Jm 2 f(ξ) x i , (185) f(ξ) x i Jm 2 1 Jm 2 ξ f(ξ) . (186) For large i the iteration is close to the fixed point even after one update. This has been confirmed in several experiments. A.1.5.4 Metastable States: Fixed Points Near Mean of Similar Patterns. The proof concept is the same as for a single pattern but now for the arithmetic mean of similar patterns. Bound on the Jacobian. The Jacobian of the fixed point iteration is J = β X diag(p) pp T XT = XJs XT . (187) If we consider pi as the probability of selecting the vector xi, then we can define expectations as Ep[f(x)] = PN i=1 pif(xi). In this setting the matrix X diag(p) pp T XT (188) is the covariance matrix of data X when its vectors are selected according to the probability p: X diag(p) pp T XT = Xdiag(p)XT Xpp T XT (189) i=1 pi xi x T i = Ep[x x T ] Ep[x] Ep[x]T = Varp[x] , (191) therefore we have J = β Varp[x] . (192) We now elaborate more on this interpretation as variance. Specifically the singular values of J (or in other words: the covariance) should be reasonably small. 
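Before making these considerations precise, the variance view of the Jacobian can be illustrated numerically: when the softmax mass concentrates on a tight cluster of similar patterns, Var_p[x], and hence ‖J‖₂ by Eq. (192), is small, and the iteration settles on a metastable state near the cluster mean. The following sketch uses hand-made illustrative data (three clustered patterns plus five well separated ones); none of the constants come from the paper.

```python
import numpy as np

rng = np.random.default_rng(3)
d, beta = 16, 1.0

# Three similar patterns (a tight cluster) plus five well separated ones.
cluster_center = np.zeros(d)
cluster_center[0] = 3.0
cluster = cluster_center[:, None] + 0.05 * rng.normal(size=(d, 3))
others = 3.0 * np.eye(d)[:, 5:10]            # far from the cluster and from each other
X = np.concatenate([cluster, others], axis=1)

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

xi = cluster_center + 0.2 * rng.normal(size=d)    # query near the cluster
for _ in range(50):
    xi = X @ softmax(beta * X.T @ xi)

p = softmax(beta * X.T @ xi)
var = X @ (np.diag(p) - np.outer(p, p)) @ X.T     # Var_p[x]; J = beta * Var_p[x], Eq. (192)
print("softmax mass on the cluster:", p[:3].sum())                     # close to 1
print("||J||_2 = beta*||Var_p||_2 :", beta * np.linalg.norm(var, 2))   # small
print("distance to cluster mean   :", np.linalg.norm(xi - cluster.mean(axis=1)))
```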
The singular values are the key to ensure convergence of the iteration Eq. (57). Next we present some thoughts. 1. It s clear that the largest eigenvalue of the covariance matrix (equal to the largest singular value) is the variance in the direction of the eigenvector associated with the largest eigenvalue. 2. Furthermore the variance goes to zero as one pi goes to one, since only one pattern is chosen and there is no variance. 3. The variance is reasonable small if all patterns are chosen with equal probability. 4. The variance is small if few similar patterns are chosen with high probability. If the patterns are sufficient similar, then the spectral norm of the covariance matrix is smaller than one. The first three issues have already been adressed. Now we focus on the last one in greater detail. We assume that the first l patterns are much more probable (and similar to one another) than the other patterns. Therefore, we define: M := max i xi , (193) i=l+1 pi ϵ , (194) i=1 pi 1 ϵ , (195) pi := pi 1 γ pi/(1 ϵ) , (196) i=1 pi = 1 , (197) i=1 xi , (198) mmax = max 1 i l xi mx . (199) M is an upper bound on the Euclidean norm of the patterns, which are vectors. ϵ is an upper bound on the probability γ of not choosing one of the first l patterns, while 1 ϵ is a lower bound the probability (1 γ) of choosing one of the first l patterns. mx is the arithmetic mean (the center) of the first l patterns. mmax is the maximal distance of the patterns to the center mx . p is the probability p normalized for the first l patterns. The variance of the first l patterns is Var p[x1:l] = i=1 pi xi x T i Lemma A8. With the definitions in Eq. (193) to Eq. (200), the following bounds on the norm J 2 of the Jacobian of the fixed point iteration hold. The γ-bound for J 2 is J 2 β (1 γ) m2 max + γ 2 (2 γ) M 2 (201) and the ϵ-bound for J 2 is: J 2 β m2 max + ϵ 2 (2 ϵ) M 2 . (202) Proof. The variance Var p[x1:l] can be expressed as: (1 γ) Var p[x1:l] = i=1 pi xi x T i + Pl i=1 pi (1 γ)2 i=1 pi xi x T i 1 1 γ i=1 pi xi x T i i=1 pi xi x T i Therefore, we have i=1 pi xi x T i = (1 γ) Var p[x1:l] + γ 1 γ We now can reformulate the Jacobian J: i=1 pi xi x T i + i=l+1 pi xi x T i (205) i=1 pi xi + i=l+1 pi xi i=1 pi xi + i=l+1 pi xi i=1 pi xi x T i i=l+1 pi xi x T i i=l+1 pi xi i=l+1 pi xi i=l+1 pi xi i=l+1 pi xi (1 γ) Var p[x1:l] + γ 1 γ i=l+1 pi xi x T i i=l+1 pi xi i=l+1 pi xi i=l+1 pi xi i=l+1 pi xi The spectral norm of an outer product of two vectors is the product of the Euclidean norms of the vectors: ab T 2 = q λmax(ba T ab T ) = a q λmax(bb T ) = a b , (206) since bb T has eigenvector b/ b with eigenvalue b 2 and otherwise zero eigenvalues. We now bound the norms of some matrices and vectors: i=1 pi xi (1 γ) M , (207) i=l+1 pi xi i=l+1 pi xi γ M , (208) i=l+1 pi xi x T i i=l+1 pi xi x T i 2 = i=l+1 pi xi 2 i=l+1 pi M 2 = γ M 2 . (209) In order to bound the variance of the first l patterns, we compute the vector a that minimizes i=1 pi xi a 2 = i=1 pi(xi a)T (xi a) . (210) The solution to i=1 pi(a xi) = 0 (211) i=1 pixi . (212) The Hessian of f is positive definite since i=1 pi I = 2 I (213) and f is a convex function. Hence, the mean i=1 pi xi (214) minimizes PN i=1 pi xi a 2. Therefore, we have i=1 pi xi x 2 i=1 pi xi mx 2 (1 γ) m2 max . (215) We now bound the variance on the first l patterns: (1 γ) Var p[x1:l] 2 i=1 pi (xi x) (xi x)T 2 (216) i=1 pi xi x 2 i=1 pi xi mx 2 (1 γ) m2 max . 
We obtain for the spectral norm of J: J 2 β (1 γ) Var p[x1:l] 2 (217) i=l+1 pi xi x T i i=l+1 pi xi i=l+1 pi xi i=l+1 pi xi i=l+1 pi xi β (1 γ) Var p[x1:l] 2 + γ (1 γ) M 2 + γ M 2 + γ2 M 2 + γ (1 γ) M 2 + γ (1 γ) M 2 = β (1 γ) Var p[x1:l] 2 + γ 2 (2 γ) M 2 . Combining the previous two estimates immediately leads to Eq. (201). The function h(x) = x2(2 x) has the derivative h (x) = 4(1 x). Therefore, h(x) is monotone increasing for x < 1. For 0 γ ϵ < 1, we can immediately deduce that γ2(2 γ) ϵ2(2 ϵ). Since ϵ is larger than γ, we obtain the following ϵ-bound for J 2: J 2 β m2 max + ϵ 2 (2 ϵ) M 2 . (218) We revisit the bound on (1 γ) Var p[x1:l]. The trace Pd k=1 ek is the sum of the eigenvalues ek. The spectral norm is equal to the largest eigenvalue e1, that is, the largest singular value. We obtain: Var p[x1:l] 2 = Tr i=1 pi (xi x) (xi x)T ! k=2 ek (219) i=1 pi Tr (xi x) (xi x)T i=1 pi xi x 2 Therefore, the tightness of the bound depends on eigenvalues which are not the largest. That is variations which are not along the strongest variation weaken the bound. Proof of a fixed point by Banach Fixed Point Theorem. Without restricting the generality, we assume that the first l patterns are much more probable (and similar to one another) than the other patterns. Therefore, we define: M := max i xi , (220) i=l+1 pi ϵ , (221) i=1 pi 1 ϵ , (222) pi := pi 1 γ pi/(1 ϵ) , (223) i=1 pi = 1 , (224) i=1 xi , (225) mmax = max 1 i l xi mx . (226) M is an upper bound on the Euclidean norm of the patterns, which are vectors. ϵ is an upper bound on the probability γ of not choosing one of the first l patterns, while 1 ϵ is a lower bound the probability (1 γ) of choosing one of the first l patterns. mx is the arithmetic mean (the center) of the first l patterns. mmax is the maximal distance of the patterns to the center mx . p is the probability p normalized for the first l patterns. Mapped vectors stay in a compact environment. We show that if mx is sufficient dissimilar to other xj with l < j then there is an compact environment of mx (a sphere) where the fixed point iteration maps this environment into itself. The idea of the proof is to define a sphere around mx for which the points from the sphere are mapped by f into the sphere. We first need following lemma which bounds the distance mx f(ξ) of a ξ which is close to mx. Lemma A9. For a query ξ and data X = (x1, . . . , x N), we define 0 c = min j,l 2 β N + 1 β ln 2 (N 1) N β M 2 , (301) the inequality follows from following master inequality i 2 β N + 1 β ln 2 N 2 β M 2 , (302) If we assume that Si Sj = with i = j, then the triangle inequality with a point from the intersection gives xi xj 2 β N M . (303) Therefore, we have using the Cauchy-Schwarz inequality: i x T i (xi xj) xi xi xj M 2 β N M = 2 β N . (304) The last inequality is a contraction to Eq. (302) if we assume that 1 < 2 (N 1) N β M 2 . (305) With this assumption, the spheres Si and Sj do not intersect. Therefore, each xi has its separate fixed point in Si. We define min = min 1 i N i (306) to obtain the master inequality min 2 β N + 1 β ln 2 N 2 β M 2 . (307) Patterns on a sphere. For simplicity and in accordance with the results of the classical Hopfield network, we assume all patterns being on a sphere with radius M: i : xi = M . (308) Under assumption Eq. (305) we have only to show that the master inequality Eq. (307) is fulfilled for each xi to have a separate fixed point near each xi. We defined αij as the angle between xi and xj. 
The minimal angle αmin between two data points is αmin = min 1 i 0 such that κd 1 2 δ(d 1) 1. Then Pr(N 2 d 1 αmin δ) 1 κd 1 2 δd 1 . (324) Proof. The statement of the lemma is Eq. (3-6) from Proposition 3.5 in Brauchart et al. (2018). Next we derive upper and lower bounds on the constant κd since we require them later for proving storage capacity bounds. Lemma A14. For κd defined in Eq. (323) we have the following bounds for every d 1: e π d κd exp(1/12) 2 π d < 1 . (325) Proof. We use for x > 0 the following bound related to Stirling s approximation formula for the gamma function, c.f. (Olver et al., 2010, (5.6.1)): 1 < Γ(x) (2 π) 1 2 x 1 2 x exp(x) < exp 1 Using Stirling s formula Eq. (326), we upper bound κd: κd = 1 d π Γ((d + 1)/2) Γ(d/2) < 1 d π exp 1 6(d+1) exp d+1 = 1 d π e exp 1 6(d + 1) For the first inequality, we applied Eq. (326), while for the second we used (1 + 1 d)d < e for d 1. Next, we lower bound κd by again applying Stirling s formula Eq. (326): κd = 1 d π Γ((d + 1)/2) Γ(d/2) > 1 d π exp d+1 = 1 d π e exp 1 d 2 1 exp 1 where the last inequality holds because of monotonicity of (1 + 1 d)d and using the fact that for d = 1 it takes on the value 2. We require a bound on cos to bound the master inequality Eq. (311). Lemma A15. For 0 x π the function cos can be upper bounded by: cos(x) 1 x2 Proof. We use the infinite product representation of cos, c.f. (Olver et al., 2010, (4.22.2)): Since it holds that (2n 1)2 π2 1 (331) for |x| π and n 2, we can get the following upper bound on Eq. (330): 9 π2 + 16 x4 9 π4 1 40 x2 9 π2 + 16 x2 The last but one inequality uses x π, which implies x/π 1. Thus Eq. (329) is proven. Exponential storage capacity: the base c as a function of the parameter β, the radius of the sphere M, the probability p, and the dimension d of the space. We express the number N of stored patterns by an exponential function with base c > 1 and an exponent linear in d. We derive constraints on he base c as a function of β, the radius of the sphere M, the probability p that all patterns can be stored, and the dimension d of the space. With β > 0, K > 0, and d 2 (to ensure a sphere), the following theorem gives our main result. Theorem A5 (Storage Capacity (Main): Random Patterns). We assume a failure probability 0 < p 1 and randomly chosen patterns on the sphere with radius M := K d 1. We define a := 2 d 1 (1 + ln(2 β K2 p (d 1))) , b := 2 K2 β c := b W0(exp(a + ln(b)) , (333) where W0 is the upper branch of the Lambert W function (Olver et al., 2010, (4.13)) and ensure 4 d 1 . (334) Then with probability 1 p, the number of random patterns that can be stored is Therefore it is proven for c 3.1546 with β = 1, K = 3, d = 20 and p = 0.001 (a + ln(b) > 1.27) and proven for c 1.3718 with β = 1, K = 1, d = 75, and p = 0.001 (a + ln(b) < 0.94). Proof. We consider the probability that the master inequality Eq. (311) is fulfilled: Pr M 2(1 cos(αmin))) 2 β N + 1 β ln 2 N 2 β M 2 1 p . (336) Using Eq. (329), we have: 1 cos(αmin) 1 5 α2 min . (337) Therefore, with probability 1 p the storage capacity is largest N that fulfills Pr M 2 α2 min 5 2 β N + 1 β ln 2 N 2 β M 2 1 p . (338) This inequality is equivalent to N 2 d 1 αmin β ln 2 N 2 β M 2 1 1 p . (339) We use Eq. (324) to obtain: N 2 d 1 αmin β ln 2 N 2 β M 2 1 2 N 2 M (d 1) 2 β ln 2 N 2 β M 2 d 1 For Eq. (339) to be fulfilled, it is sufficient that 2 N 2 M (d 1) 2 β ln 2 N 2 βM 2 d 1 2 p 0 . (341) If we insert the assumption Eq. (334) of the theorem into Eq. (335), then we obtain N 2. 
We now apply the upper bound κd 1/2 < κd 1 < 1 from Eq. (325) and the upper bound 2 βN 1 N 2 to inequality Eq. (341). In the resulting inequality we insert N = pc d 1 4 to check whether it is fulfilled with this special value of N and obtain: 2 M (d 1) 1 β ln 2 p c d 1 2 p . (342) Dividing by p, inserting M = K d 1, and exponentiation of the left and right side by 2 d 1 gives: 5 c K2 (d 1) β ln 2 β c d 1 2 p K2 (d 1) 1 0 . (343) After some algebraic manipulation, this inequality can be written as a c + c ln(c) b 0 , (344) where we used a := 2 d 1 (1 + ln(2 β K2 p (d 1))) , b := 2 K2 β We determine the value ˆc of c which makes the inequality Eq. (344) equal to zero. We solve a ˆc + ˆc ln(ˆc) b = 0 (345) a ˆc + ˆc ln(ˆc) b = 0 (346) a + ln(ˆc) = b/ˆc a + ln(b) + ln(ˆc/b) = b/ˆc b/ˆc + ln(b/ˆc) = a + ln(b) b/ˆc exp(b/ˆc) = exp(a + ln(b)) b/ˆc = W0(exp(a + ln(b))) ˆc = b W0(exp(a + ln(b)) , where W0 is the upper branch of the Lambert W function (see Def. A6). Hence, the solution is ˆc = b W0(exp(a + ln(b)) . (347) The solution exist, since the Lambert function W0(x) (Olver et al., 2010, (4.13)) is defined for 1/e < x and we have 0 < exp(a + ln(b). Since ˆc fulfills inequality Eq. (344) and therefore also Eq. (342), we have a lower bound on the storage capacity N: Next we aim at a lower bound on c which does not use the Lambert W function (Olver et al., 2010, (4.13)). Therefore, we upper bound W0(exp(a + ln(b)) to obtain a lower bound on c, therefore, also a lower bound on the storage capacity N. The lower bound is given in the next corollary. Corollary A1. We assume a failure probability 0 < p 1 and randomly chosen patterns on the sphere with radius M = K d 1. We define a := 2 d 1 (1 + ln(2 β K2 p (d 1))) , b := 2 K2 β Using the omega constant Ω 0.56714329 we set b ln Ωexp(a + ln(b)) + 1 Ω(1 + Ω) 1 for a + ln(b) 0 , b (a + ln(b)) a + ln(b) a + ln(b) + 1 for a + ln(b) > 0 (349) 4 d 1 . (350) Then with probability 1 p, the number of random patterns that can be stored is Examples are c 3.1444 for β = 1, K = 3, d = 20 and p = 0.001 (a + ln(b) > 1.27) and c 1.2585 for β = 1 K = 1, d = 75, and p = 0.001 (a + ln(b) < 0.94). Proof. We lower bound the c defined in Theorem A5. According to (Hoorfar & Hassani, 2008, Theorem 2.3) we have for any real u and y > 1 W0(exp(u)) ln exp(u) + y To upper bound W0(x) for x [0, 1], we set y = 1/W0(1) = 1/Ω= exp Ω= 1/ ln Ω 1.76322 , (353) where the Omega constant Ωis (et t)2 + π2 1 0.56714329 . (354) See for these equations the special values of the Lambert W function in Lemma A31. We have the upper bound on W0: W0(exp(u)) ln exp(u) + 1/Ω 1 + ln(1/Ω) = ln Ωexp(u) + 1 At the right hand side of interval [0, 1], we have u = 0 and exp(u) = 1 and get: = ln (Ω) = Ω= W0(1) . (356) Therefore, the bound is tight at the right hand side of of interval [0, 1], that is for exp(u) = 1, i.e. u = 0. We have derived an bound for W0(exp(u)) with exp(u) [0, 1] or, equivalently, u [ , 0]. We obtain from Hoorfar & Hassani (2008, Corollary 2.6) the following bound on W0(exp(u)) for 1 < exp(u), or, equivalently 0 < u: W0(exp(u)) u u 1 + u . (357) A lower bound on ˆc is obtained via the upper bounds Eq. (357) and Eq. (355) on W0 as W0 > 0. We set u = a + ln(b) and obtain W0(exp(a + ln(b))) ln Ωexp(a + ln(b)) + 1 Ω(1 + Ω) 1 for a + ln(b) 0 , (a + ln(b)) a + ln(b) a + ln(b) + 1 for a + ln(b) > 0 (358) We insert this bound into Eq. (347), the solution for ˆc, to obtain the statement of the theorem. 
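The closed form for the base c in Theorem A5 is straightforward to evaluate. The sketch below uses scipy.special.lambertw (principal branch W0) to reproduce the two example values of c quoted above; `capacity_base` is our helper name, and the factor 1/5 in b = 2 β K²/5 is the constant inherited from the cosine bound Eq. (329), consistent with the quoted examples.

```python
import numpy as np
from scipy.special import lambertw

def capacity_base(beta, K, d, p):
    """Base c of the exponential storage capacity, Theorem A5 (Eq. 333)."""
    a = 2.0 / (d - 1) * (1.0 + np.log(2.0 * beta * K**2 * p * (d - 1)))
    b = 2.0 * beta * K**2 / 5.0
    # c = b / W0(exp(a + ln b)), with W0 the upper branch of the Lambert W function.
    return b / np.real(lambertw(np.exp(a + np.log(b)), k=0))

# Example values quoted in the theorem:
print(capacity_base(beta=1.0, K=3.0, d=20, p=0.001))   # ~ 3.1546
print(capacity_base(beta=1.0, K=1.0, d=75, p=0.001))   # ~ 1.3718

# Capacity lower bound N >= sqrt(p) * c**((d-1)/4) as used in the proof (Eq. 335).
for K, d in [(3.0, 20), (1.0, 75)]:
    c = capacity_base(1.0, K, d, 0.001)
    print("d =", d, " stored patterns N >=", np.sqrt(0.001) * c**((d - 1) / 4))
```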
Exponential storage capacity: the dimension d of the space as a function of the parameter β, the radius of the sphere M, and the probability p. We express the number N of stored patterns by an exponential function with base c > 1 and an exponent linear in d. We derive constraints on the dimension d of the space as a function of β, the radius of the sphere M, the probability p that all patterns can be stored, and the base of the exponential storage capacity. The following theorem gives this result. Theorem A6 (Storage Capacity (d computed): Random Patterns). We assume a failure probability 0 < p 1 and randomly chosen patterns on the sphere with radius M = K d 1. We define 5 c , b := 1 + ln 2 p β K2 , d = 1 + 1 a W(a exp( b)) for a = 0 , 1 + exp( b) for a = 0 , (359) where W is the Lambert W function (Olver et al., 2010, (4.13)). For 0 < a the function W is the upper branch W0 and for a < 0 we use the lower branch W 1. If we ensure that e a exp( b) , (360) then with probability 1 p, the number of random patterns that can be stored is Proof. We consider the probability that the master inequality Eq. (311) is fulfilled: Pr M 2(1 cos(αmin))) 2 β N + 1 β ln 2 N 2 β M 2 1 p . (362) Using Eq. (329), we have: 1 cos(αmin) 1 5 α2 min . (363) Therefore, with probability 1 p the storage capacity is largest N that fulfills Pr M 2 α2 min 5 2 β N + 1 β ln 2 N 2 β M 2 1 p . (364) This inequality is equivalent to N 2 d 1 αmin β ln 2 N 2 β M 2 1 1 p . (365) We use Eq. (324) to obtain: N 2 d 1 αmin β ln 2 N 2 β M 2 1 2 N 2 M (d 1) 2 β ln 2 N 2 β M 2 d 1 For Eq. (365) to be fulfilled, it is sufficient that 2 N 2 M (d 1) 2 β ln 2 N 2 βM 2 d 1 2 p 0 . (367) If we insert the assumption Eq. (360) of the theorem into Eq. (361), then we obtain N 2. We now apply the upper bound κd 1/2 < κd 1 < 1 from Eq. (325) and the upper bound 2 βN 1 N 2 to inequality Eq. (367). In the resulting inequality we insert N = pc d 1 4 to check whether it is fulfilled with this special value of N and obtain: 2 M (d 1) 1 β ln 2 p c d 1 2 p . (368) Dividing by p, inserting M = K d 1, and exponentiation of the left and right side by 2 d 1 gives: 5 c K2 (d 1) β ln 2 β c d 1 2 p K2 (d 1) 1 0 . (369) This inequality Eq. (369) can be reformulated as: 1 + ln 2 p β c d 1 2 K2 (d 1) (d 1) K2 β 5 c 0 . (370) 5 c , b := 1 + ln 2 p β K2 , we write inequality Eq. (370) as ln(d 1) + a (d 1) + b 0 . (372) We determine the value ˆd of d which makes the inequality Eq. (372) equal to zero. We solve ln( ˆd 1) + a ( ˆd 1) + b = 0 . (373) For a = 0 we have ln( ˆd 1) + a ( ˆd 1) + b = 0 (374) a ( ˆd 1) + ln( ˆd 1) = b ( ˆd 1) exp(a ( ˆd 1)) = exp( b) a ( ˆd 1) exp(a ( ˆd 1)) = a exp( b) a ( ˆd 1) = W(a exp( b)) a W(a exp( b)) a W(a exp( b)) , where W is the Lambert W function (see Def. A6). For a > 0 we have to use the upper branch W0 of the Lambert W function and for a < 0 we use the lower branch W 1 of the Lambert W function (Olver et al., 2010, (4.13)). We have to ensure that 1/e a exp( b) for a solution to exist. For a = 0 we have ˆd = 1 + exp( b). Hence, the solution is a W(a exp( b)) . (375) Since ˆd fulfills inequality Eq. (369) and therefore also Eq. (368), we have a lower bound on the storage capacity N: Corollary A2. We assume a failure probability 0 < p 1 and randomly chosen patterns on the sphere with radius M = K d 1. We define 5 c , b := 1 + ln 2 p β K2 , a ( ln( a) + b) , (377) e a exp( b) , a < 0 , (378) then with probability 1 p, the number of random patterns that can be stored is Setting β = 1, K = 3, c = 2 and p = 0.001 yields d < 24. Proof. 
For a < 0 the Eq. (359) from Theorem (A6) can be written as d = 1 + W 1(a exp( b)) a = 1 + W 1( exp ( ( ln( a) + b 1) 1)) From Alzahrani & Salem (2018, Theorem 3.1) we get the following bound on W 1: e e 1 (u + 1) < W 1( exp( u 1)) < (u + 1) . (381) for u > 0. We apply Eq. (381) to Eq. (380) with u = ln( a) + b 1. Since a < 0 we get d > 1 + ln( a) + b Storage capacity for the expected minimal separation instead of the probability that all patterns can be stored. In contrast to the previous paragraph, we want to argue about the storage capacity for the expected minimal separation. Therefore, we will use the following bound on the expectation of αmin (minimal angle), which gives also a bound on the expected of min (minimal separation): Lemma A16 (Proposition 3.6 in Brauchart et al. (2018)). We have the following lower bound on the expectation of αmin: E h N 2 d 1 αmin i 2(d 1) π Γ( d 1 ! 1 d 1 Γ(1 + 1 d 1) d 1 d 1 Γ(2 + 1 d 1) := Cd 1. (383) The bound is valid for all N 2 and d 2. Let us start with some preliminary estimates. First of all we need some asymptotics for the constant Cd 1 in Eq. (383): Lemma A17. The following estimate holds for d 2: Cd 1 ln(d + 1) Proof. The recursion formula for the Gamma function is (Olver et al., 2010, (5.5.1)): Γ(x + 1) = x Γ(x) . (385) We use Eq. (325) and the fact that d 1 d 1 for d 1 to obtain: d) 1 d Γ(1 + 1 d) (d + 1) 1 d) 1 d (d + 1) 1 d > (d + 1) 1 d (386) d ln(d + 1)) 1 1 d ln(d + 1) , where in the last step we used the elementary inequality exp(x) 1 + x, which follows from the mean value theorem. The next theorem states the number of stored patterns for the expected minimal separation. Theorem A7 (Storage Capacity (expected separation): Random Patterns). We assume patterns on the sphere with radius M = K d 1 that are randomly chosen. Then for all values c 1 for which 1 5 (d 1) K2 c 1(1 ln(d 1) β ln 2 c d 1 2 β (d 1) K2 (387) holds, the number of stored patterns for the expected minimal separation is at least The inequality Eq. (387) is e.g. fulfilled with β = 1, K = 3, c = 2 and d 17. Proof. Instead of considering the probability that the master inequality Eq. (311) is fulfilled we now consider whether this inequality is fulfilled for the expected minimal distance. We consider the expectation of the minimal distance min: E[ min] = E[M 2(1 cos(αmin)))] = M 2(1 E[cos(αmin))]) . (389) For this expectation, the master inequality Eq. (311) becomes M 2(1 E[cos(αmin))]) 2 β N + 1 β ln 2 N 2 β M 2 . (390) We want to find the largest N that fulfills this inequality. We apply Eq. (329) and Jensen s inequality to deduce the following lower bound: 1 E[cos(αmin)] 1 5 E α2 min 1 5 E[αmin]2 . (391) Now we use Eq. (383) and Eq. (384) to arrive at E[αmin]2 N 4 d 1 E[N 2 d 1 αmin]2 N 4 d 1 C2 d 1 N 4 d 1 (1 ln(d 1) (d 1) )2 , (392) for sufficiently large d. Thus in order to fulfill Eq. (390), it is enough to find values that satisfy Eq. (387). A.1.6.2 Retrieval of Patterns with One Update and Small Retrieval Error. Retrieval of a pattern xi for fixed point x i and query ξ is defined via an ϵ by f(ξ) x i < ϵ, that is, the update is ϵ-close to the fixed point. The update rule retrieves a pattern with one update for well separated patterns, that is, i is large. Theorem A8 (Pattern Retrieval with One Update). With query ξ, after one update the distance of the new point f(ξ) to the fixed point x i is exponentially small in the separation i. 
The precise bounds using the Jacobian J = f(ξ) ξ and its value Jm in the mean value theorem are: f(ξ) x i Jm 2 ξ x i , (393) Jm 2 2 β N M 2 (N 1) exp( β ( i 2 max{ ξ xi , x i xi } M)) . (394) For given ϵ and sufficient large i, we have f(ξ) x i < ϵ, that is, retrieval with one update. Proof. From Eq. (180) we have Jm 2 2 β N M 2 (N 1) exp( β ( i 2 max{ ξ xi , x i xi } M)) . (395) After every iteration the mapped point f(ξ) is closer to the fixed point x i than the original point xi: f(ξ) x i Jm 2 ξ x i . (396) For given ϵ and sufficient large i, we have f(ξ) x i < ϵ, since Jm 2 foes exponentially fast to zero with increasing i. We want to estimate how large i is. For xi we have: i = min j,j =i x T i xi x T i xj = x T i xi max j,j =i x T i xj . (397) To estimate how large i is, assume vectors x Rd and y Rd that have as components standard normally distributed values. The expected value of the separation of two points with normally distributed components is E x T x x T y = j=1 E x2 j + j=1 E [yj] = d . (398) The variance of the separation of two points with normally distributed components is Var x T x x T y = E h x T x x T y 2i d2 (399) j=1 E x4 j + j=1,k=1,k =j E x2 j E x2 k 2 j=1 E x3 j E [yj] j=1,k=1,k =j E x2 j E [xk] E [yk] + j=1 E x2 j E y2 j + j=1,k=1,k =j E [xj] E [yj] E [xk] E [yk] d2 = 3 d + d (d 1) + d d2 = 3 d . The expected value for the separation of two random vectors gives: Jm 2 2 β N M 2 (N 1) exp( β (d 2 max{ ξ xi , x i xi } M)) . (400) For the exponential storage we set M = 2 d 1. We see the Lipschitz constant Jm 2 decreases exponentially with the dimension. Therefore, f(ξ) x i is exponentially small after just one update. Therefore, the fixed point is well retrieved after one update. The retrieval error decreases exponentially with the separation i. Theorem A9 (Exponentially Small Retrieval Error). The retrieval error f(ξ) xi of pattern xi is bounded by f(ξ) xi 2 (N 1) exp( β ( i 2 max{ ξ xi , x i xi } M)) M (401) and for xi x i 1 2 β M together with xi ξ 1 2 β M by xi x i 2 e (N 1) M exp( β i) . (402) Proof. We compute the retrieval error which is just f(ξ) xi . From Lemma A4 we have xi f(ξ) 2 ϵ M , (403) From Eq. (179) we have ϵ = (N 1) exp( β ( i 2 max{ ξ xi , x i xi } M)) . (404) For xi x i 1 2 β M and xi ξ 1 2 β M Eq. (404) gives ϵ e (N 1) M exp( β i) . (405) A.1.7 LEARNING ASSOCIATIONS We consider three cases of learning associations, i.e. three cases of how sets are associated. (i) Non of the sets is mapped in an associative space. The raw state pattern rn is the state (query) pattern ξn, i.e. ξn = rn, and the raw stored pattern ys is the stored pattern (key), i.e. xs = ys. (ii) Either one of the sets is mapped to the space of the other set or an association matrix is learned. (iia) The state patterns are equal to the raw patterns, i.e. ξn = rn, and raw stored patterns are mapped via W to the space of the state patterns, i.e. xs = W ys. (iib) The stored patterns are equal to the raw patterns, i.e. xs = ys, and raw state patterns are mapped via W to the space of the stored patterns, i.e. ξn = W T rn. (iic) The matrix W is an association matrix. We will compute the derivative of the new state pattern with respect to W , which is valid for all sub-cases (iib) (iic). (iii) Both set of patterns are mapped in a common associative space. A raw state pattern rn is mapped by WQ to a state pattern (query) ξn, that is ξn = WQrn. A raw stored pattern ys is mapped via WK to stored pattern (key) xs, that is xs = WKys. 
We will compute the derivative of the new state pattern with respect to both WQ and WK. A.1.7.1 Association of Raw Patterns No Mapping in an Associative Space. The sets are associated via their raw patterns, i.e. the raw state pattern rn is the state (query) pattern ξn, i.e. ξn = rn, and raw stored pattern ys is the stored pattern (key), i.e. xs = ys. There is no mapping in an associative space. The update rule is ξnew = X p , (406) where we used p = softmax(β XT ξ) . (407) The derivative with respect to ξ is ξ = β X diag(p) pp T XT (408) The derivative with respect to X is X = a p T + β X diag(p) pp T (ξT a) . (409) These derivatives allow to apply the chain rule if a Hopfield layer is integrated into a deep neural network. A.1.7.2 Learning an Association Matrix Only One Set is Mapped in an Associative Space. Only one of the sets R or Y is mapped in the space of the patterns of the other set. Case (a): the state patterns are equal to the raw patterns ξn = rn and raw stored patterns are mapped via W to the space of the state patterns, i.e. xs = W ys. Case (b): the stored patterns are equal to the raw patterns xs = ys and raw state patterns are mapped via W to the space of the stored patterns, i.e. ξn = W T rn. Case (c): the matrix W associates the sets R and Y . This case also includes that W T = W T KWQ, which is treated in next subsection. The next subsection focuses on a low rank approximation of W by defining the dimension dk of associative space and use the matrices W T K and WQ to define W , or equivalently to map R and Y into the associative space. From a mathematical point of view all these case are equal as they lead to the same update rule. Therefore, we consider in the following Case (a) with xs = W ys and ξn = rn. Still, the following formula are valid for all three cases (a) (c). The update rule is ξnew = W Y p , (410) where we used p = softmax(β Y T W T ξ) . (411) We consider the state (query) pattern ξ with result ξnew: ξnew = W Y p = W Y softmax(β Y T W T ξ) (412) For multiple updates this update rule has to be used. However for a single update, or the last update we consider a simplified update rule. Since new state vector ξnew is projected by a weight matrix WV to another vector, we consider the simplified update rule: ξnew = Y p = Y softmax(β Y T W T ξ) (413) The derivative with respect to W is ξnew = ξnew (W T ξ) (W T ξ) ξnew . (414) (W T ξ) = β Y diag(p) pp T Y T (415) ξnew = a . (416) We have the product of the 3-dimensional tensor (W T ξ) W with the vector a which gives a 2dimensional tensor, i.e. a matrix: ξnew = (W T ξ) W a = ξT a I . (417) W = β Y diag(p) pp T Y T (ξT a) = J (ξT a) , (418) where J is the Jacobian of the update rule defined in Eq. (59). To obtain the derivative of the full update rule Eq. (412) we have to add the term a p T Y T (419) and include the factor W to get W = a p T Y T + β W Y diag(p) pp T Y T (ξT a) (420) = a p T Y T + W J (ξT a) . A.1.7.3 Learning Two Association Mappings Both Sets are Mapped in an Associative Space. Both sets R and Y are mapped in an associative space. Every raw state pattern rn is mapped via WQ to a state pattern (query) ξn = WQrn. Every raw stored pattern ys is mapped via WK to a stored pattern (key) xs = WKys. In the last subsection we considered a single matrix W . For W T = W T KWQ we have the case of the last subsection. However in this subsection we are looking for a low rank approximation of W . 
Toward this end we define the dimension dk of associative space and use the matrices W T K and WQ to map to the associative space. The update rule is ξnew = X p , (421) where we used p = softmax(β XT ξ) . (422) We consider raw state patterns rn that are mapped to state patterns ξn = WQrn with QT = Ξ = WQR and raw stored pattern ys that are mapped to stored patterns xs = WKys with KT = X = WKY . The update rule is ξnew = WK Y p = WK Y softmax(β Y T W T KWQ r) . (423) Since new state vector ξnew is projected by a weight matrix WV to another vector, we consider the simplified update rule: ξnew = Y p = Y softmax(β Y T W T KWQ r) . (424) For the simplified update rule, the vector ξnew does not live in the associative space but in the space of raw stored pattern y. However WK would map it to the associative space. Derivative with respect to WQ. The derivative with respect to WQ is ξnew = ξnew (WQ r) (WQ r) ξnew . (425) (WQ r) = β Y diag(p) pp T Y T W T K (426) ξnew = a . (427) We have the product of the 3-dimensional tensor (WQr) WQ with the vector a which gives a 2dimensional tensor, i.e. a matrix: ξnew = (WQ r) WQ a = r T a I . (428) WQ = β Y diag(p) pp T Y T W T K(r T a) = J W T K(r T a) , (429) where J is the Jacobian of the update rule defined in Eq. (59). To obtain the derivative of the full update rule Eq. (423) we have to include the factor WK, then get WQ = β WK Y diag(p) pp T Y T W T K(r T a) = WK J W T K(r T a) . (430) Derivative with respect to WK. The derivative with respect to WK is ξnew = ξnew (W T KWQ r) (W T KWQ r) WK ξnew . (431) (W T KWQ r) = β Y diag(p) pp T Y T (432) ξnew = a . (433) We have the product of the 3-dimensional tensor (W r) WK with the vector a which gives a 2-dimensional tensor, i.e. a matrix: (W T KWQ r) WK ξnew = (W T KWQ r) WK a = W T Q r T a I . (434) WK = β Y diag(p) pp T Y T (W T Q r T a) = J (W T Q r T a) , (435) where J is the Jacobian of the update rule defined in Eq. (59). To obtain the derivative of the full update rule Eq. (423) we have to add the term a p T Y T (436) and to include the factor WK, then get WK = a p T Y T + β WK Y diag(p) pp T Y T (W T Q r T a) (437) = a p T Y T + WK J (W T Q r T a) . A.1.8 INFINITE MANY PATTERNS AND FORGETTING PATTERNS In the next subsection we show how the new Hopfield networks can be used for auto-regressive tasks by causal masking. In the following subsection, we introduce forgetting to the new Hopfield networks by adding a negative value to the softmax which is larger if the pattern was observed more in the past. A.1.8.1 Infinite Many Patterns. The new Hopfield networks can be used for auto-regressive tasks, that is time series prediction and similar. Causal masking masks out the future by a large negative value in the softmax. We assume to have infinite many stored patterns (keys) x1, x2, . . . that are represented by the infinite matrix X = (x1, x2, . . . , ) . (438) The pattern index is now a time index, that is, we observe xt at time t. The pattern matrix at time t is Xt = (x1, x2, . . . , xt) . (439) The query at time t is ξt. For Mt = max1 i t xt , the energy function at time t is Et Et = lse(β, XT t ξt) + 1 2ξT t ξt + β 1 ln t + 1 2M 2 t (440) i=1 exp(βx T i ξt) 2ξT t ξt + β 1 ln t + 1 2M 2 t . (441) The update rule is ξnew t = Xt pt = Xt softmax(β XT t ξt) , (442) where we used pt = softmax(β XT t ξt) . (443) We can use an infinite pattern matrix with an infinite softmax when using causal masking. The pattern matrix at time t is Xt = (x1, x2, . . . , xt, αξt, αξt, . . .) , (444) with the query ξt and α . 
The energy function at time t is Et Et = lse(β, XT t ξt) + 1 2ξT t ξt + β 1 ln t + 1 2M 2 t (445) i=1 exp(βx T i ξt) + i=t+1 exp( βα ξt 2) 2ξT t ξt + (446) β 1 ln t + 1 For α and ξt > 0 this becomes Et = lse(β, XT t ξt) + 1 2ξT t ξt + β 1 ln t + 1 2M 2 t (447) i=1 exp(βx T i ξt) 2ξT t ξt + β 1 ln t + 1 2M 2 t . (448) A.1.8.2 Forgetting Patterns. We introduce forgetting to the new Hopfield networks by adding a negative value in the softmax which increases with patterns that are more in the past. We assume to have infinite many patterns x1, x2, . . . that are represented by the infinite matrix X = (x1, x2, . . . , ) . (449) The pattern index is now a time index, that is, we observe xt at time t. The pattern matrix at time t is Xt = (x1, x2, . . . , xt) . (450) The query at time t is ξt. The energy function with forgetting parameter γ at time t is Et Et = lse(β, XT t ξt γ(t 1, t 2, . . . , 0)T ) + 1 2ξT t ξt + β 1 ln t + 1 2M 2 t (451) i=1 exp(βx T i ξt γ(t i)) 2ξT t ξt + β 1 ln t + 1 2M 2 t . (452) The update rule is ξnew t = Xt pt = Xt softmax(βXT t ξt) , (453) where we used pt = softmax(βXT t ξt) . (454) A.1.9 NUMBER OF SPURIOUS STATES The energy E is defined as E = lse(β, XT ξ) + 1 2ξT ξ + β 1 ln N + 1 i=1 exp(βx T i ξ) + β 1 ln N + 1 2M 2 . (456) Since the negative exponential function is strict monotonic decreasing, exp( E) has minima, where E has maxima, and has maxima, where as has minima E. exp( E) = exp(lse(β, XT ξ)) exp( 1 2ξT ξ) C (457) i=1 exp(βx T i ξ) i=1 exp(βx T i ξ) !β 1 exp( β 1 i=1 exp(β (x T i ξ 1 2 β x T i xi 1 2 β (ξ xi)T (ξ xi)) i=1 λ(xi, β) G(ξ; xi, β 1 I) where C is a positive constant, λ(xi, β) = exp( 1 2βx T i xi) and G(ξ; xi, β 1I) is the Gaussian with mean xi and covariance matrix β 1I. Since C is a positive constant and xβ 1 = exp(β 1 ln x) is strict monotonic for positive x, the minima of E are the maxima of i=1 λ(xi, β) G(ξ; xi, β 1 I) . (458) In Carreira-Perpiñán & Williams (2003) it was shown that Eq. (458) can have more than N modes, that is, more than N maxima. A.2 PROPERTIES OF SOFTMAX, LOG-SUM-EXPONENTIAL, LEGENDRE TRANSFORM, LAMBERT W FUNCTION For β > 0, the softmax is defined as Definition A1 (Softmax). p = softmax(βx) (459) pi = [softmax(βx)]i = exp(βxi) P k exp(βxk) . (460) We also need the log-sum-exp function (lse), defined as Definition A2 (Log-Sum-Exp Function). lse(β, x) = β 1 ln i=1 exp(βxi) We can formulate the lse in another base: βa = β ln a , (462) lse(β, x) = β 1 ln i=1 exp(β xi) = (βa ln a) 1 ln i=1 exp(βa ln a xi) = (βa) 1 loga i=1 aβa xi ! In particular, the base a = 2 can be used to speed up computations. Next, we give the relation between the softmax and the lse function. Lemma A18. The softmax is the gradient of the lse: softmax(βx) = xlse(β, x) . (464) In the next lemma we report some important properties of the lse function. Lemma A19. We define L := z T x β 1 N X i=1 zi ln zi (465) with L z T x. The lse is the maximum of L on the N-dimensional simplex D with D = {z | P i zi = 1, 0 zi}: lse(β, x) = max z D z T x β 1 N X i=1 zi ln zi . (466) The softmax p = softmax(βx) is the argument of the maximum of L on the N-dimensional simplex D with D = {z | P i zi = 1, 0 zi}: p = softmax(βx) = arg max z D z T x β 1 N X i=1 zi ln zi . (467) Proof. Eq. (466) is obtained from Equation (8) in Gao & Pavel (2017) and Eq. (467) from Equation (11) in Gao & Pavel (2017). From a physical point of view, the lse function represents the free energy in statistical thermodynamics (Gao & Pavel, 2017). 
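Lemma A18 and Lemma A19 are easy to verify numerically. The following sketch (central finite differences with a step size of our choosing; no claim about the paper's implementation) checks that softmax(β x) matches the gradient of lse(β, x) and that the variational formula of Lemma A19 attains its maximum over the simplex at z = softmax(β x).

```python
import numpy as np

beta = 0.7
rng = np.random.default_rng(4)
x = rng.normal(size=6)

def lse(beta, x):
    # numerically stable log-sum-exp: beta^-1 * ln sum_i exp(beta * x_i)
    m = np.max(beta * x)
    return (m + np.log(np.sum(np.exp(beta * x - m)))) / beta

def softmax(beta, x):
    e = np.exp(beta * x - np.max(beta * x))
    return e / e.sum()

# Lemma A18: softmax(beta*x) is the gradient of lse(beta, x).
eps = 1e-6
grad_fd = np.array([(lse(beta, x + eps * np.eye(6)[i]) - lse(beta, x - eps * np.eye(6)[i]))
                    / (2 * eps) for i in range(6)])
assert np.allclose(grad_fd, softmax(beta, x), atol=1e-5)

# Lemma A19: lse(beta, x) = max_{z in simplex} z^T x - beta^-1 sum_i z_i ln z_i,
# with the maximum attained at z = softmax(beta*x).
def L(z):
    return z @ x - np.sum(z * np.log(z)) / beta

p = softmax(beta, x)
assert np.isclose(L(p), lse(beta, x), atol=1e-10)
z = softmax(1.0, rng.normal(size=6))        # some other point in the simplex
assert L(z) <= lse(beta, x) + 1e-12
```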
Next we consider the Jacobian of the softmax and its properties. Lemma A20. The Jacobian Js of the softmax p = softmax(βx) is Js = softmax(βx) x = β diag(p) pp T , (468) which gives the elements [Js]ij = βpi(1 pi) for i = j βpipj for i = j . (469) Next we show that Js has eigenvalue 0. Lemma A21. The Jacobian Js of the softmax function p = softmax(βx) has a zero eigenvalue with eigenvector 1. j,j =i pipj j pj) = 0 . (470) Next we show that 0 is the smallest eigenvalue of Js, therefore Js is positive semi-definite but not (strict) positive definite. Lemma A22. The Jacobian Js of the softmax p = softmax(βξ) is symmetric and positive semidefinite. Proof. For an arbitrary z, we have z T diag(p) pp T z = X The last inequality hold true because the Cauchy-Schwarz inequality says (a T a)(b T b) (a T b)2, which is the last inequality with ai = zi pi and bi = pi. Consequently diag(p) pp T is positive semi-definite. Alternatively P i piz2 i (P i pizi)2 can be viewed as the expected second moment minus the mean squared which gives the variance that is larger equal to zero. The Jacobian is 0 < β times a positive semi-definite matrix, which is a positive semi-definite matrix. Moreover, the softmax is a monotonic map, as described in the next lemma. Lemma A23. The softmax softmax(βx) is monotone for β > 0, that is, (softmax(βx) softmax(βx ))T (x x ) 0 . (472) Proof. We use the version of mean value theorem Lemma A32 with the symmetric matrix Jm s = R 1 0 Js(λx + (1 λ)x ) dλ: softmax(x) softmax(x ) = Jm s (x x ) . (473) (softmax(x) softmax(x ))T (x x ) = (x x )T Jm s (x x ) 0 , (474) since Jm s is positive semi-definite. For all λ the Jacobians Js(λx + (1 λ)x ) are positive semi-definite according to Lemma A22. Since x T Jm s x = Z 1 0 x T Js(λx + (1 λ)x ) x dλ 0 (475) is an integral over positive values for every x, Jm s is positive semi-definite, too. Next we give upper bounds on the norm of Js. Lemma A24. For a softmax p = softmax(βx) with m = maxi pi(1 pi), the spectral norm of the Jacobian Js of the softmax is bounded: Js 2 2 m β , (476) Js 1 2 m β , (477) Js 2 m β . (478) In particular everywhere holds 2 β . (479) If pmax = maxi pi 1 ϵ 0.5, then for the spectral norm of the Jacobian holds Js 2 2 ϵ β 2 ϵ2 β < 2 ϵ β . (480) Proof. We consider the maximum absolute column sum norm A 1 = max j i |aij| (481) and the maximum absolute row sum norm j |aij| . (482) We have for A = Js = β diag(p) pp T j |aij| = β pi(1 pi) + X j,j =i pipj = β pi (1 2pi + X j pj) (483) = 2 β pi (1 pi) 2 m β , i |aij| = β pj (1 pj) + X i,i =j pjpi = β pj (1 2pj + X i pi) (484) = 2 β pj (1 pj) 2 m β . Therefore, we have Js 1 2 m β , (485) Js 2 m β , (486) Js 1 Js 2 m β . (487) The last inequality is a direct consequence of Hölder s inequality. For 0 pi 1, we have pi(1 pi) 0.25. Therefore, m 0.25 for all values of pi. If pmax 1 ϵ 0.5 (ϵ 0.5), then 1 pmax ϵ and for pi = pmax pi ϵ. The derivative x(1 x)/ x = 1 2x > 0 for x < 0.5, therefore x(1 x) increases with x for x < 0.5. Using x = 1 pmax and for pi = pmax x = pi, we obtain pi(1 pi) ϵ(1 ϵ) for all i. Consequently, we have m ϵ(1 ϵ). Using the bounds on the norm of the Jacobian, we give some Lipschitz properties of the softmax function. Lemma A25. The softmax function p = softmax(βx) is (β/2)-Lipschitz. The softmax function p = softmax(βx) is (2βm)-Lipschitz in a convex environment U for which m = maxx U maxi pi(1 pi). For pmax = minx U maxi pi = 1 ϵ, the softmax function p = softmax(βx) is (2βϵ)-Lipschitz. 
For β < 2m, the softmax p = softmax(βx) is contractive in U on which m is defined. Proof. The version of mean value theorem Lemma A32 states for the symmetric matrix Jm s = R 1 0 J(λx + (1 λ)x ) dλ: softmax(x) softmax(x ) = Jm s (x x ) . (488) According to Lemma A24 for all x = λx + (1 λ)x ) Js( x) 2 2 m β , (489) where m = maxi pi(1 pi). Since x U and x U we have x U, since U is convex. For m = maxx U maxi pi(1 pi) we have m m for all m. Therefore, we have Js( x) 2 2 m β (490) which also holds for the mean: Jm s 2 2 m β . (491) softmax(x) softmax(x ) Jm s 2 x x 2 m β x x . (492) From Lemma A24 we know m 1/4 globally. For pmax = minx U maxi pi = 1 ϵ we have according to Lemma A24: m ϵ. For completeness we present a result about cocoercivity of the softmax: Lemma A26. For m = maxx U maxi pi(1 pi), softmax function p = softmax(βx) is 1/(2mβ)- cocoercive in U, that is, (softmax(x) softmax(x ))T (x x ) 1 2 m β softmax(x) softmax(x ) . (493) In particular the softmax function p = softmax(βx) is (2/β)-cocoercive everywhere. With pmax = minx U maxi pi = 1 ϵ, the softmax function p = softmax(βx) is 1/(2βϵ)-cocoercive in U. Proof. We apply the Baillon-Haddad theorem (e.g. Theorem 1 in Gao & Pavel (2017)) together with Lemma A25. Finally, we introduce the Legendre transform and use it to describe further properties of the lse. We start with the definition of the convex conjugate. Definition A3 (Convex Conjugate). The Convex Conjugate (Legendre-Fenchel transform) of a function f from a Hilbert Space X to [ , ] is f which is defined as f (x ) = sup x X (x T x f(x)) , x X (494) See page 219 Def. 13.1 in Bauschke & Combettes (2017) and page 134 in Garling (2017). Next we define the Legendre transform, which is a more restrictive version of the convex conjugate. Definition A4 (Legendre Transform). The Legendre transform of a convex function f from a convex set X Rn to R (f : X R) is f , which is defined as f (x ) = sup x X (x T x f(x)) , x X , (495) X = x Rn | sup x X (x T x f(x)) < . (496) See page 91 in Boyd & Vandenberghe (2009). Definition A5 (Epi-Sum). Let f and g be two functions from X to ( , ], then the infimal convolution (or epi-sum) of f and g is f g : X [ , ] , x 7 inf y X (f(y) + g(x y)) (497) See Def. 12.1 in Bauschke & Combettes (2017). Lemma A27. Let f and g be functions from X to ( , ]. Then the following hold: 1. Convex Conjugate of norm squared 1 2 . 2 . (498) 2. Convex Conjugate of a function multiplied by scalar 0 < α R (α f) = α f (./α) . (499) 3. Convex Conjugate of the sum of a function and a scalar β R (f + β) = f β . (500) 4. Convex Conjugate of affine transformation of the arguments. Let A be a non-singular matrix and b a vector (f (Ax + b)) = f A T x b T A T x . (501) 5. Convex Conjugate of epi-sums (f g) = f + g . (502) Proof. 1. Since h(t) := t2 2 is a non-negative convex function and h(t) = 0 t = 0 we have because of Proposition 11.3.3 in Garling (2017) that h ( x ) = h ( x ). Additionally, by example (a) on page 137 we get for 1 < p < and 1 q = 1 that |t|p q . Putting all together we get the desired result. The same result can also be deduced from page 222 Example 13.6 in Bauschke & Combettes (2017). 2. Follows immediately from the definition since = α sup x X α f(x) = sup x X (x T x αf(x)) = (αf) (x ) 3. (f + β) := supx X x T x f(x) β =: f β (f (Ax + b)) (x ) = sup x X x T x f (Ax + b) (Ax + b)T A T x f (Ax + b) b T A T x y T A T x f (y) b T A T x = f A T x b T A T x 5. 
From Proposition 13.24 (i) in Bauschke & Combettes (2017) and Proposition 11.4.2 in Garling (2017) we get (f g) (x ) = sup x X x T x inf y X (f(y) g(x y)) = sup x,y X x T x f(y) g(x y) = sup x,y X y T x f(y) + (x y)T x g(x y) = f (x ) + g (x ) Lemma A28. The Legendre transform of the lse is the negative entropy function, restricted to the probability simplex and vice versa. For the log-sum exponential i=1 exp(xi) the Legendre transform is the negative entropy function, restricted to the probability simplex: f (x ) = Pn i=1 x i ln(x i ) for 0 x i and Pn i=1 x i = 1 otherwise . (504) For the negative entropy function, restricted to the probability simplex: f(x) = Pn i=1 xi ln(xi) for 0 xi and Pn i=1 xi = 1 otherwise . (505) the Legendre transform is the log-sum exponential f (x ) = ln i=1 exp(x i ) Proof. See page 93 Example 3.25 in Boyd & Vandenberghe (2009) and (Gao & Pavel, 2017). If f is a regular convex function (lower semi-continuous convex function), then f = f according to page 135 Exercise 11.2.3 in Garling (2017). If f is lower semi-continuous and convex, then f = f according to Theorem 13.37 (Fenchel-Moreau) in Bauschke & Combettes (2017). The log-sum-exponential is continuous and convex. Lemma A29. Let XXT be non-singular and X a Hilbert space. We define X = n a | 0 XT XXT 1 a , 1T XT XXT 1 a = 1 o . (507) Xv = a | a = XT ξ , ξ X . (508) The Legendre transform of lse(β, XT ξ) with ξ X is lse(β, XT ξ) (ξ ) = (lse(β, v)) XT XXT 1 ξ , (509) with ξ X and v Xv. The domain of lse(β, XT ξ) is X . Furthermore we have lse(β, XT ξ) = lse(β, XT ξ) . (510) Proof. We use the definition of the Legendre transform: lse(β, XT ξ) (ξ ) = sup ξ X ξT ξ lse(β, XT ξ) (511) XT ξ T XT XXT 1 ξ lse(β, XT ξ) = sup v Xv v T XT XXT 1 ξ lse(β, v) = sup v Xv v T v lse(β, v) = (lse(β, v)) (v ) = (lse(β, v)) XT XXT 1 ξ , where we used v = XT XXT 1 ξ . According to page 93 Example 3.25 in Boyd & Vandenberghe (2009), the equations for the maximum maxv Xv v T v lse(β, v) are solvable if and only if 0 < v = XT XXT 1 ξ and 1T v = 1T XT XXT 1 ξ = 1. Therefore, we assumed ξ X . The domain of lse(β, XT ξ) is X , since on page 93 Example 3.25 in Boyd & Vandenberghe (2009) it was shown that outside X the supv Xv v T v lse(β, v) is not bounded. p = softmax(βXT ξ) , (512) the Hessian of lse(β, XT ξ) 2lse(β, XT ξ) ξ2 = β X diag(p) pp T XT (513) is positive semi-definite since diag(p) pp T is positive semi-definite according to Lemma A22. Therefore, lse(β, XT ξ) is convex and continuous. If f is a regular convex function (lower semi-continuous convex function), then f = f according to page 135 Exercise 11.2.3 in Garling (2017). If f is lower semi-continuous and convex, then f = f according to Theorem 13.37 (Fenchel-Moreau) in Bauschke & Combettes (2017). Consequently we have lse(β, XT ξ) = lse(β, XT ξ) . (514) We introduce the Lambert W function and some of its properties, since it is needed to derive bounds on the storage capacity of our new Hopfield networks. Definition A6 (Lambert Function). The Lambert W function (Olver et al., 2010, (4.13)) is the inverse function of f(y) = yey . (515) The Lambert W function has an upper branch W0 for 1 y and a lower branch W 1 for y 1. We use W if a formula holds for both branches. We have W(x) = y yey = x . (516) We present some identities for the Lambert W function (Olver et al., 2010, (4.13)): Lemma A30. 
Identities for the Lambert W function are W(x) e W (x) = x , (517) W(xex) = x , (518) e W (x) = x W(x) , (519) e W (x) = W(x) en W (x) = x W(x) W0 (x ln x) = ln x for x 1 W 1 (x ln x) = ln x for x 1 W(x) = ln x W(x) for x 1 = n W(x) for n, x > 0 , (525) W(x) + W(y) = W x y 1 W(x) + 1 W(y) for x, y > 0 , (526) = ln x for 0 < x e , (527) = ln x for x > e , (528) e W ( ln x) = W( ln x) ln x for x = 1 . (529) We also present some special values for the Lambert W function (Olver et al., 2010, (4.13)): Lemma A31. W(0) = 0 , (530) W(e) = 1 , (531) = 1 , (532) W e1+e = e , (533) W (2 ln 2) = ln 2 , (534) W(1) = Ω, (535) W(1) = e W (1) = ln 1 W(1) = ln W(1) , (536) W( 1) 0.31813 + 1.33723i , (538) where the Omega constant Ωis (et t)2 + π2 1 0.56714329 . (539) We need in some proofs a version of the mean value theorem as given in the next lemma. Lemma A32 (Mean Value Theorem). Let U Rn be open, f : U Rm continuously differentiable, and x U as well as h Rn vectors such that the line segment x + th for 0 t 1 is in U. Then the following holds: f(x + h) f(x) = Z 1 0 J(x + t h) dt h , (540) where J is the Jacobian of f and the integral of the matrix is component-wise. Proof. Let f1, . . . , fm denote the components of f and define gi : [0, 1] R by gi(t) = fi(x + t h) , (541) then we obtain fi(x + h) fi(x) = gi(1) gi(0) = Z 1 0 g (t) dt (542) fi xj (x + t h) hj fi xj (x + t h) dt hj . The statement follows since the Jacobian J has as entries fi A.3 MODERN HOPFIELD NETWORKS: BINARY STATES (KROTOV AND HOPFIELD) A.3.1 MODERN HOPFIELD NETWORKS: INTRODUCTION A.3.1.1 Additional Memory and Attention for Neural Networks. Modern Hopfield networks may serve as additional memory for neural networks. Different approaches have been suggested to equip neural networks with an additional memory beyond recurrent connections. The neural Turing machine (NTM) is a neural network equipped with an external memory and an attention process (Graves et al., 2014). The NTM can write to the memory and can read from it. A memory network (Weston et al., 2014) consists of a memory together with the components: (1) input feature map (converts the incoming input to the internal feature representation) (2) generalization (updates old memories given the new input), (3) output feature map (produces a new output), (4) response (converts the output into the response format). Memory networks are generalized to an end-to-end trained model, where the arg max memory call is replaced by a differentiable softmax (Sukhbaatar et al., 2015a;b). Linear Memory Network use a linear autoencoder for sequences as a memory (Carta et al., 2020). To enhance RNNs with additional associative memory like Hopfield networks have been proposed (Ba et al., 2016a;b). The associative memory stores hidden states of the RNN, retrieves stored states if they are similar to actual ones, and has a forgetting parameter. The forgetting and storing parameters of the RNN associative memory have been generalized to learned matrices (Zhang & Zhou, 2017). LSTMs with associative memory via Holographic Reduced Representations have been proposed (Danihelka et al., 2016). Recently most approaches to new memories are based on attention. The neural Turing machine (NTM) is equipped with an external memory and an attention process (Graves et al., 2014). End to end memory networks (EMN) make the attention scheme of memory networks (Weston et al., 2014) differentiable by replacing arg max through a softmax (Sukhbaatar et al., 2015a;b). 
EMN with dot products became very popular and implement a key-value attention (Daniluk et al., 2017) for self-attention. An enhancement of EMN is the transformer (Vaswani et al., 2017a;b) and its extensions (Dehghani et al., 2018). The transformer had great impact on the natural language processing (NLP) community as new records in NLP benchmarks have been achieved (Vaswani et al., 2017a;b). MEMO uses the transformer attention mechanism for reasoning over longer distances (Banino et al., 2020). Current state-of-the-art for language processing is a transformer architecture called the Bidirectional Encoder Representations from Transformers (BERT) (Devlin et al., 2018; 2019). A.3.1.2 Modern Hopfield networks: Overview. The storage capacity of classical binary Hopfield networks (Hopfield, 1982) has been shown to be very limited. In a d-dimensional space, the standard Hopfield model can store d uncorrelated patterns without errors but only Cd/ ln(d) random patterns with C < 1/2 for a fixed stable pattern or C < 1/4 if all patterns are stable (Mc Eliece et al., 1987). The same bound holds for nonlinear learning rules (Mazza, 1997). Using tricks-of-trade and allowing small retrieval errors, the storage capacity is about 0.138d (Crisanti et al., 1986; Hertz et al., 1991; Torres et al., 2002). If the learning rule is not related to the Hebb rule then up to d patterns can be stored (Abu-Mostafa & St Jacques, 1985). Using Hopfield networks with non-zero diagonal matrices, the storage can be increased to Cd ln(d) (Folli et al., 2017). In contrast to the storage capacity, the number of energy minima (spurious states, stable states) of Hopfield networks is exponentially in d (Tanaka & Edwards, 1980; Bruck & Roychowdhury, 1990; Wainrib & Touboul, 2013). Recent advances in the field of binary Hopfield networks (Hopfield, 1982) led to new properties of Hopfield networks. The stability of spurious states or metastable states was sensibly reduced by a Hamiltonian treatment for the new relativistic Hopfield model (Barra et al., 2018). Recently the storage capacity of Hopfield networks could be increased by new energy functions. Interaction functions of the form F(x) = xn lead to storage capacity of αndn 1, where αn depends on the allowed error probability (Krotov & Hopfield, 2016; 2018; Demircigil et al., 2017) (see (Krotov & Hopfield, 2018) for the non-binary case). Interaction functions of the form F(x) = xn lead to storage capacity of αn dn 1 cn ln d for cn > 2(2n 3)!! (Demircigil et al., 2017). Interaction functions of the form F(x) = exp(x) lead to exponential storage capacity of 2d/2 where all stored patterns are fixed points but the radius of attraction vanishes (Demircigil et al., 2017). It has been shown that the network converges with high probability after one update (Demircigil et al., 2017). A.3.2 ENERGY AND UPDATE RULE FOR BINARY MODERN HOPFIELD NETWORKS We follow (Demircigil et al., 2017) where the goal is to store a set of input data x1, . . . , x N that are represented by the matrix X = (x1, . . . , x N) . (543) The xi is pattern with binary components xij { 1, +1} for all i and j. ξ is the actual state of the units of the Hopfield model. Krotov and Hopfield (Krotov & Hopfield, 2016) defined the energy function E with the interaction function F that evaluates the dot product between patterns xi and the actual state ξ: i=1 F ξT xi (544) with F(a) = an, where n = 2 gives the energy function of the classical Hopfield network. This allows to store αndn 1 patterns (Krotov & Hopfield, 2016). 
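To make the role of the interaction function F concrete, the following sketch (our own illustration; the function names and the small example setting are ours) evaluates the energy −Σi F(ξT xi) for binary patterns with F(a) = a² (classical Hopfield network) and F(a) = exp(a) (modern Hopfield network), and performs one greedy asynchronous sweep that lowers the exponential energy; the formal update rule Tj that minimizes this energy is given next in the text.

import torch

def binary_hopfield_energy(xi, X, F):
    # E = - sum_i F(xi^T x_i) for binary state xi and stored patterns X (one pattern per row)
    return -F(X @ xi).sum()

d, N = 32, 10
X = torch.randint(0, 2, (N, d)).float() * 2 - 1     # N patterns in {-1, +1}^d
xi = X[0].clone()
xi[:5] *= -1                                        # corrupt the first pattern slightly

F_poly = lambda a: a ** 2                           # classical interaction function (n = 2)
F_exp = torch.exp                                   # exponential interaction (Demircigil et al., 2017)

print(binary_hopfield_energy(xi, X, F_poly))
print(binary_hopfield_energy(xi, X, F_exp))

# one asynchronous sweep: flip component j whenever this lowers the exponential energy
for j in range(d):
    flipped = xi.clone()
    flipped[j] *= -1
    if binary_hopfield_energy(flipped, X, F_exp) < binary_hopfield_energy(xi, X, F_exp):
        xi = flipped
print(torch.equal(xi, X[0]))                        # typically True: the corrupted pattern is retrieved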
Krotov and Hopfield (Krotov & Hopfield, 2016) suggested for minimizing this energy an asynchronous updating dynamics T = (Tj) for component ξj: Tj(ξ) := sgn h N X l =j xil ξl F xij + X l =j xil ξl i (545) While Krotov and Hopfield used F(a) = an, Demircigil et al. (Demircigil et al., 2017) went a step further and analyzed the model with the energy function F(a) = exp(a), which leads to an exponential storage capacity of N = 2d/2. Furthermore with a single update the final pattern is recovered with high probability. These statements are given in next theorem. Theorem A10 (Storage Capacity for Binary Modern Hopfield Nets (Demircigil et al. 2017)). Consider the generalized Hopfield model with the dynamics described in Eq. (545) and interaction function F given by F(x) = ex. For a fixed 0 < α < ln(2)/2 let N = exp (αd) + 1 and let x1, . . . , x N be N patterns chosen uniformly at random from { 1, +1}d. Moreover fix ϱ [0, 1/2). For any i and any exi taken uniformly at random from the Hamming sphere with radius ϱd centered in xi, S(xi, ϱd), where ϱd is assumed to be an integer, it holds that Pr ( i j : Tj (exi) = xij) 0 , if α is chosen in dependence of ϱ such that α < I(1 2ϱ) 2 ((1 + a) ln(1 + a) + (1 a) ln(1 a)) . Proof. The proof can be found in Demircigil et al. (2017). The number of patterns N = exp (αd) + 1 is exponential in the number d of components. The result Pr ( i j : Tj (exi) = xij) 0 means that one update for each component is sufficient to recover the pattern with high probability. The constraint α < I(1 2ϱ) 2 on α gives the trade-off between the radius of attraction ϱd and the number N = exp (αd) + 1 of pattern that can be stored. Theorem A10 in particular implies that Pr ( i j : Tj (xi) = xij) 0 as d , i.e. with a probability converging to 1, all the patterns are fixed points of the dynamics. In this case we can have α I(1) 2 = ln(2)/2. Krotov and Hopfield define the update dynamics Tj(ξ) in Eq. (545) via energy differences of the energy in Eq. (544). First we express the energy in Eq. (544) with F(a) = exp(a) (Demircigil et al., 2017) by the lse function. Then we use the mean value theorem to express the update dynamics Tj(ξ) in Eq. (545) by the softmax function. For simplicity, we set β = 1 in the following. There exists a v [ 1, 1] with Tj(ξ) = sgn h E(ξj = 1) + E(ξj = 1) i = sgn h exp(lse(ξj = 1)) exp(lse(ξj = 1)) i = sgn h (2ej)T ξE(ξj = v) i = sgn h exp(lse(ξj = v)) (2ej)T lse(ξj = v) = sgn h exp(lse(ξj = 1)) (2ej)T Xsoftmax(XT ξ(ξj = v)) i = sgn h [Xsoftmax(XT ξ(ξj = v))]j i = sgn h [Xp(ξj = v)]j i , where ej is the Cartesian unit vector with a one at position j and zeros elsewhere, [.]j is the projection to the j-th component, and p = softmax(XT ξ) . (547) A.4 HOPFIELD UPDATE RULE IS ATTENTION OF THE TRANSFORMER The Hopfield network update rule is the attention mechanism used in transformer and BERT models (see Fig. A.2). To see this, we assume N stored (key) patterns yi and S state (query) patterns ri that are mapped to the Hopfield space of dimension dk. We set xi = W T Kyi, ξi = W T Q ri, and multiply the result of our update rule with WV . The matrices Y = (y1, . . . , y N)T and R = (r1, . . . , r S)T combine the yi and ri as row vectors. We define the matrices XT = K = Y WK, ΞT = Q = RWQ, and V = Y WKWV = XT WV , where WK Rdy dk, WQ Rdr dk, WV Rdk dv. If β = 1/ dk and softmax RN is changed to a row vector, we obtain for the update rule Eq. (3) multiplied by WV : softmax 1/ p dk Q KT V = softmax β RWQ W T KY T Y WKWV . (548) The left part of Eq. 
(548) is the transformer attention. Besides the attention mechanism, Hopfield networks allow for other functionalities in deep network architectures, which we introduce via specific layers in the next section. The right part of Eq. (548) serves as the starting point for these specific layers.
Figure A.2: We generalized the energy of binary modern Hopfield networks to allow continuous states while keeping the fast convergence and storage capacity properties. For the new energy we also defined a new update rule that minimizes the energy. The new update rule is the attention mechanism of the transformer. Formulae are modified to express the softmax as a row vector, as for transformers. The "="-sign means "keeps the properties".
A.5 EXPERIMENTS
A.5.1 EXPERIMENT 1: ATTENTION IN TRANSFORMERS DESCRIBED BY HOPFIELD DYNAMICS
A.5.1.1 Analysis of operating modes of the heads of a pre-trained BERT model. We analyzed pre-trained BERT models from Hugging Face Inc. (Wolf et al., 2019) according to these operating classes. In Fig. A.3 in the appendix the distribution of the pre-trained bert-base-cased model is depicted (for other models see appendix Section A.5.1.4). Operating classes (II) (large metastable states) and (IV) (small metastable states) are often observed in the middle layers. Operating class (I) (averaging over a very large number of patterns) is abundant in lower layers. Similar observations have been reported in other studies (Toneva & Wehbe, 2019a;b; Tay et al., 2020). Operating class (III) (medium metastable states) is predominant in the last layers.
A.5.1.2 Experimental Setup. Transformer architectures are known for their high computational demands. To investigate the learning dynamics of such a model while keeping the training time manageable, we adopted the BERT-small setting from ELECTRA (Clark et al., 2020). It has 12 layers, 4 heads and a reduced hidden size, the sequence length is shortened from 512 to 128 tokens, and the batch size is reduced from 256 to 128. Additionally, the hidden dimension is reduced from 768 to 256 and the embedding dimension is reduced from 768 to 128 (Clark et al., 2020). The training of such a BERT-small model for 1.45 million update steps takes roughly four days on a single NVIDIA V100 GPU.
Figure A.3: Analysis of operating modes of the heads of a pre-trained BERT model. For each head in each layer, the distribution of the minimal number k of patterns required to sum up the softmax values to 0.90 is displayed as a violin plot in a panel. k indicates the size of a metastable state. The bold number in the center of each panel gives the median k of the distribution. The heads in each layer are sorted according to k. Attention heads belong to the class they mainly operate in.
Class (IV) in blue: Small metastable state or fixed point close to a single pattern, which is abundant in the middle layers (6, 7, and 8). Class (II) in orange: Large metastable state, which is prominent in the middle layers (3, 4, and 5). Class (I) in red: Very large metastable state or global fixed point, which is predominant in the first layer. These heads can potentially be replaced by averaging operations. Class (III) in green: Medium metastable state, which is frequently observed in higher layers. We hypothesize that these heads are used to collect the information required to perform the respective task. These heads should be the main target when improving transformer and BERT models.
As the code base we use the transformers repository from Hugging Face, Inc. (Wolf et al., 2019). We aim to reproduce the dataset of Devlin et al. (2019) as closely as possible, which consists of the English Wikipedia dataset and the Toronto Book Corpus dataset (Zhu et al., 2015). Due to recent copyright claims the latter is not publicly available anymore. Therefore, the pre-training experiments use an uncased snapshot of the original Book Corpus dataset.
A.5.1.3 Hopfield Operating Classes of Transformer and BERT Models. To better understand how the operating modes of attention heads develop, we tracked the distribution of the counts k (see main paper) over time in a BERT-small model. At the end of training we visualized the count distribution, grouped into four classes (see Figure A.4). The thresholds for the classes were chosen according to the thresholds of Figure 2 in the main paper. However, they are divided by a factor of 4 to adapt to the shorter sequence length of 128 compared to 512. From this plot it is clear that the heads of Class IV commit very early to the operating class of small metastable states.
A.5.1.4 Learning Dynamics of Transformer and BERT Models. To observe this behavior in the early phase of training, we created a ridge plot of the distributions of the counts k for the first 20,000 steps (see Figure A.5 (a)). This plot shows that heads of the middle layers often change their operating mode to Class IV around 9,000 to 10,000 steps. At the same time the second big drop in the loss occurs. The question arises whether this is functionally important or whether it is an artefact which could even be harmful. To check whether the attention mechanism is still able to learn after the change of operating mode, we analyzed the gradient flow through the softmax function. For every token we calculate the Frobenius norm of the Jacobian of the softmax over multiple samples. Then, for every head we plot the distribution of the norm (see Figure A.5 (b)). The gradients with respect to the weights are determined by the Jacobian J defined in Eq. (59), as can be seen in Eq. (418), Eq. (429), and Eq. (435). We can see that the attention in heads of Class IV remains almost unchanged during the rest of the training.
A.5.1.5 Attention Heads Replaced by Gaussian Averaging Layers. The self-attention mechanism proposed in Vaswani et al. (2017a) utilizes the softmax function to compute the coefficients of a convex combination over the embedded tokens, where the softmax is conditioned on the input. However, our analysis showed that especially in lower layers many heads perform averaging over a very large number of patterns. This suggests that at this level neither the dependency on the input nor a fine-grained attention to individual positions is necessary.
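The operating mode of a head is determined from the count k of Figure A.3, i.e. the minimal number of softmax values that sum to 0.90. A minimal sketch of how such a count can be computed from a head's attention weights (our own illustration; the function name metastable_size is ours):

import torch

def metastable_size(attention, threshold=0.90):
    # Minimal number k of patterns whose largest softmax values sum to `threshold`.
    # attention: softmax weights of one head, shape (..., seq_len), rows sum to 1.
    sorted_p, _ = attention.sort(dim=-1, descending=True)
    cumulative = sorted_p.cumsum(dim=-1)
    # number of prefix sums below the threshold, plus one, gives the minimal k
    return (cumulative < threshold).sum(dim=-1) + 1

# usage: attention probabilities of one head for a few tokens
probs = torch.softmax(torch.randn(4, 128), dim=-1)
print(metastable_size(probs))   # small k: small metastable state (Class IV); large k: averaging (Class I)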
As an alternative to the original mechanism we propose Gaussian averaging heads, which are computationally more efficient. Here, the softmax function is replaced by a discrete Gaussian kernel, where the location µ and the scale σ are learned. In detail, for a sequence length of N tokens we are given a vector of location parameters µ = (µ1, . . . , µN)T and a vector of corresponding scale parameters σ = (σ1, . . . , σN)T. We subdivide the interval [−1, 1] into N equidistant supporting points {sj}, j = 1, . . . , N, where
sj = ((j − 1) − 0.5 (N − 1)) / (0.5 (N − 1)) .
The attention [A]i,j from the i-th token to the j-th position is calculated as
[A]i,j = exp(−(sj − µi)² / (2 σi²)) / zi ,
where zi normalizes the i-th row of the attention matrix A to sum up to one:
zi = Σj exp(−(sj − µi)² / (2 σi²)) .
For initialization we uniformly sample a location vector µ ∈ [−1, 1]N and a scale vector σ ∈ [0.75, 1.25]N per head. A simple way to take the individual position of each token into account at initialization is to use the supporting points, µi = si (see Figure A.6). In practice no difference to the random initialization was observed.
Number of parameters. Gaussian averaging heads can reduce the number of parameters significantly. For an input size of N tokens, there are 2N parameters per head. In contrast, a standard self-attention head with word embedding dimension dy and projection dimension dk has two weight matrices WQ, WK ∈ Rdk×dy, which together amount to 2 dk dy parameters. As a concrete example, the BERT-base model from Devlin et al. (2019) has an embedding dimension dy = 768, a projection dimension dk = 64, and a sequence length of N = 512. Compared to the Gaussian head, in this case (2 · 768 · 64)/(2 · 512) = 96 times more parameters are trained for the attention mechanism itself. Only for very long sequences (and given that the word embedding dimension stays the same) may the dependence on N become a disadvantage. But of course, due to the independence from the input, the Gaussian averaging head is less expressive than the original attention mechanism. A recently proposed input-independent replacement for self-attention is the so-called Random Synthesizer (Tay et al., 2020). Here the softmax attention is directly parametrized with an N × N matrix, which amounts to 0.5 N times as many parameters as Gaussian averaging.
Figure A.4: Left: Ridge plots of the distribution of the counts k over time for BERT-small. Right: Violin plot of the counts k after 1,450,000 steps, divided into the four classes from the main paper. The thresholds were adapted to the shorter sequence length.
Figure A.5: (a): The change of the count density during training is depicted for the first 20,000 steps. (b): The corresponding distribution of the Frobenius norm of the Jacobian of the softmax function is depicted. The gradients with respect to the weights are determined by the Jacobian J defined in Eq. (59), as can be seen in Eq. (418), Eq. (429), and Eq. (435).
Figure A.6: Attentions of a Gaussian averaging head at initialization for sequence length N = 128. Every line depicts one Gaussian kernel. Here, the location parameters are initialized with the value of the supporting points, µi = si.
A.5.2 EXPERIMENT 2: MULTIPLE INSTANCE LEARNING DATASETS
A.5.2.1 Immune Repertoire Classification. An architecture called DeepRC, which is based on our modern Hopfield networks, is applied to immune repertoire classification and compared to other machine learning approaches.
For DeepRC, we consider immune repertoires as input objects, which are represented as bags of instances. In a bag, each instance is an immune receptor sequence, and each bag can contain a large number of sequences. At its core, DeepRC consists of a modern Hopfield network that extracts information from each repertoire. The stored patterns (keys) are representations of the immune amino acid sequences (instances) that are obtained by a 1D convolutional network with position encoding. Each state pattern (query) is static and learned via backpropagation. For details see Widrich et al. (2020a;b).
Our new Hopfield network has been integrated into a deep learning architecture for immune repertoire classification, a massive multiple instance learning task (Widrich et al., 2020a;b). Theorem 3 states that modern Hopfield networks possess an exponential storage capacity, which enables them to tackle massive multiple instance learning (MIL) problems (Dietterich et al., 1997). Immune repertoire classification (Emerson et al., 2017) typically requires extracting a few patterns from a large set of sequences, the repertoire, that are indicative of the respective immune status. Most MIL methods fail due to the large number of instances. Data are obtained from experimentally observed immune receptors as well as from simulated sequences, into which sequence motifs (Akbar et al., 2019; Weber et al., 2020) are implanted with low yet varying frequency. Four different categories of datasets are constructed: (a) simulated immunosequencing data with implanted motifs, (b) immunosequencing data generated by a long short-term memory (LSTM) network with implanted motifs, (c) real-world immunosequencing data with implanted motifs, and (d) real-world immunosequencing data with known immune status (Emerson et al., 2017). Categories (a), (b), and (d) contain approx. 300,000 instances per immune repertoire. With over 30 billion sequences in total, this represents one of the largest multiple instance learning experiments ever conducted (Carbonneau et al., 2018). Despite the massive number of instances as well as the low frequency of sequences indicative of the respective immune status, deep learning architectures with modern Hopfield networks outperform all competing methods with respect to the average area under the ROC curve in all four categories, (a), (b), (c), and (d) (for details see Widrich et al. (2020a)).
We evaluate and compare the performance of DeepRC to a set of machine learning methods that serve as baselines, were suggested for, or can readily be adapted to immune repertoire classification. The methods comprise (i) known motif, which counts how often the known implanted motifs occur, (ii) a Support Vector Machine (SVM) approach that uses a fixed mapping from a bag of sequences to the corresponding k-mer counts and uses the MinMax and the Jaccard kernel, (iii) k-Nearest Neighbor (KNN) with the k-mer representation, transforming the MinMax and the Jaccard kernel to distances, (iv) logistic regression on the k-mer representation, (v) the burden test, which first identifies sequences or k-mers and then computes a burden score per individual, and (vi) logistic multiple instance learning (lMIL). On the real-world dataset DeepRC achieved an AUC of 0.832 ± 0.022, followed by the SVM with MinMax kernel (AUC 0.825 ± 0.022) and the burden test with an AUC of 0.699 ± 0.041. Overall, across all datasets DeepRC outperformed all competing methods with respect to average AUC (see Widrich et al. (2020a;b)).
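A minimal sketch of the pooling operation at the core of DeepRC, assuming a single head and omitting the 1D convolutional sequence embedding: the instance representations act as stored (key) patterns, and a static, learned state (query) pattern retrieves a weighted average of the indicative instances. Class and parameter names are ours, not the DeepRC implementation.

import torch
import torch.nn as nn

class HopfieldBagPooling(nn.Module):
    # Pools a variable-sized bag of instance embeddings with a static, learned query.
    def __init__(self, instance_dim, assoc_dim, beta=1.0):
        super().__init__()
        self.key_proj = nn.Linear(instance_dim, assoc_dim, bias=False)   # W_K
        self.query = nn.Parameter(torch.randn(assoc_dim))                # static state (query) pattern
        self.beta = beta

    def forward(self, instances):                  # instances: (num_instances, instance_dim)
        keys = self.key_proj(instances)            # (num_instances, assoc_dim)
        weights = torch.softmax(self.beta * keys @ self.query, dim=0)
        return weights @ instances                 # weighted average of the bag -> (instance_dim,)

pool = HopfieldBagPooling(instance_dim=32, assoc_dim=16)
bag = torch.randn(300_000, 32)                     # a repertoire with many instances
representation = pool(bag)                         # fixed-size representation of the whole bag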
Table A.1 reports the average performance on the simulated immunosequencing datasets (last column) and the performance on the datasets of the remaining three categories. DeepRC outperforms all competing methods with respect to average AUC. Across categories, the runner-up methods are either the SVM for MIL problems with MinMax kernel or the burden test.

Model | CMV | OM 1% | OM 0.1% | MM 1% | MM 0.1% | 10% | 1% | 0.5% | 0.1% | 0.05% | Simulated avg.
DeepRC | 0.832 ± 0.022 | 1.00 ± 0.00 | 0.98 ± 0.01 | 1.00 ± 0.00 | 0.94 ± 0.01 | 1.00 ± 0.00 | 1.00 ± 0.00 | 1.00 ± 0.00 | 1.00 ± 0.00 | 1.00 ± 0.00 | 0.846 ± 0.223
SVM (MM) | 0.825 ± 0.022 | 1.00 ± 0.00 | 0.58 ± 0.02 | 1.00 ± 0.00 | 0.53 ± 0.02 | 1.00 ± 0.00 | 1.00 ± 0.00 | 1.00 ± 0.00 | 1.00 ± 0.00 | 0.99 ± 0.01 | 0.827 ± 0.210
SVM (J) | 0.546 ± 0.021 | 0.99 ± 0.00 | 0.53 ± 0.02 | 1.00 ± 0.00 | 0.57 ± 0.02 | 0.98 ± 0.04 | 1.00 ± 0.00 | 1.00 ± 0.00 | 0.90 ± 0.04 | 0.77 ± 0.07 | 0.550 ± 0.080
KNN (MM) | 0.679 ± 0.076 | 0.74 ± 0.24 | 0.49 ± 0.03 | 0.67 ± 0.18 | 0.50 ± 0.02 | 0.70 ± 0.27 | 0.72 ± 0.26 | 0.73 ± 0.26 | 0.54 ± 0.16 | 0.52 ± 0.15 | 0.634 ± 0.129
KNN (J) | 0.534 ± 0.039 | 0.65 ± 0.16 | 0.48 ± 0.03 | 0.70 ± 0.20 | 0.51 ± 0.03 | 0.70 ± 0.29 | 0.61 ± 0.24 | 0.52 ± 0.16 | 0.55 ± 0.19 | 0.54 ± 0.19 | 0.501 ± 0.007
Log. regr. | 0.607 ± 0.058 | 1.00 ± 0.00 | 0.54 ± 0.04 | 0.99 ± 0.00 | 0.51 ± 0.04 | 1.00 ± 0.00 | 1.00 ± 0.00 | 0.93 ± 0.15 | 0.60 ± 0.19 | 0.43 ± 0.16 | 0.826 ± 0.211
Burden test | 0.699 ± 0.041 | 1.00 ± 0.00 | 0.64 ± 0.05 | 1.00 ± 0.00 | 0.89 ± 0.02 | 1.00 ± 0.00 | 1.00 ± 0.00 | 1.00 ± 0.00 | 1.00 ± 0.00 | 0.79 ± 0.28 | 0.549 ± 0.074
Log. MIL (KMER) | 0.582 ± 0.065 | 0.54 ± 0.07 | 0.51 ± 0.03 | 0.99 ± 0.00 | 0.62 ± 0.15 | 1.00 ± 0.00 | 0.72 ± 0.11 | 0.64 ± 0.14 | 0.57 ± 0.15 | 0.53 ± 0.13 | 0.665 ± 0.224
Log. MIL (TCRβ) | 0.515 ± 0.073 | 0.50 ± 0.03 | 0.50 ± 0.02 | 0.99 ± 0.00 | 0.78 ± 0.03 | 0.54 ± 0.09 | 0.57 ± 0.16 | 0.47 ± 0.09 | 0.51 ± 0.07 | 0.50 ± 0.12 | 0.501 ± 0.016
Known motif b. | – | 1.00 ± 0.00 | 0.70 ± 0.03 | 0.99 ± 0.00 | 0.62 ± 0.04 | 1.00 ± 0.00 | 1.00 ± 0.00 | 1.00 ± 0.00 | 1.00 ± 0.00 | 1.00 ± 0.00 | 0.890 ± 0.168
Known motif c. | – | 0.92 ± 0.00 | 0.56 ± 0.03 | 0.65 ± 0.03 | 0.52 ± 0.03 | 1.00 ± 0.00 | 1.00 ± 0.00 | 0.99 ± 0.01 | 0.72 ± 0.09 | 0.63 ± 0.09 | 0.738 ± 0.202

Table A.1: Results for immune repertoire classification across all datasets. Results are given in terms of AUC of the competing methods on all datasets. The reported errors are standard deviations across 5 cross-validation (CV) folds (except for the column "Simulated"). Real-world CMV (column CMV): average performance over 5 CV folds on the cytomegalovirus (CMV) dataset (Emerson et al., 2017). Real-world data with implanted signals (columns OM 1% to MM 0.1%): average performance over 5 CV folds for each of the four datasets. A signal was implanted with a frequency (= witness rate) of 1% or 0.1%. Either a single motif ("OM") or multiple motifs ("MM") were implanted. LSTM-generated data (columns 10% to 0.05%): average performance over 5 CV folds for each of the 5 datasets. In each dataset, a signal was implanted with a frequency of 10%, 1%, 0.5%, 0.1%, and 0.05%, respectively. Simulated: here we report the mean over 18 simulated datasets with implanted signals and varying difficulties. The error reported is the standard deviation of the AUC values across the 18 datasets.
A.5.2.2 Multiple Instance Learning Benchmark Datasets. Classical benchmarking datasets comprise UCSB breast cancer classification (Kandemir et al., 2014) and the Elephant, Fox, and Tiger datasets (Andrews et al., 2003). Elephant, Fox and Tiger are MIL datasets for image annotation which comprise color images from the Corel dataset that have been preprocessed and segmented. An image consists of a set of segments (or blobs), each characterized by color, texture and shape descriptors. The datasets have 100 positive and 100 negative example images. The latter have been randomly drawn from a pool of photos of other animals. Elephant has 1391 instances and 230 features.
Fox has 1320 instances and 230 features. Tiger has 1220 instances and 230 features. Furthermore, we use the UCSB breast cancer classification dataset (Kandemir et al., 2014), which consists of 2,002 instances across 58 input objects. An instance represents a patch of a histopathological image of cancerous or normal tissue. The layer HopfieldPooling is used, which allows for computing a per-input-object representation by extracting an average of the instances that are indicative for one of the two classes. The input to the HopfieldPooling layer is a set of embedded instances Y and a trainable but fixed state (query) pattern Q used for averaging over the class-indicative instances. This averaging enables a compression of variable-sized bags to a fixed-sized representation in order to discriminate the bags. We performed a manual hyperparameter search on a validation set.

parameter | values
learning rates | {1e-3, 1e-5}
learning rate decay (γ) | {0.98, 0.96, 0.94}
embedding layers | {1, 2, 3}
layer widths | {32, 64, 256, 1024, 2048}
number of heads | {8, 12, 16, 32}
head dimensions | {16, 32, 64}
scaling factors | {0.1, 1.0, 10.0}
hidden dimensions | {32, 64, 128}
bag dropout | {0.0, 0.75}

Table A.2: Hyperparameter search-space of a manual hyperparameter selection on the respective validation sets of the Elephant, Fox, Tiger and UCSB breast cancer datasets.

In detail, we used the following architecture to perform the given task on the Elephant, Fox, Tiger and UCSB breast cancer datasets: (I) we apply fully connected linear embedding layers with ReLU activation. (II) The output of this embedding serves as the input to our HopfieldPooling layer, where the pooling operation described above is performed. (III) Thereafter we use ReLU-Linear blocks as the final output layers that perform the classification. Among other hyperparameters, different hidden layer widths (for the fully connected pre- and post-HopfieldPooling layers), learning rates and batch sizes were tried. Additionally, our focus resided on the hyperparameters of the HopfieldPooling layer, among them the number of heads, the head dimension and the scaling factor β. All models were trained for 160 epochs using the AdamW optimizer (Loshchilov & Hutter, 2017) with exponential learning rate decay (see Table A.2), and validated by 10-fold nested cross-validation repeated five times with different splits of the datasets. The reported ROC AUC scores are the average over these repetitions. As overfitting imposed quite a problem, bag dropout was applied as the regularization technique of choice.
A.5.3 EXPERIMENT 3: CLASSIFICATION ON SMALL UCI BENCHMARK DATASETS
A.5.3.1 Motivation. Datasets with a small number of samples, like the UCI benchmark datasets, are particularly difficult for neural networks to generalize on. In contrast to their performance on larger datasets, they are consistently outperformed by methods like gradient boosting, random forests (RF) and support vector machines (SVMs). Finding samples or even learning prototypes that are highly indicative of the class of a sample (query) suggests the use of Hopfield networks. We applied a modern Hopfield network via the layer Hopfield. The input vector is mapped to the raw state (query) pattern R using a self-normalizing net (SNN), and WK is learned, where the dimension of WK (the number of stored patterns) is a hyperparameter. The output Z of Hopfield enters the output layer.
A.5.3.2 Methods compared.
Modern Hopfield networks via the layer Hopfield are compared to 17 groups of methods (Fernández-Delgado et al., 2014; Klambauer et al., 2017a):
1. Support Vector Machines
2. Random Forest
3. Multivariate adaptive regression splines (MARS)
4. Boosting
5. Rule-based Methods
6. Logistic and Multinomial Regression (LMR)
7. Discriminant Analysis (DA)
9. Nearest Neighbor
10. Decision Trees
11. Other Ensembles
12. Neural Networks (standard NN, BatchNorm, WeightNorm, MSRAinit, LayerNorm, ResNet, Self-Normalizing Nets)
13. Bayesian Methods
14. Other Methods
15. Generalized Linear Models (GLM)
16. Partial Least Squares and Principal Component Regression (PLSR)
17. Stacking (Wolpert)
A.5.3.3 Experimental design and implementation details. As specified in the main paper, we consider 75 datasets of the UC Irvine Machine Learning Repository which contain less than 1,000 samples per dataset, following the separation into large and small datasets in Klambauer et al. (2017a). On each dataset, we performed a grid-search to determine the best hyperparameter setting and model per dataset. The hyperparameter search-space of the grid-search is listed in Table A.3. All models were trained for 100 epochs with a mini-batch size of 4 samples using the cross-entropy loss and the PyTorch SGD module for stochastic gradient descent without momentum and without weight decay or dropout. After each epoch, the model accuracy was computed on a separate validation set. Using early stopping, the model with the best validation set accuracy averaged over 16 consecutive epochs was selected as the final model. This final model was then evaluated on a separate test set to determine the accuracy, as reported in Table 2 and in the file uci_detailed_results.csv in the supplemental material. As network architecture, we use {0, 1, 7} fully connected embedding layers with SELU (Klambauer et al., 2017a) activation functions and {32, 128, 1024} hidden units per embedding layer. These embedding layers are followed by the layer Hopfield. The number of hidden units is also used as the number of dimensions of the Hopfield association space with {1, 32} heads. The layer Hopfield is followed by a mapping to the output vector, which has as dimension the number of classes. Finally, the softmax function is applied to obtain the predicted probability for a class.

parameter | values
learning rates | {0.05}
embedding layers | {0, 1, 7}
hidden units | {32, 128, 1024}
heads | {1, 32}
β | {1.0, 0.1, 0.001}
# stored patterns | {1, 8} × n_classes

Table A.3: Hyperparameter search-space for the grid-search on the small UCI benchmark datasets. All models were trained for 100 epochs using stochastic gradient descent with early stopping based on the validation set accuracy and a mini-batch size of 4 samples. The number of stored patterns depends on the number of target classes of the individual task.
A.5.3.4 Results. We compared the performance of 25 methods based on their method rank. For this we computed the rank per method per dataset based on the accuracy on the test set, which was then averaged over all 75 datasets for each method to obtain the method rank. For the baseline methods we used the scores summarized by Klambauer et al. (2017a).
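A minimal single-head sketch of the UCI setup described above (our own illustration, not the code used for the experiments): an SELU embedding produces the state (query) pattern, the stored patterns WK are learned parameters whose number is a hyperparameter, and the retrieved pattern Z is fed to the output layer.

import torch
import torch.nn as nn

class UCIHopfieldNet(nn.Module):
    # SELU embedding -> association with learned stored patterns -> classifier (single head)
    def __init__(self, in_dim, hidden, n_stored, n_classes, beta=1.0):
        super().__init__()
        self.embed = nn.Sequential(nn.Linear(in_dim, hidden), nn.SELU())
        self.stored = nn.Parameter(torch.randn(n_stored, hidden))   # learned stored (key) patterns W_K
        self.out = nn.Linear(hidden, n_classes)
        self.beta = beta

    def forward(self, x):                           # x: (batch, in_dim)
        xi = self.embed(x)                          # state (query) patterns
        p = torch.softmax(self.beta * xi @ self.stored.T, dim=-1)
        z = p @ self.stored                         # retrieved pattern Z
        return self.out(z)                          # class logits (softmax is applied in the loss)

model = UCIHopfieldNet(in_dim=20, hidden=32, n_stored=8, n_classes=2)
logits = model(torch.randn(4, 20))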
parameter | values
beta | {0.0001, 0.001, 0.01, 0.1, 0.2, 0.3}
learning rates | {0.0002}
heads | {1, 32, 128, 512}
dropout | {0.0, 0.1, 0.2}
state-pattern bias | {0.0, 0.1, 0.125, 0.15, 0.2}
association-activation | {None, LeakyReLU}
state- and stored-pattern static | {False, True}
normalize state- and stored-pattern | {False, True}
normalize association projection | {False, True}
learnable stored-pattern | {False, True}

Table A.4: Hyperparameter search-space for the grid-search on HIV, BACE, BBBP and SIDER. All models were trained, if applicable, for 4 epochs using Adam and a batch size of 1 sample.
A.5.4 EXPERIMENT 4: DRUG DESIGN BENCHMARK DATASETS
A.5.4.1 Experimental design and implementation details. We test Hopfield layers on 4 classification datasets from MoleculeNet (Wu et al., 2017), which are challenging for deep learning methods. The first dataset is HIV, which was introduced by the Drug Therapeutics Program (DTP) AIDS Antiviral Screen. The second dataset is BACE, which contains IC50 measurements for binding affinities of inhibitors (molecules) to human β-secretase 1 (BACE-1). The third dataset is BBBP (blood-brain barrier permeability), which stems from modeling and predicting the blood-brain barrier permeability (Martins et al., 2012). The fourth dataset is SIDER (Side Effect Resource) (Kuhn et al., 2016) and contains 1427 approved drugs. These datasets represent four areas of modeling tasks in drug discovery, concretely to develop accurate models for predicting a) new anti-virals (HIV), b) new protein inhibitors (BACE), c) metabolic effects (BBBP), and d) side effects of a chemical compound (SIDER). We implemented a Hopfield layer HopfieldLayer, in which we used the training inputs as stored patterns Y (keys), the training labels as pattern projections Y WV (values), and the input as state pattern R (query). As described in Section A.6, by concatenation of an input zi and a target ti the matrices WK and WV can be designed such that inside the softmax the input zi is used and outside the softmax the target ti. All hyperparameters were selected on separate validation sets, and we selected the model with the highest validation AUC on five different random splits.
A.5.4.2 Results. We compared the Hopfield layer HopfieldLayer to Support Vector Machines (SVMs) (Cortes & Vapnik, 1995; Schölkopf & Smola, 2002), Extreme Gradient Boosting (XGBoost) (Chen & Guestrin, 2016), Random Forest (RF) (Breiman, 2001), Deep Neural Networks (DNNs) (LeCun et al., 2015; Schmidhuber, 2015), and to graph neural networks (GNNs) like Graph Convolutional Networks (GCNs) (Kipf & Welling, 2016), Graph Attention Networks (GATs) (Veličković et al., 2018), Message Passing Neural Networks (MPNNs) (Gilmer et al., 2017), and Attentive FP (Xiong et al., 2020). Our architecture with HopfieldLayer has reached state-of-the-art for predicting side effects on SIDER (0.672 ± 0.019) as well as for predicting β-secretase inhibition on BACE (0.902 ± 0.023). See Table A.5 for all results, where the results of the other methods are taken from Jiang et al. (2020).
Table A.5: Results on drug design benchmark datasets. Predictive performance (ROC-AUC) on the test set as reported by Jiang et al.
(2020) for 50 random splits.

Model | HIV | BACE | BBBP | SIDER
SVM | 0.822 ± 0.020 | 0.893 ± 0.020 | 0.919 ± 0.028 | 0.630 ± 0.021
XGBoost | 0.816 ± 0.020 | 0.889 ± 0.021 | 0.926 ± 0.026 | 0.642 ± 0.020
RF | 0.820 ± 0.016 | 0.890 ± 0.022 | 0.927 ± 0.025 | 0.646 ± 0.022
GCN | 0.834 ± 0.025 | 0.898 ± 0.019 | 0.903 ± 0.027 | 0.634 ± 0.026
GAT | 0.826 ± 0.030 | 0.886 ± 0.023 | 0.898 ± 0.033 | 0.627 ± 0.024
DNN | 0.797 ± 0.018 | 0.890 ± 0.024 | 0.898 ± 0.033 | 0.627 ± 0.024
MPNN | 0.811 ± 0.031 | 0.838 ± 0.027 | 0.879 ± 0.037 | 0.598 ± 0.031
Attentive FP | 0.822 ± 0.026 | 0.876 ± 0.023 | 0.887 ± 0.032 | 0.623 ± 0.026
Hopfield (ours) | 0.815 ± 0.023 | 0.902 ± 0.023 | 0.910 ± 0.026 | 0.672 ± 0.019

A.6 PYTORCH IMPLEMENTATION OF HOPFIELD LAYERS
The implementation is available at: https://github.com/ml-jku/hopfield-layers
A.6.1 INTRODUCTION
In this section, we describe the implementation of Hopfield layers in PyTorch (Paszke et al., 2017; 2019) and, additionally, provide a brief usage manual. Possible applications for a Hopfield layer in a deep network architecture comprise: multiple instance learning (MIL) (Dietterich et al., 1997), processing of and learning with point sets (Qi et al., 2017a;b; Xu et al., 2018), set-based and permutation invariant learning (Guttenberg et al., 2016; Ravanbakhsh et al., 2016; Zaheer et al., 2017; Korshunova et al., 2018; Ilse et al., 2018; Zhai et al., 2020), attention-based learning (Vaswani et al., 2017a), associative learning, natural language processing, sequence analysis and time series prediction, and storing and retrieving reference or experienced data, e.g. to store training data and retrieve it by the model or to store experiences for reinforcement learning.
The Hopfield layer in a deep neural network architecture can implement: a memory (storage) with associative retrieval (Danihelka et al., 2016; Ba et al., 2016a), conditional pooling and averaging operations (Wang et al., 2018; Ilse et al., 2020), combining data by associations (Agrawal et al., 1993), associative credit assignment (e.g. the Rescorla-Wagner model or value estimation) (Sutton & Barto, 2018), and attention mechanisms (Vaswani et al., 2017a; Bahdanau et al., 2014). In particular, a Hopfield layer can substitute attention layers in architectures of transformer and BERT models. The Hopfield layer is designed to be used as a plug-in replacement for existing layers like pooling layers (max-pooling or average pooling), permutation equivariant layers (Guttenberg et al., 2016; Ravanbakhsh et al., 2016), GRU & LSTM layers, and attention layers.
In contrast to classical Hopfield networks, the Hopfield layer is based on modern Hopfield networks with continuous states that have increased storage capacity, as discussed in the main paper. Like classical Hopfield networks, the dynamics of the single heads of a Hopfield layer follow an energy minimization dynamics. The energy minimization empowers our Hopfield layer with several advantages over other architectural designs like memory cells, associative memories, or attention mechanisms. For example, the Hopfield layer has more functionality than a transformer self-attention layer (Vaswani et al., 2017a), as described in Sec. A.6.2. Possible use cases are given in Sec. A.6.3. Source code is provided at the GitHub repository given above.
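As an illustration of the core association performed by such a module, the sketch below implements a single-head Hopfield layer following the right part of Eq. (548), Z = softmax(β R WQ WK^T Y^T) Y WK WV, with an optional number of update steps in the associative space. It is a simplified sketch under our own naming (MiniHopfieldLayer), not the interface of the ml-jku/hopfield-layers package.

import torch
import torch.nn as nn

class MiniHopfieldLayer(nn.Module):
    # Single-head sketch of a Hopfield layer (not the interface of the ml-jku repository).
    def __init__(self, d_r, d_y, d_k, d_v, beta=None, update_steps=1):
        super().__init__()
        self.w_q = nn.Linear(d_r, d_k, bias=False)    # W_Q
        self.w_k = nn.Linear(d_y, d_k, bias=False)    # W_K
        self.w_v = nn.Linear(d_k, d_v, bias=False)    # W_V
        self.beta = beta if beta is not None else 1.0 / d_k ** 0.5
        self.update_steps = update_steps

    def forward(self, R, Y):
        # R: (S, d_r) raw state (query) patterns, Y: (N, d_y) raw stored (key) patterns
        Q, K = self.w_q(R), self.w_k(Y)               # map both sets to the associative space
        for _ in range(self.update_steps):            # Q_new = softmax(beta Q K^T) K
            Q = torch.softmax(self.beta * Q @ K.T, dim=-1) @ K
        return self.w_v(Q)                            # project the new state patterns to the result Z

layer = MiniHopfieldLayer(d_r=64, d_y=64, d_k=32, d_v=64, update_steps=3)
Z = layer(torch.randn(10, 64), torch.randn(100, 64))  # associate 10 queries with 100 stored patterns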
A.6.2 FUNCTIONALITY Non-standard functionalities that are added by a Hopfield layer are Association of two sets, Multiple Updates for precise fixed points, Variable Beta that determines the kind of fixed points, Dimension of the associative space for controlling the storage capacity, Static Patterns for fixed pattern search, and Pattern Normalization to control the fixed point dynamics by norm of the patterns and shift of the patterns. A functional sketch of our Hopfield layer is shown in Fig. A.7. Association of two sets. The Hopfield layer makes it possible to associate two sets of vectors. This general functionality allows for transformer-like self-attention, for decoder-encoder attention, for time series prediction (maybe with positional encoding), for sequence analysis, for multiple instance learning, for learning with point sets, for combining data sources by associations, for constructing a memory, for averaging and pooling operations, and for many more. The first set of vectors consists of S raw state patterns R = (r1, . . . , r S)T with rs Rdr and the second set of vectors consists of N raw stored patterns Y = (y1, . . . , y N)T with yi Rdy. Both the S raw state patterns and N raw stored patterns are mapped to an associative space in Rdk via the matrices WQ Rdr dk and WK Rdy dk, respectively. We define a matrix Q (ΞT ) of state patterns ξn = WQrn in an associative space Rdk and a matrix K (XT ) of stored patterns xi = WKys in the associative space Rdk: Q = ΞT = R WQ , (549) K = XT = Y WK . (550) In the main paper, Eq. (3) defines the novel update rule: ξnew = f(ξ) = X softmax(β XT ξ) , (551) For multiple patterns, Eq. (3) becomes: Ξnew = f(Ξ) = X softmax(β XT Ξ) , (552) where Ξ = (ξ1, . . . , ξN) is the matrix of N state (query) patterns, X is the matrix of stored (key) patterns, and Ξnew is the matrix of new state patterns, which are averages over stored patterns. A new state pattern can also be very similar to a single stored pattern, in which case we call the stored pattern to be retrieved. These matrices allow to rewrite Eq. (552) as: (Qnew)T = KT softmax(β K QT ) . (553) For β = 1/ dk and changing in Eq. (553) softmax RN to a row vector (and evaluating a row vector), we obtain: Qnew = softmax(1/ p dk Q KT ) K , (554) where Qnew is again the matrix of new state patterns. The new state patterns Ξnew are projected via WV to the result patterns Z = Ξnew WV , where WV Rdk dv. With the pattern projection V = KWV , we obtain the update rule Eq. (10) from the main paper: Z = softmax(1/ p dk Q KT ) V . (555) Multiple Updates. The update Eq. (553) can be iteratively applied to the initial state ξ of every Hopfield layer head. After the last update, the new states Ξnew are projected via WV to the result patterns Z = Ξnew WV . Therefore, the Hopfield layer allows multiple update steps in the forward pass without changing the number of parameters. The number of update steps can be given for every Hopfield head individually. Furthermore, it is possible to set a threshold for the number of updates of every Hopfield head based on ξ ξnew 2. In the general case of multiple initial states Ξ, the maximum over the individual norms is taken. Variable β. In the main paper, we have identified β as a crucial parameter for the fixed point dynamics of the Hopfield network, which governs the operating mode of the attention heads. In appendix, e.g. in Lemma A7 or in Eq. (102) and Eq. 
(103), we showed that the characteristics of the fixed points of the new modern Hopfield network are determined by: β, M (maximal pattern norm), mmax (spread of the similar patterns), and mx (center of the similar patterns). Low values of β induce global averaging and higher values of β metastable states. In the transformer attention, the β parameter is set to β = 1/ dk as in Eq. (555). The Hopfield layer, however, allows to freely choose β > 0, since the fixed point dynamics does not only depend on the dimension of the associative space dk. Additionally, β heavily influences the gradient flow to the matrices WQ and WK. Thus, finding the right β for the respective application can be crucial. Variable dimension of the associative space. Theorem A5 says that the storage capacity of the modern Hopfield network grows exponentially with the dimension of the associative space. However higher dimension of the associative space also means less averaging and smaller metastable states. The dimension of the associative space trades off storage capacity against the size of metastable states, e.g. over how many pattern is averaged. In Eq. (550) and in Eq. (549), we assumed N raw state patterns R = (r1, . . . , r N)T and S raw stored patterns Y = (y1, . . . , y S)T that are mapped to a dk-dimensional associative space via the matrices WQ Rdr dk and WK Rdy dk, respectively. In the associative space Rdk, we obtain the state patterns Q = ΞT = RWQ and the stored patterns K = XT = Y WK. The Hopfield view relates the dimension dk to the number of input patterns N that have to be processed. The storage capacity depends exponentially on the dimension dk (the dimension of the associative space) and the size to metastable states is governed by this dimension, too. Consequently, dk should be chosen with respect to the number N of patterns one wants to store and the desired size of metastable states, which is the number of patterns one wants to average over. For example, if the input consists of many low dimensional input patterns, it makes sense to project the patterns into a higher dimensional space to allow a proper fixed point dynamics. Intuitively, this coincides with the construction of a richer feature space for the patterns. Static Patterns. In Eq. (550) and Eq. (549), the N raw state patterns R = (r1, . . . , r N)T and S raw stored patterns Y = (y1, . . . , y S)T are mapped to an associative space via the matrices WQ Rdr dk and WK Rdy dk, which gives the state patterns Q = ΞT = RWQ and the stored patterns K = XT = Y WK. We allow for static state and static stored patterns. Static pattern means that the pattern does not depend on the network input, i.e. it is determined by the bias weights and remains constant across different network inputs. Static state patterns allow to determine whether particular fixed patterns are among the stored patterns and vice versa. The static pattern functionality is typically needed if particular patterns must be identified in the data, e.g. as described for immune repertoire classification in the main paper, where a fixed dk-dimensional state vector ξ is used. Pattern Normalization. In the appendix, e.g. in Lemma A7 or in Eq. (102) and Eq. (103), we showed that the characteristics of the fixed points of the new modern Hopfield network are determined by: β, M (maximal pattern norm), mmax (spread of the similar patterns), and mx (center of the similar patterns). We already discussed the parameter β while the spread of the similar patterns mmax is given by the data. 
Pattern Normalization. The remaining variables M and m_x, which both control the fixed point dynamics, are adjusted by pattern normalization. M is the maximal pattern norm and m_x the center of the similar patterns. Theorem A5 states that a larger M allows more patterns to be stored; however, the size of metastable states decreases with increasing M. The vector m_x indicates how well the (similar) patterns are centered: if its norm is large, the metastable states become smaller. The two parameters M and m_x are thus controlled by pattern normalization and determine the size and convergence properties of metastable states. These two parameters are important for creating large gradients if heads start in the global averaging regime, which has small gradients. They can shift a head towards small metastable states, which have the largest gradients, as shown in Fig. A.5(b). We allow for three different pattern normalizations: pattern normalization of the input patterns, pattern normalization after mapping into the associative space, and no pattern normalization. The default setting is pattern normalization of the input patterns.

A.6.3 USAGE

As outlined in Sec. A.6.1, there is a variety of possible use cases for the Hopfield layer, e.g. to build memory networks or transformer models. The goal of the implementation is therefore to provide an easy-to-use Hopfield module that can be used in a wide range of applications, be it as part of a larger architecture or as a standalone module. Consequently, the focus of the Hopfield layer interface is set on its core parameters: the association of two sets, the scaling parameter β, the maximum number of updates, the dimension of the associative space, the possible usage of static patterns, and the pattern normalization. The integration into the PyTorch framework is built such that, with all of the above functionalities disabled, the HopfieldEncoderLayer and the HopfieldDecoderLayer, both extensions of the Hopfield module, can be used as one-to-one plug-in replacements for the TransformerEncoderLayer and the TransformerDecoderLayer, respectively, of the PyTorch transformer module.

The Hopfield layer can be used to implement or to substitute different layers:

Pooling layers: We consider the Hopfield layer as a pooling layer if only one static state (query) pattern exists. Then it is de facto a pooling over the sequence, which results from the softmax values applied to the stored patterns. Therefore, our Hopfield layer can act as a pooling layer.

Permutation equivariant layers: Our Hopfield layer can be used as a plug-in replacement for permutation equivariant layers. Since the Hopfield layer is an associative memory, it assumes no dependency between the input patterns.

GRU & LSTM layers: Our Hopfield layer can be used as a plug-in replacement for GRU & LSTM layers. Optionally, positional encoding might be considered when substituting GRU & LSTM layers.

Attention layers: Our Hopfield layer can act as an attention layer, where state (query) and stored (key) patterns are different and need to be associated.

Finally, the extensions of the Hopfield layer are able to operate as a self-attention layer (HopfieldEncoderLayer) and as a cross-attention layer (HopfieldDecoderLayer), as described in (Vaswani et al., 2017a). As such, they can be used as building blocks of transformer-based or general architectures.

Figure A.7: A flowchart of the Hopfield layer. First, the raw state (query) patterns R and the raw stored (key) patterns Y are optionally normalized (with layer normalization), projected, and optionally normalized (with layer normalization) again. The default setting is a layer normalization of the input patterns and no layer normalization of the projected patterns. The raw stored patterns Y can in principle also be two different input tensors. Optionally, multiple updates take place in the projected space of Q and K. This update rule is obtained, e.g., from the full update Eq. (423) or the simplified update Eq. (424) in the appendix.
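The stages of Fig. A.7 (optional normalization of the raw patterns, projection, optional normalization of the projected patterns, multiple updates, and the final projection to the result patterns) can be mirrored in a minimal PyTorch sketch; the module and argument names below are illustrative assumptions rather than the interface of the released implementation:

```python
import torch
import torch.nn as nn

class HopfieldSketch(nn.Module):
    """Illustrative forward pass following the stages of Fig. A.7 (not the reference code)."""

    def __init__(self, d_r, d_y, d_k, d_v, beta=None, n_updates=1,
                 norm_inputs=True, norm_projected=False):
        super().__init__()
        # Optional layer normalization of the raw patterns (default: on).
        self.norm_r = nn.LayerNorm(d_r) if norm_inputs else nn.Identity()
        self.norm_y = nn.LayerNorm(d_y) if norm_inputs else nn.Identity()
        # Projections into and out of the associative space.
        self.W_Q = nn.Linear(d_r, d_k, bias=False)
        self.W_K = nn.Linear(d_y, d_k, bias=False)
        self.W_V = nn.Linear(d_k, d_v, bias=False)
        # Optional layer normalization of the projected patterns (default: off).
        self.norm_q = nn.LayerNorm(d_k) if norm_projected else nn.Identity()
        self.norm_k = nn.LayerNorm(d_k) if norm_projected else nn.Identity()
        self.beta = beta if beta is not None else d_k ** -0.5
        self.n_updates = n_updates

    def forward(self, R, Y):
        # Normalize (optionally), project, and normalize (optionally) again.
        Q = self.norm_q(self.W_Q(self.norm_r(R)))      # (S, d_k) state patterns
        K = self.norm_k(self.W_K(self.norm_y(Y)))      # (N, d_k) stored patterns
        # Multiple updates in the projected space of Q and K.
        for _ in range(self.n_updates):
            Q = torch.softmax(self.beta * Q @ K.transpose(-2, -1), dim=-1) @ K
        # Final projection to the result patterns.
        return self.W_V(Q)                             # (S, d_v)
```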
REFERENCES

Y. Abu-Mostafa and J.-M. St Jacques. Information capacity of the Hopfield model. IEEE Transactions on Information Theory, 31, 1985. doi: 10.1109/tit.1985.1057069.
R. Agrawal, T. Imieliński, and A. Swami. Mining association rules between sets of items in large databases. SIGMOD Rec., 22(2):207–216, 1993. doi: 10.1145/170036.170072.
R. Akbar, P. A. Robert, M. Pavlović, J. R. Jeliazkov, I. Snapkov, A. Slabodkin, C. R. Weber, L. Scheffer, E. Miho, I. H. Haff, et al. A compact vocabulary of paratope-epitope interactions enables predictability of antibody-antigen binding. bioRxiv, 2019.
F. Alzahrani and A. Salem. Sharp bounds for the Lambert W function. Integral Transforms and Special Functions, 29(12):971–978, 2018.
S. Andrews, I. Tsochantaridis, and T. Hofmann. Support vector machines for multiple-instance learning. In S. Becker, S. Thrun, and K. Obermayer (eds.), Advances in Neural Information Processing Systems 15, pp. 577–584. MIT Press, 2003.
J. Ba, G. E. Hinton, V. Mnih, J. Z. Leibo, and C. Ionescu. Using fast weights to attend to the recent past. In D. D. Lee, M. Sugiyama, U. V. Luxburg, I. Guyon, and R. Garnett (eds.), Advances in Neural Information Processing Systems 29, pp. 4331–4339. Curran Associates, Inc., 2016a.
J. Ba, G. E. Hinton, V. Mnih, J. Z. Leibo, and C. Ionescu. Using fast weights to attend to the recent past. arXiv, 1610.06258, 2016b.
D. Bahdanau, K. Cho, and Y. Bengio. Neural machine translation by jointly learning to align and translate. arXiv, 1409.0473, 2014. Appeared in ICLR 2015.
A. Banino, A. P. Badia, R. Köster, M. J. Chadwick, V. Zambaldi, D. Hassabis, C. Barry, M. Botvinick, D. Kumaran, and C. Blundell. MEMO: a deep network for flexible combination of episodic memories. arXiv, 2001.10913, 2020.
A. Barra, M. Beccaria, and A. Fachechi. A new mechanical approach to handle generalized Hopfield neural networks. Neural Networks, 106:205–222, 2018. doi: 10.1016/j.neunet.2018.07.010.
H. H. Bauschke and P. L. Combettes. Convex Analysis and Monotone Operator Theory in Hilbert Spaces. Cham: Springer International Publishing, 2nd edition, 2017. ISBN 978-3-319-48310-8. doi: 10.1007/978-3-319-48311-5.
S. Boyd and L. Vandenberghe. Convex Optimization. Cambridge University Press, 7th edition, 2009. ISBN 978-0-521-83378-3.
J. S. Brauchart, A. B. Reznikov, E. B. Saff, I. H. Sloan, Y. G. Wang, and R. S. Womersley. Random point sets on the sphere: hole radii, covering, and separation. Experimental Mathematics, 27(1):62–81, 2018. doi: 10.1080/10586458.2016.1226209.
L. Breiman. Random forests. Machine Learning, 45(1):5–32, 2001. doi: 10.1023/A:1010933404324.
J. Bruck and V. P. Roychowdhury. On the number of spurious memories in the Hopfield model. IEEE Transactions on Information Theory, 36(2):393–397, 1990.
T. Cai, J. Fan, and T. Jiang. Distributions of angles in random packing on spheres. Journal of Machine Learning Research, 14(21):1837–1864, 2013.
M.-A. Carbonneau, V. Cheplygina, E. Granger, and G. Gagnon. Multiple instance learning: a survey of problem characteristics and applications. Pattern Recognition, 77:329–353, 2018.
M.-A. Carbonneau, E. Granger, A. J. Raymond, and G. Gagnon. Robust multiple-instance learning ensembles using random subspace instance selection. Pattern Recognition, 58:83–99, 2016. ISSN 0031-3203. doi: 10.1016/j.patcog.2016.03.035. URL http://www.sciencedirect.com/science/article/pii/S0031320316300346.
M. Carreira-Perpiñán and C. K. I. Williams. An isotropic Gaussian mixture can have more modes than components. Technical Report EDI-INF-RR-0185, The University of Edinburgh, School of Informatics, 2003.
A. Carta, A. Sperduti, and D. Bacciu. Encoding-based memory modules for recurrent neural networks. arXiv, 2001.11771, 2020.
T. Chen and C. Guestrin. XGBoost: A scalable tree boosting system. In Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pp. 785–794. ACM, 2016. doi: 10.1145/2939672.2939785.
Y. Chen, J. Bi, and J. Z. Wang. MILES: Multiple-instance learning via embedded instance selection. IEEE Transactions on Pattern Analysis and Machine Intelligence, 28(12):1931–1947, 2006.
V. Cheplygina, D. M. Tax, and M. Loog. Dissimilarity-based ensembles for multiple instance learning. IEEE Transactions on Neural Networks and Learning Systems, 27(6):1379, 2016.
K. Cho, B. van Merriënboer, C. Gulcehre, D. Bahdanau, F. Bougares, H. Schwenk, and Y. Bengio. Learning phrase representations using RNN encoder-decoder for statistical machine translation. In Proceedings of the Conference on Empirical Methods in Natural Language Processing (EMNLP), pp. 1724–1734. Association for Computational Linguistics, 2014. doi: 10.3115/v1/D14-1179.
K. Clark, M.-T. Luong, Q. V. Le, and C. D. Manning. ELECTRA: Pre-training text encoders as discriminators rather than generators. arXiv, 2003.10555, 2020. Appeared in ICLR 2020.
C. Cortes and V. Vapnik. Support-vector networks. Machine Learning, 20(3):273–297, 1995.
A. Crisanti, D. J. Amit, and H. Gutfreund. Saturation level of the Hopfield model for neural network. Europhysics Letters (EPL), 2(4):337–341, 1986. doi: 10.1209/0295-5075/2/4/012.
I. Danihelka, G. Wayne, B. Uria, N. Kalchbrenner, and A. Graves. Associative long short-term memory. In M. F. Balcan and K. Q. Weinberger (eds.), Proceedings of The 33rd International Conference on Machine Learning, volume 48 of Proceedings of Machine Learning Research, pp. 1986–1994, New York, USA, 2016.
M. Daniluk, T. Rocktäschel, J. Welbl, and S. Riedel. Frustratingly short attention spans in neural language modeling. arXiv, 1702.04521, 2017. Appeared in ICLR 2017.
M. Dehghani, S. Gouws, O. Vinyals, J. Uszkoreit, and L. Kaiser. Universal transformers. arXiv, 1807.03819, 2018. Published at ICLR 2019.
M. Demircigil, J. Heusel, M. Löwe, S. Upgang, and F. Vermet. On a model of associative memory with huge storage capacity. Journal of Statistical Physics, 168(2):288–299, 2017.
J. Devlin, M.-W. Chang, K. Lee, and K. Toutanova. BERT: pre-training of deep bidirectional transformers for language understanding. arXiv, 1810.04805, 2018.
J. Devlin, M.-W. Chang, K. Lee, and K. Toutanova. BERT: pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pp. 4171–4186. Association for Computational Linguistics, 2019.
T. G. Dietterich, R. H. Lathrop, and T. Lozano-Pérez. Solving the multiple instance problem with axis-parallel rectangles. Artificial Intelligence, 89(1-2):31–71, 1997.
R. O. Emerson, W. S. DeWitt, M. Vignali, J. Gravley, J. K. Hu, E. J. Osborne, C. Desmarais, M. Klinger, C. S. Carlson, J. A. Hansen, et al. Immunosequencing identifies signatures of cytomegalovirus exposure history and HLA-mediated effects on the T cell repertoire. Nature Genetics, 49(5):659, 2017.
M. Fernández-Delgado, E. Cernadas, S. Barro, and D. Amorim. Do we need hundreds of classifiers to solve real world classification problems? The Journal of Machine Learning Research, 15(1):3133–3181, 2014.
V. Folli, M. Leonetti, and G. Ruocco. On the maximum storage capacity of the Hopfield model. Frontiers in Computational Neuroscience, 10(144), 2017. doi: 10.3389/fncom.2016.00144.
B. Gao and L. Pavel. On the properties of the softmax function with application in game theory and reinforcement learning. arXiv, 1704.00805, 2017.
D. J. H. Garling. Analysis on Polish Spaces and an Introduction to Optimal Transportation. London Mathematical Society Student Texts. Cambridge University Press, 2017. ISBN 1108421571. doi: 10.1017/9781108377362.
J. Gilmer, S. S. Schoenholz, P. F. Riley, O. Vinyals, and G. E. Dahl. Neural message passing for quantum chemistry. In Proceedings of the 34th International Conference on Machine Learning (ICML), volume 70, pp. 1263–1272. JMLR.org, 2017.
A. Graves, G. Wayne, and I. Danihelka. Neural Turing machines. arXiv, 1410.5401, 2014.
N. Guttenberg, N. Virgo, O. Witkowski, H. Aoki, and R. Kanai. Permutation-equivariant neural networks applied to dynamics prediction. arXiv, 1612.04530, 2016.
J. Hertz, A. Krogh, and R. G. Palmer. Introduction to the Theory of Neural Computation. Addison-Wesley Longman Publishing Co., Inc., Redwood City, CA, 1991. ISBN 0201503956.
S. Hochreiter. Untersuchungen zu dynamischen neuronalen Netzen. Diploma thesis, Institut für Informatik, Lehrstuhl Prof. Brauer, Technische Universität München, 1991. Advisor: J. Schmidhuber.
S. Hochreiter and J. Schmidhuber. Long short-term memory. Neural Computation, 9(8):1735–1780, 1997.
A. Hoorfar and M. Hassani. Inequalities on the Lambert W function and hyperpower function. Journal of Inequalities in Pure and Applied Mathematics, 9(2):1–5, 2008.
J. J. Hopfield. Neural networks and physical systems with emergent collective computational abilities. Proceedings of the National Academy of Sciences, 79(8):2554–2558, 1982.
J. J. Hopfield. Neurons with graded response have collective computational properties like those of two-state neurons. Proceedings of the National Academy of Sciences, 81(10):3088–3092, 1984. doi: 10.1073/pnas.81.10.3088.
M. Ilse, J. M. Tomczak, and M. Welling. Attention-based deep multiple instance learning. International Conference on Machine Learning (ICML), pp. 3376–3391, 2018.
M. Ilse, J. M. Tomczak, and M. Welling. Deep multiple instance learning for digital histopathology. In Handbook of Medical Image Computing and Computer Assisted Intervention, pp. 521–546. Elsevier, 2020.
D. Jiang, Z. Wu, C.-Y. Hsieh, G. Chen, B. Liao, Z. Wang, C. Shen, D. Cao, J. Wu, and T. Hou. Could graph neural networks learn better molecular representation for drug discovery? A comparison study of descriptor-based and graph-based models. Journal of Cheminformatics, 2020. doi: 10.21203/rs.3.rs-81439/v1.
M. Kandemir, C. Zhang, and F. A. Hamprecht. Empowering multiple instance histopathology cancer diagnosis by cell graphs. In International Conference on Medical Image Computing and Computer-Assisted Intervention, pp. 228–235. Springer, 2014.
M. M. R. Khan, R. B. Arif, M. A. B. Siddique, and M. R. Oishe. Study and observation of the variation of accuracies of KNN, SVM, LMNN, ENN algorithms on eleven different datasets from UCI machine learning repository. In 4th International Conference on Electrical Engineering and Information & Communication Technology (iCEEiCT), pp. 124–129. IEEE, 2018.
T. N. Kipf and M. Welling. Semi-supervised classification with graph convolutional networks. arXiv, 1609.02907, 2016. In International Conference on Learning Representations (ICLR) 2017.
G. Klambauer, T. Unterthiner, A. Mayr, and S. Hochreiter. Self-normalizing neural networks. In Advances in Neural Information Processing Systems, pp. 971–980, 2017a.
G. Klambauer, T. Unterthiner, A. Mayr, and S. Hochreiter. Self-normalizing neural networks. arXiv, 1706.02515, 2017b.
P. Koiran. Dynamics of discrete time, continuous state Hopfield networks. Neural Computation, 6(3):459–468, 1994. doi: 10.1162/neco.1994.6.3.459.
I. Korshunova, J. Degrave, F. Huszar, Y. Gal, A. Gretton, and J. Dambre. BRUNO: A deep recurrent model for exchangeable data. In S. Bengio, H. Wallach, H. Larochelle, K. Grauman, N. Cesa-Bianchi, and R. Garnett (eds.), Advances in Neural Information Processing Systems 31, pp. 7190–7198. Curran Associates, Inc., 2018.
D. Krotov and J. J. Hopfield. Dense associative memory for pattern recognition. In D. D. Lee, M. Sugiyama, U. V. Luxburg, I. Guyon, and R. Garnett (eds.), Advances in Neural Information Processing Systems, pp. 1172–1180. Curran Associates, Inc., 2016.
D. Krotov and J. J. Hopfield. Dense associative memory is robust to adversarial inputs. Neural Computation, 30(12):3151–3167, 2018.
D. Krotov and J. J. Hopfield. Large associative memory problem in neurobiology and machine learning. arXiv, 2008.06996, 2020.
E. S. Küçükaşcı and M. G. Baydoğan. Bag encoding strategies in multiple instance learning problems. Information Sciences, 467:559–578, 2018.
M. Kuhn, I. Letunic, L. J. Jensen, and P. Bork. The SIDER database of drugs and side effects. Nucleic Acids Research, 44(D1):D1075–D1079, 2016. doi: 10.1093/nar/gkv1075.
Y. LeCun, Y. Bengio, and G. Hinton. Deep learning. Nature, 521:436–444, 2015.
T. Lipp and S. Boyd. Variations and extension of the convex-concave procedure. Optimization and Engineering, 17(2):263–287, 2016. doi: 10.1007/s11081-015-9294-x.
I. Loshchilov and F. Hutter. Decoupled weight decay regularization. arXiv, 1711.05101, 2017.
O. Maron and T. Lozano-Pérez. A framework for multiple-instance learning. In M. I. Jordan, M. J. Kearns, and S. A. Solla (eds.), Advances in Neural Information Processing Systems, pp. 570–576. MIT Press, 1998.
I. F. Martins, A. L. Teixeira, L. Pinheiro, and A. O. Falcao. A Bayesian approach to in silico blood-brain barrier penetration modeling. Journal of Chemical Information and Modeling, 52(6):1686–1697, 2012. doi: 10.1021/ci300124c.
C. Mazza. On the storage capacity of nonlinear neural networks. Neural Networks, 10(4):593–597, 1997. doi: 10.1016/S0893-6080(97)00017-8.
R. J. McEliece, E. C. Posner, E. R. Rodemich, and S. S. Venkatesh. The capacity of the Hopfield associative memory. IEEE Transactions on Information Theory, 33(4):461–482, 1987. doi: 10.1109/TIT.1987.1057328.
R. R. Meyer. Sufficient conditions for the convergence of monotonic mathematical programming algorithms. Journal of Computer and System Sciences, 12(1):108–121, 1976. doi: 10.1016/S0022-0000(76)80021-9.
F. W. J. Olver, D. W. Lozier, R. F. Boisvert, and C. W. Clark. NIST Handbook of Mathematical Functions. Cambridge University Press, 1 pap/cdr edition, 2010. ISBN 9780521192255.
A. Paszke, S. Gross, S. Chintala, G. Chanan, E. Yang, Z. DeVito, Z. Lin, A. Desmaison, L. Antiga, and A. Lerer. Automatic differentiation in PyTorch. In Workshop in Advances in Neural Information Processing Systems (NeurIPS), 2017.
A. Paszke, S. Gross, F. Massa, A. Lerer, J. Bradbury, G. Chanan, T. Killeen, Z. Lin, N. Gimelshein, L. Antiga, et al. PyTorch: An imperative style, high-performance deep learning library. In Advances in Neural Information Processing Systems, pp. 8026–8037, 2019.
C. R. Qi, H. Su, M. Kaichun, and L. J. Guibas. PointNet: Deep learning on point sets for 3D classification and segmentation. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 77–85, 2017a. doi: 10.1109/CVPR.2017.16.
C. R. Qi, L. Yi, H. Su, and L. J. Guibas. PointNet++: Deep hierarchical feature learning on point sets in a metric space. In 31st International Conference on Neural Information Processing Systems, pp. 5105–5114. Curran Associates Inc., 2017b.
A. Rangarajan, S. Gold, and E. Mjolsness. A novel optimizing network architecture with applications. Neural Computation, 8(5):1041–1060, 1996. doi: 10.1162/neco.1996.8.5.1041.
A. Rangarajan, A. Yuille, and E. Mjolsness. Convergence properties of the softassign quadratic assignment algorithm. Neural Computation, 11(6):1455–1474, 1999. doi: 10.1162/089976699300016313.
S. Ravanbakhsh, J. Schneider, and B. Poczos. Deep learning with sets and point clouds. arXiv, 1611.04500, 2016.
I. Schlag and J. Schmidhuber. Learning to reason with third order tensor products. In S. Bengio, H. Wallach, H. Larochelle, K. Grauman, N. Cesa-Bianchi, and R. Garnett (eds.), Advances in Neural Information Processing Systems 31, pp. 9981–9993. Curran Associates, Inc., 2018.
I. Schlag, P. Smolensky, R. Fernandez, N. Jojic, J. Schmidhuber, and J. Gao. Enhancing the transformer with explicit relational encoding for math problem solving. arXiv, 1910.06611, 2019.
I. Schlag, K. Irie, and J. Schmidhuber. Linear transformers are secretly fast weight memory systems. arXiv, 2102.11174, 2021.
J. Schmidhuber. Learning to control fast-weight memories: An alternative to dynamic recurrent networks. Neural Computation, 4(1):131–139. MIT Press, 1992.
J. Schmidhuber. Deep learning in neural networks: An overview. Neural Networks, 61:85–117, 2015. doi: 10.1016/j.neunet.2014.09.003.
B. Schölkopf and A. J. Smola. Learning with Kernels: Support Vector Machines, Regularization, Optimization, and Beyond. MIT Press, Cambridge, MA, 2002.
B. K. Sriperumbudur and G. R. Lanckriet. On the convergence of the concave-convex procedure. In Y. Bengio, D. Schuurmans, J. D. Lafferty, C. K. I. Williams, and A. Culotta (eds.), Advances in Neural Information Processing Systems 22, pp. 1759–1767. Curran Associates, Inc., 2009.
G. Subramanian, B. Ramsundar, V. Pande, and R. A. Denny. Computational modeling of β-Secretase 1 (BACE-1) inhibitors using ligand based approaches. Journal of Chemical Information and Modeling, 56(10):1936–1949, 2016. doi: 10.1021/acs.jcim.6b00290.
S. Sukhbaatar, A. Szlam, J. Weston, and R. Fergus. End-to-end memory networks. In C. Cortes, N. D. Lawrence, D. D. Lee, M. Sugiyama, and R. Garnett (eds.), Advances in Neural Information Processing Systems 28, pp. 2440–2448. Curran Associates, Inc., 2015a.
S. Sukhbaatar, A. Szlam, J. Weston, and R. Fergus. End-to-end memory networks. arXiv, 1503.08895, 2015b.
R. S. Sutton and A. G. Barto. Reinforcement Learning: An Introduction. MIT Press, Cambridge, MA, 2nd edition, 2018.
F. Tanaka and S. F. Edwards. Analytic theory of the ground state properties of a spin glass. I. Ising spin glass. Journal of Physics F: Metal Physics, 10(12):2769–2778, 1980. doi: 10.1088/0305-4608/10/12/017.
Y. Tay, D. Bahri, D. Metzler, D.-C. Juan, Z. Zhao, and C. Zheng. Synthesizer: Rethinking self-attention in transformer models. arXiv, 2005.00743, 2020.
M. Toneva and L. Wehbe. Interpreting and improving natural-language processing (in machines) with natural language-processing (in the brain). In H. Wallach, H. Larochelle, A. Beygelzimer, F. d'Alché-Buc, E. Fox, and R. Garnett (eds.), Advances in Neural Information Processing Systems 32, pp. 14954–14964. Curran Associates, Inc., 2019a.
M. Toneva and L. Wehbe. Interpreting and improving natural-language processing (in machines) with natural language-processing (in the brain). arXiv, 1905.11833, 2019b.
J. J. Torres, L. Pantic, and H. J. Kappen. Storage capacity of attractor neural networks with depressing synapses. Phys. Rev. E, 66:061910, 2002. doi: 10.1103/PhysRevE.66.061910.
A. Vaswani, N. Shazeer, N. Parmar, J. Uszkoreit, L. Jones, A. N. Gomez, L. Kaiser, and I. Polosukhin. Attention is all you need. In I. Guyon, U. V. Luxburg, S. Bengio, H. Wallach, R. Fergus, S. Vishwanathan, and R. Garnett (eds.), Advances in Neural Information Processing Systems 30, pp. 5998–6008. Curran Associates, Inc., 2017a.
A. Vaswani, N. Shazeer, N. Parmar, J. Uszkoreit, L. Jones, A. N. Gomez, L. Kaiser, and I. Polosukhin. Attention is all you need. arXiv, 1706.03762, 2017b.
P. Veličković, G. Cucurull, A. Casanova, A. Romero, P. Liò, and Y. Bengio. Graph attention networks. arXiv, 1710.10903, 2018. In International Conference on Learning Representations (ICLR) 2018.
M. Wainberg, B. Alipanahi, and B. J. Frey. Are random forests truly the best classifiers? The Journal of Machine Learning Research, 17(1):3837–3841, 2016.
G. Wainrib and J. Touboul. Topological and dynamical complexity of random neural networks. Phys. Rev. Lett., 110:118101, 2013. doi: 10.1103/PhysRevLett.110.118101.
J. Wang. Solving the multiple-instance problem: A lazy learning approach. In Proceedings of the 17th International Conference on Machine Learning (ICML), 2000.
X. Wang, Y. Yan, P. Tang, X. Bai, and W. Liu. Revisiting multiple instance neural networks. Pattern Recognition, 74:15–24, 2018.
C. R. Weber, R. Akbar, A. Yermanos, M. Pavlović, I. Snapkov, G. K. Sandve, S. T. Reddy, and V. Greiff. immuneSIM: tunable multi-feature simulation of B- and T-cell receptor repertoires for immunoinformatics benchmarking. Bioinformatics, 36(11):3594–3596, 2020. doi: 10.1093/bioinformatics/btaa158.
J. Weston, S. Chopra, and A. Bordes. Memory networks. arXiv, 1410.3916, 2014.
M. Widrich, B. Schäfl, M. Pavlović, H. Ramsauer, L. Gruber, M. Holzleitner, J. Brandstetter, G. K. Sandve, V. Greiff, S. Hochreiter, and G. Klambauer. Modern Hopfield networks and attention for immune repertoire classification. arXiv, 2007.13505, 2020a.
M. Widrich, B. Schäfl, M. Pavlović, H. Ramsauer, L. Gruber, M. Holzleitner, J. Brandstetter, G. K. Sandve, V. Greiff, S. Hochreiter, and G. Klambauer. Modern Hopfield networks and attention for immune repertoire classification. In Advances in Neural Information Processing Systems. Curran Associates, Inc., 2020b.
T. Wolf, L. Debut, V. Sanh, J. Chaumond, C. Delangue, A. Moi, P. Cistac, T. Rault, R. Louf, M. Funtowicz, and J. Brew. HuggingFace's Transformers: State-of-the-art natural language processing. arXiv, 1910.03771, 2019.
J. C. F. Wu. On the convergence properties of the EM algorithm. The Annals of Statistics, 11(1):95–103, 1983. doi: 10.1214/aos/1176346060.
X. Wu, X. Liu, W. Li, and Q. Wu. Improved expressivity through dendritic neural networks. In S. Bengio, H. Wallach, H. Larochelle, K. Grauman, N. Cesa-Bianchi, and R. Garnett (eds.), Advances in Neural Information Processing Systems 31, pp. 8057–8068. Curran Associates, Inc., 2018.
Z. Wu, B. Ramsundar, E. N. Feinberg, J. Gomes, C. Geniesse, A. S. Pappu, K. Leswing, and V. Pande. MoleculeNet: A benchmark for molecular machine learning. arXiv, 1703.00564, 2017.
Z. Xiong, D. Wang, X. Liu, F. Zhong, X. Wan, X. Li, Z. Li, X. Luo, K. Chen, H. Jiang, and M. Zheng. Pushing the boundaries of molecular representation for drug discovery with the graph attention mechanism. Journal of Medicinal Chemistry, 63(16):8749–8760, 2020. doi: 10.1021/acs.jmedchem.9b00959.
Y. Xu, T. Fan, M. Xu, L. Zeng, and Y. Qiao. SpiderCNN: Deep learning on point sets with parameterized convolutional filters. In V. Ferrari, M. Hebert, C. Sminchisescu, and Y. Weiss (eds.), European Conference on Computer Vision (ECCV), pp. 90–105. Springer International Publishing, 2018.
A. L. Yuille and A. Rangarajan. The concave-convex procedure (CCCP). In T. G. Dietterich, S. Becker, and Z. Ghahramani (eds.), Advances in Neural Information Processing Systems 14, pp. 1033–1040. MIT Press, 2002.
A. L. Yuille and A. Rangarajan. The concave-convex procedure. Neural Computation, 15(4):915–936, 2003. doi: 10.1162/08997660360581958.
M. Zaheer, S. Kottur, S. Ravanbakhsh, B. Poczos, R. R. Salakhutdinov, and A. J. Smola. Deep sets. In I. Guyon, U. V. Luxburg, S. Bengio, H. Wallach, R. Fergus, S. Vishwanathan, and R. Garnett (eds.), Advances in Neural Information Processing Systems 30, pp. 3391–3401. Curran Associates, Inc., 2017.
W. I. Zangwill. Nonlinear Programming: A Unified Approach. Prentice-Hall International Series in Management. Englewood Cliffs, N.J., 1969. ISBN 9780136235798.
S. Zhai, W. Talbott, M. A. Bautista, C. Guestrin, and J. M. Susskind. Set distribution networks: a generative model for sets of images. arXiv, 2006.10705, 2020.
W. Zhang and B. Zhou. Learning to update auto-associative memory in recurrent neural networks for improving sequence memorization. arXiv, 1709.06493, 2017.
Y. Zhu, R. Kiros, R. S. Zemel, R. Salakhutdinov, R. Urtasun, A. Torralba, and S. Fidler. Aligning books and movies: Towards story-like visual explanations by watching movies and reading books. In Proceedings of the IEEE International Conference on Computer Vision, pp. 19–27, 2015. arXiv 1506.06724.