# Decomposable Transformer Point Processes

Aristeidis Panos
University of Cambridge
ap2313@cam.ac.uk

Abstract

The standard paradigm for modeling marked point processes is to parameterize the intensity function using an attention-based (Transformer-style) architecture. Despite the flexibility of these methods, their inference is based on the computationally intensive thinning algorithm. In this work, we propose a framework that maintains the advantages of the attention-based architecture while circumventing the limitations of the thinning algorithm. The framework models the conditional distribution of inter-event times with a mixture of log-normals satisfying a Markov property and the conditional probability mass function of the marks with a Transformer-based architecture. The proposed method attains state-of-the-art performance in predicting the next event of a sequence given its history. The experiments also reveal the efficacy of methods that do not rely on the thinning algorithm during inference over those that do. Finally, we test our method on the challenging long-horizon prediction task and find that it outperforms a baseline developed specifically for this task; importantly, inference requires just a fraction of the time needed by the thinning-based baseline.

1 Introduction

Continuous-time event sequences are commonly found in real-world scenarios and applications such as financial transactions [3], communication in a social network [34], and purchases in e-commerce systems [16]. This abundance of data for discrete events occurring at irregular intervals has led to an increasing interest of the community over the last decade in marked temporal point processes, which are the standard way of modeling this kind of data. Historically, Hawkes processes [14] and Poisson processes [7] have been extensively applied to various domains such as finance [13], seismology [15], and astronomy [1]. Despite their elegant mathematical framework and interpretability, the strong assumptions of these models reduce their flexibility and fail to capture the complex dynamics of real-world generating processes.

Advances in deep learning have allowed the incorporation of neural models such as LSTMs [17] and recurrent neural networks (RNNs) into temporal point processes [5, 12, 24, 26, 29, 35, 38]. As a result, these models are able to learn more complex dependencies and attain superior performance to Hawkes/Poisson processes. Recently, the introduction of the (self-)attention mechanism [36] to temporal point process modeling [41, 42, 44] has led to new state-of-the-art methods with extra flexibility.

Despite the advantages of these neural-based models, their dependence on modeling the conditional intensity function creates limitations for both training and inference [35]. Training usually requires a Monte Carlo approximation of an integral that appears in the log-likelihood. [29] proposed a method to circumvent this approximation; however, the main shortcomings remained, as discussed in [35]. More importantly, inference is based on the thinning algorithm [21, 22], which is computationally intensive and sensitive to the choice of intensity function. To deal with these downsides, [35] parameterized the conditional distribution of the inter-event times by combining a log-normal mixture density network with an RNN.
The model's performance is comparable to that of the other intensity-based methods that use an RNN/LSTM architecture, but still inferior to the Transformer-based methods. A more recent work [30] leveraged the decomposition of the log-likelihood of a marked point process [6] to parameterize the distribution of marks given the time and history and the distribution of times given the history. This decomposition, as in [35], eliminates the need for the thinning algorithm and additional approximations, while offering a rigorous yet flexible framework for defining different distributions for occurrence times and marks. [30] used two different parametric models for each distribution, and their results for the time prediction task, despite the simplicity of their framework, were competitive with or superior to neural-based baselines.

Inspired by the state-of-the-art performance of Transformer-based architectures and the computational efficiency/flexibility of intensity-free models, we develop a model for marked point processes that combines the advantages of these two methodologies. Our contributions are summarized below:

- We propose a novel model that is defined by two distributions: a distribution for the marks based on a Transformer architecture and a simple log-normal mixture model for the inter-event times which satisfies a simple Markov property.
- Through an extensive experimental study, we show the efficiency of our model in the next-event prediction task and the suitability of intensity-free models for correctly predicting the next occurrence time over methods that rely on the thinning algorithm. To the best of our knowledge, we are the first to experimentally show the limitations of the thinning algorithm on the predictive ability of neural point processes.
- We test our model on the more challenging long-horizon prediction task and provide strong evidence that we can achieve better results in a fraction of the time compared to models that have been specifically designed to solve this task and, not coincidentally, depend on the thinning algorithm.

2 Background

A marked temporal point process (MTPP), observed in the interval $(0, T)$, is a stochastic process whose realizations are sequences of discrete events occurring at times $0 < t_1 < \dots < t_N < T$ with corresponding event types (or marks) $k_1, \dots, k_N$, where $k_i \in \{1, \dots, K\}$. The entire sequence is denoted by $\mathcal{H}_T = \{(t_1, k_1), \dots, (t_N, k_N)\}$. The process is fully specified by the conditional intensity function (CIF) of an event of type $k$ at time $t$ conditioned on the event history $\mathcal{H}_{t_i} = \{(t_j, k_j) \mid t_j < t_i\}$, namely $\lambda_k^*(t) := \lambda_k(t \mid \mathcal{H}_{t_i}) \ge 0$, $t > t_i$; we use the asterisk to denote the dependence on $\mathcal{H}_{t_i}$. The CIF is used to compute the infinitesimal probability of an event of type $k$ occurring at time $t$, i.e.,
$$\lambda_k^*(t)\,dt = P\big(t_{i+1} \in [t, t+dt],\, k_{i+1} = k \mid t_{i+1} \notin (t_i, t),\, \mathcal{H}_{t_i}\big).$$
The log-likelihood of such an autoregressive multivariate point process is given by [14, 22]
$$\sum_{i=1}^{N} \log \lambda_{k_i}^*(t_i) \;-\; \sum_{k=1}^{K} \int_0^T \lambda_k^*(t)\, dt. \tag{1}$$
Modeling the intensity function by a flexible model and then learning its parameters by maximizing Eq. (1) has been the standard approach of many works [5, 12, 24, 26, 29, 35, 38, 41, 42, 44].
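To make the computational issue mentioned in the introduction concrete, the integral term in Eq. (1) typically has no closed form for neural CIFs and is estimated with Monte Carlo samples. The following is a minimal sketch of such an estimator, assuming a hypothetical `intensity(k, t, history)` callable; it is a generic illustration, not the implementation of any of the cited methods.

```python
import numpy as np

def mc_log_likelihood(times, marks, intensity, T, num_mc=100, rng=None):
    """Monte Carlo estimate of the MTPP log-likelihood in Eq. (1).

    times, marks : arrays of event times t_1 < ... < t_N and types k_i
    intensity    : callable (k, t, history) -> lambda*_k(t)  (assumed given)
    T            : length of the observation window (0, T)
    num_mc       : number of uniform samples used for the integral term
    """
    rng = np.random.default_rng() if rng is None else rng
    history = list(zip(times, marks))

    # First term: sum_i log lambda*_{k_i}(t_i), evaluated exactly.
    log_term = sum(
        np.log(intensity(k, t, [e for e in history if e[0] < t]))
        for t, k in history
    )

    # Second term: sum_k \int_0^T lambda*_k(t) dt, estimated with
    # uniform samples u ~ U(0, T): integral ~= T * mean_u lambda*_k(u).
    u = rng.uniform(0.0, T, size=num_mc)
    integral = 0.0
    for k in set(marks):
        vals = [intensity(k, t, [e for e in history if e[0] < t]) for t in u]
        integral += T * float(np.mean(vals))

    return log_term - integral
```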
An equivalent way of deriving the log-likelihood in (1) without the use of $\lambda_k^*(t)$ is by following the decomposition of a multivariate distribution function in [6] (expression 2), expressed as
$$\sum_{i=1}^{N} \big\{ \log p^*(k_i \mid t_i) + \log f^*(t_i) \big\} + \log\big(1 - F(T \mid \mathcal{H}_{t_N})\big), \tag{2}$$
where $p^*(k \mid t_i) := p(k \mid t_i, \mathcal{H}_{t_i})$ and $f^*(t) := f(t \mid \mathcal{H}_{t_i})$ are the conditional probability mass function (CPMF) of the event types and the conditional probability density function (CPDF) of the occurrence times, respectively (the notation of $p^*$ is slightly different compared to $\lambda_k^*$, but the definition remains consistent). $F(t \mid \mathcal{H}_{t_i}) = \int_{t_i}^{t} f^*(s)\, ds$, $t > t_i$, is the cumulative distribution function of $f^*(t)$. The last term in (2) is the logarithm of the survival function that expresses the probability that no event occurs in the interval $(t_N, T)$. The relation between $\lambda_k^*(t)$ and the density/PMF is given by
$$\lambda_k^*(t) = \frac{f^*(t)\, p^*(k \mid t)}{1 - F(t \mid \mathcal{H}_t)};$$
see Section 2.4 in [33]. We can represent the temporal part of the process in terms of the inter-event times $\tau_i := t_i - t_{i-1} \in \mathbb{R}_+$, $t_0 = 0$; the two representations are isomorphic, and the relation between the conditional PDF of the inter-event time $\tau_i$ until the next event and the conditional intensity function is given by
$$g^*(\tau_i) := g(\tau_i \mid \mathcal{H}_{t_i}) = \sum_{k=1}^{K} \lambda_k^*(t_{i-1} + \tau_i)\, \exp\left( - \sum_{k=1}^{K} \int_0^{\tau_i} \lambda_k^*(t_{i-1} + x)\, dx \right) = f^*(t_i).$$

3 Decomposable Transformer Point Processes

To develop our proposed framework, Decomposable Transformer Point Processes (DTPP), we adopt the decomposition in (2) and model $g^*(\tau)$ and $p^*(k \mid t)$ separately. Despite the advantages of modeling the intensity function and the arguments in favor of this [9], we believe that modeling the probability density/mass function offers not only the same benefits as modeling the intensity function, as discussed in [35], but, more importantly, it allows us not to depend on the thinning algorithm during inference. The technical details of each model are described in the next two sections.

3.1 Distribution of Marks

The conditional distribution of the event types is parameterized by a continuous-time Transformer architecture like the one described in [41]. More specifically, for any pair of events $(t, k)$, we evaluate an embedding $\mathbf{h}_k(t) \in \mathbb{R}^D$ based on the history $\mathcal{H}_t$. Assuming an $L$-layer architecture, $\mathbf{h}_k(t)$ is given by the concatenation of the embeddings of the individual layers, i.e., $\mathbf{h}_k(t) = [\mathbf{h}_k^{(0)}(t); \mathbf{h}_k^{(1)}(t); \dots; \mathbf{h}_k^{(L)}(t)]$. The embedding of the base layer $\mathbf{h}_k^{(0)}(t)$ is independent of time and is learned as a simple weight vector for each mark, i.e., $\mathbf{h}_k^{(0)}(t) := \mathbf{h}_k^{(0)} \in \mathbb{R}^{D^{(0)}}$. The embedding of layer $\ell \in \{1, \dots, L\}$ for $(t, k)$ is defined as
$$\mathbf{h}_k^{(\ell)}(t) := \mathbf{h}_k^{(\ell-1)}(t) + \tanh\left( \frac{\sum_{(t_i, k_i) \in \mathcal{H}_t} \mathbf{v}_{k_i}^{(\ell)}(t_i)\, \alpha_{k_i}^{(\ell)}(t_i; t, k)}{1 + C} \right), \tag{3}$$
where $C > 0$ is the normalization constant given by $C = \sum_{(t_i, k_i) \in \mathcal{H}_t} \alpha_{k_i}^{(\ell)}(t_i; t, k)$ and the unnormalized attention weight is
$$\alpha_{k_i}^{(\ell)}(t_i; t, k) = \exp\left( \tfrac{1}{\sqrt{D}}\, \mathbf{k}_{k_i}^{(\ell)}(t_i)^\top \mathbf{q}_k^{(\ell)}(t) \right) > 0. \tag{4}$$
The non-linear activation function $\tanh$ is applied element-wise and $D = \sum_{\ell=0}^{L} D^{(\ell)}$. The query, key, and value vectors $\mathbf{q}_k^{(\ell)}(t)$, $\mathbf{k}_k^{(\ell)}(t)$, and $\mathbf{v}_k^{(\ell)}(t)$, respectively, can be computed from the embedding of the previous layer and the corresponding weight matrices $\mathbf{Q}^{(\ell)}, \mathbf{K}^{(\ell)} \in \mathbb{R}^{D \times (D + D^{(\ell-1)})}$, $\mathbf{V}^{(\ell)} \in \mathbb{R}^{D^{(\ell)} \times (D + D^{(\ell-1)})}$ as follows:
$$\mathbf{q}_k^{(\ell)}(t) = \mathbf{Q}^{(\ell)} \mathbf{X}_t^{\ell}, \qquad \mathbf{k}_k^{(\ell)}(t) = \mathbf{K}^{(\ell)} \mathbf{X}_t^{\ell}, \qquad \mathbf{v}_k^{(\ell)}(t) = \mathbf{V}^{(\ell)} \mathbf{X}_t^{\ell}, \tag{5}$$
where $\mathbf{X}_t^{\ell} = \big[\mathbf{z}(t); \mathbf{h}_k^{(\ell-1)}(t)\big] \in \mathbb{R}^{D + D^{(\ell-1)}}$. By $\mathbf{z}(t) \in \mathbb{R}^{D}$ we denote a temporal embedding of time whose $d$-th entry is defined as
$$[\mathbf{z}(t)]_d = \begin{cases} \cos\big(t / 10^{4(d-1)/D}\big), & \text{if } d \text{ is odd}, \\ \sin\big(t / 10^{4d/D}\big), & \text{if } d \text{ is even}, \end{cases} \tag{6}$$
where $d = 0, \dots, D - 1$.
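A minimal PyTorch-style sketch of the temporal embedding in (6) and the unnormalized attention weight in (4) is given below; tensor shapes and names are illustrative assumptions and do not reproduce the exact implementation of [41].

```python
import math
import torch

def temporal_embedding(t: torch.Tensor, D: int) -> torch.Tensor:
    """Sinusoidal embedding z(t) of Eq. (6) for a batch of times t (shape [B])."""
    d = torch.arange(D, dtype=t.dtype, device=t.device)   # d = 0, ..., D-1
    odd = (d % 2 == 1)
    # Denominators 10^{4(d-1)/D} for odd d and 10^{4d/D} for even d.
    denom = torch.where(odd, 10.0 ** (4.0 * (d - 1) / D), 10.0 ** (4.0 * d / D))
    angles = t.unsqueeze(-1) / denom                       # shape [B, D]
    return torch.where(odd, torch.cos(angles), torch.sin(angles))

def attention_weight(key: torch.Tensor, query: torch.Tensor) -> torch.Tensor:
    """Unnormalized weight alpha of Eq. (4): exp(k^T q / sqrt(D)) > 0."""
    D = query.shape[-1]
    return torch.exp((key * query).sum(-1) / math.sqrt(D))
```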
This encoding is the same as the one in [44] and differs slightly from the one used in [41]; we found empirically that the former works slightly better than the latter. For a more detailed discussion of the architecture and how it compares to previous Transformer-based methods, see Appendix A in [41]. Finally, we note that, for extra model flexibility, multi-head self-attention can be easily obtained from the three equations in (5). Having computed the top-layer embeddings $\mathbf{h}_k(t)$ for all $k = 1, \dots, K$, we model the conditional PMF $p^*(k \mid t)$ as
$$p^*(k \mid t) = \frac{\exp\big(\mathbf{w}_k^\top \mathbf{h}_k(t)\big)}{\sum_{l=1}^{K} \exp\big(\mathbf{w}_l^\top \mathbf{h}_l(t)\big)}, \tag{7}$$
where $\mathbf{w}_k$ are the learnable classifier weights. As is typical for these architectures, to avoid any data leakage from future events, we mask all future events $(t_i, k_i)$ with $t < t_i$ and only use previous events for computing these embeddings.

3.2 Distribution of Inter-Event Times

For the modeling of the inter-event times, since they always take positive values, we choose a mixture of log-normal distributions whose parameters depend on the value of the previously seen mark. Specifically, given that the previously occurred mark is $k$, the PDF of the next inter-event time $\tau$ is defined as
$$g^*(\tau) = g(\tau \mid k) = \sum_{m=1}^{M} w_m^{(k)} \frac{1}{\tau s_m^{(k)} \sqrt{2\pi}} \exp\left( -\frac{\big(\log \tau - \mu_m^{(k)}\big)^2}{2 \big(s_m^{(k)}\big)^2} \right), \tag{8}$$
where $\{w_m^{(k)}\}_{m=1}^M \in \Delta^{M-1}$ are the mixture weights, $\{\mu_m^{(k)}\}_{m=1}^M \in \mathbb{R}^M$ are the mixture means, and $\{s_m^{(k)}\}_{m=1}^M \in \mathbb{R}_+^M$ are the standard deviations, for any $k = 1, \dots, K$. The log-normal mixture has several desirable features that justify our choice: (i) it efficiently approximates distributions in low dimensions, such as the 1-d distributions of inter-event times [23, 35], while satisfying a universal approximation property that provides theoretical guarantees regarding its approximation ability [8]; (ii) closed-form moments are available and can be used for predicting the next time; for instance, the mean of the distribution is given as the weighted average of the individual log-normal means, i.e.,
$$\mathbb{E}_g^{(k)}[\tau] = \sum_{m=1}^{M} w_m^{(k)} \exp\left( \mu_m^{(k)} + \tfrac{1}{2}\big(s_m^{(k)}\big)^2 \right);$$
(iii) learning the small number of parameters $\{w_m^{(k)}, \mu_m^{(k)}, s_m^{(k)}\}_{m=1}^M$ can be done in a fraction of time using fast off-the-shelf implementations based on the EM algorithm [10]. Finally, note that the dependence of the model only on the most recent mark implies a Markov property, since we do not need the entire history $\mathcal{H}_{t_i}$.

We consider two prediction tasks: predicting the next event given the history (next-event prediction) and predicting the next $P > 1$ events (long-horizon prediction). For next-event prediction, the predicted time $\hat{t}$ given the history $\mathcal{H}_{t_i}$ is computed using the mean of the appropriate mixture of log-normals, while the corresponding predicted type $\hat{k}$ of this event is evaluated based on $\mathcal{H}_{t_i}$ and the true time $t_{i+1}$, i.e.,
$$\hat{t} = t_i + \mathbb{E}_g^{(k_i)}[\tau], \qquad \hat{k} = \arg\max_k \, p^*(k \mid t_{i+1}). \tag{12}$$
The above procedure is based on the minimum Bayes risk (MBR) principle [24], which aims to predict the time and type that minimize the expected loss. This is an average L2 loss in the case of time prediction (giving a root mean squared error) and an average 0-1 loss for type prediction (giving an error rate). For the long-horizon prediction task [11, 40], we need to predict a sequence of events where, unlike the next-event prediction task, we do not have access to the true time when we predict the next event type. This could potentially lead to a cascading error effect due to the autoregressive nature of the models designed for the less challenging task of next-event prediction: after an error is made in the sequence of predictions, it cannot be corrected, so the error accumulates and affects all subsequent predictions.
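Since the argument that follows rests on the accuracy of the per-mark log-normal mixture of Section 3.2, here is a minimal sketch, assuming scikit-learn's EM-based GaussianMixture fitted to log inter-event times, of how such a mixture and the closed-form mean used in (12) could be obtained; names are illustrative, not the paper's exact implementation.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def fit_lognormal_mixtures(taus, prev_marks, K, M=3, seed=0):
    """Fit one M-component log-normal mixture per previous mark k.

    taus       : array of inter-event times tau_i > 0
    prev_marks : array of the mark preceding each tau_i (values in 0..K-1)
    Returns a dict k -> GaussianMixture fitted on log(tau).
    """
    mixtures = {}
    for k in range(K):
        x = np.log(taus[prev_marks == k]).reshape(-1, 1)
        mixtures[k] = GaussianMixture(n_components=M, random_state=seed).fit(x)
    return mixtures

def expected_inter_event_time(gmm):
    """Closed-form mixture mean: E[tau] = sum_m w_m * exp(mu_m + s_m^2 / 2)."""
    w = gmm.weights_
    mu = gmm.means_.ravel()
    var = gmm.covariances_.ravel()   # 1-d data, so one variance per component
    return float(np.sum(w * np.exp(mu + var / 2.0)))
```

With such mixtures in hand, the time prediction in (12) reduces to `t_hat = t_i + expected_inter_event_time(mixtures[k_i])`.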
We argue that this pathology can be alleviated by using a model of times that provides accurate and robust predictions given the history. This assumption is verified experimentally in Section 5.2. To generate a predicted sequence, we require the trained models $g^*(\cdot)$ and $p^*(\cdot)$ to sequentially predict events as in (12). Since we have no access to the true time $t_{i+1}$, we use the prediction $\hat{t}$ as a proxy to predict $\hat{k}$ in turn. After predicting the new event, we append it to the history and repeat the same step given the updated history until we generate a sequence of $P$ events. The exact procedure is described in Algorithm 1 in the Appendix.

The main advantage of Algorithm 1 over other methods [40] that are based on the thinning algorithm is its computational efficiency. The algorithm is fully parallelizable and can produce single steps in parallel for a batch of event sequences. This is not possible for thinning-based methods, which require one to consider single sequences at a time [40]. Consequently, our method is able to generate sequences orders of magnitude faster, which is verified by our experiments. Unlike other competitors [40] that are based on the thinning algorithm and therefore require random sampling, our algorithm is fully deterministic; for comparison, the thinning algorithm is described in Algorithm 2 of the Appendix.

4 Related Work

The decomposition in (2) has been used in the past to provide both expressive and interpretable models [27, 30]. For instance, [30] model the mark distribution with a parametric model inspired by the exponential intensity function of a Hawkes process, and the time distribution with a single log-normal; however, they use the mode instead of the mean of the distribution for predicting the time of the next event. They learn the parameters of their models separately, as we describe in (10) and (11), but they use variational inference [4] to learn the parameters of $p^*$. They attain competitive results in terms of next-time prediction, but the model lacks the flexibility of a Transformer-based architecture, as our experiments show. [35] is another work that takes advantage of the mixture of log-normal distributions to model the distribution of the inter-event times. The model is based on an RNN architecture that produces a fixed-dimensional embedding of the event history, which is used to generate the parameters of the mixture model, and the same embedding is employed to define the CPMF of the marks. In our case, we use a Transformer architecture to obtain this history embedding, which is utilized by the CPMF exclusively. Finally, the proposed model in [35] assumes that the marks are conditionally independent of the time given the history, which is not the case for our framework, as is evident in (2). The CPMF of the marks for our DTPP model shares the same architecture as the Attentive Neural Hawkes Process (A-NHP) [41]; nevertheless, they use it to model the CIF, whereas we model $p^*$.

Figure 1: Goodness-of-fit evaluation over the five real-world datasets (panels (a)-(e), including (a) Mimic-II and (e) StackOverflow; y-axis: average log-likelihood). We compare our DTPP model against five strong baselines. Results (larger is better) are accompanied by 95% bootstrap confidence intervals.

5 Experiments

We considered two different tasks to assess the predictive performance of our proposed method: goodness-of-fit/next-event prediction and long-horizon prediction.
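Before describing the setup, the deterministic long-horizon roll-out that DTPP uses in Section 5.2 (Algorithm 1 in the Appendix) can be sketched as follows; `predict_mark` stands for a classifier built on the Transformer CPMF of Section 3.1 and `mixtures` for the per-mark log-normal mixtures of Section 3.2. All names are illustrative assumptions, not the released implementation.

```python
def roll_out(history, P, mixtures, predict_mark, expected_inter_event_time):
    """Deterministically generate the next P events (sketch of Algorithm 1).

    history      : list of (time, mark) pairs observed so far
    mixtures     : dict mark -> fitted log-normal mixture (Section 3.2)
    predict_mark : callable (t, history) -> argmax_k p*(k | t) (Section 3.1)
    """
    history = list(history)
    predicted = []
    for _ in range(P):
        t_prev, k_prev = history[-1]
        # Next time: previous time plus the closed-form mixture mean, as in Eq. (12).
        t_hat = t_prev + expected_inter_event_time(mixtures[k_prev])
        # Next mark: most probable type at the predicted time (proxy for the true time).
        k_hat = predict_mark(t_hat, history)
        history.append((t_hat, k_hat))
        predicted.append((t_hat, k_hat))
    return predicted
```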
We compared our method DTPP to several strong baselines over five real-world datasets and three synthetic ones. Descriptions and summary statistics for all datasets used in this section are given in Appendix A.1. For the competing methods, we used their published implementations; more details are given in Appendix A.3. Experimental details not available in this section can be found in Appendix A. Our framework was implemented with PyTorch [31] and scikit-learn [32]; the code is available at https://github.com/aresPanos/dtpp.

5.1 Goodness-of-Fit / Next-Event Prediction

We evaluated our DTPP model to determine how well it generalizes and predicts the next event given the history on the held-out dataset. For comparison, we used five state-of-the-art baselines: three of them model the CIF using Transformers, while the other two model the CPDF of inter-event times and the CPMF of marks (see Section 4). The CIF-based baselines are the Transformer Hawkes Process (THP) [44], the Self-Attentive Hawkes Process (SAHP) [42], and the Attentive Neural Hawkes Process (A-NHP) [41]. The CPDF-based ones are the Intensity-Free Temporal Point Process (IFTPP) [35] and the VI-Decoupled Point Process (VI-DPP) [30]. We fit the above six models on a diverse collection of five popular real-world datasets, each with varied characteristics: MIMIC-II [19], Amazon [28], Taxi [37], Taobao [43], and StackOverflow V1 [20, 41]. Training details are given in Appendix A.2.

Table 1: Performance comparison between our model DTPP and various baselines in terms of next-event prediction. The root mean squared error (RMSE) measures the error of the predicted time of the next event, while the error rate (ERROR, in %) evaluates the error of the predicted mark given the true time. Results (lower is better) are accompanied by 95% bootstrap confidence intervals. THP, SAHP, and A-NHP are CIF-based methods; IFTPP is a CPDF-based method that uses a single model; VI-DPP and DTPP are CPDF-based methods that use a separate model for times and marks.

| Method | Amazon RMSE | Amazon ERROR | Taxi RMSE | Taxi ERROR | Taobao RMSE | Taobao ERROR | StackOverflow-V1 RMSE | StackOverflow-V1 ERROR |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| THP | 0.62 ± 0.03 | 65.06 ± 1.04 | 0.37 ± 0.02 | 8.55 ± 0.65 | 0.13 ± 0.02 | 41.43 ± 0.78 | 1.32 ± 0.03 | 53.86 ± 0.76 |
| SAHP | 0.58 ± 0.01 | 62.58 ± 0.32 | 0.28 ± 0.04 | 8.37 ± 0.43 | 2.26 ± 0.59 | 45.63 ± 0.58 | 1.93 ± 0.04 | 53.00 ± 0.32 |
| A-NHP | 0.42 ± 0.01 | 65.94 ± 0.31 | 0.29 ± 0.02 | 7.67 ± 0.44 | 1.44 ± 0.53 | 43.96 ± 0.55 | 1.18 ± 0.01 | 52.17 ± 0.32 |
| IFTPP | 0.41 ± 0.05 | 64.08 ± 0.34 | 0.39 ± 0.09 | 8.17 ± 0.46 | 0.33 ± 0.00 | 41.92 ± 0.54 | 1.91 ± 0.10 | 53.68 ± 0.30 |
| VI-DPP | 0.38 ± 0.00 | 65.49 ± 0.34 | 0.11 ± 0.02 | 9.49 ± 0.42 | 0.07 ± 0.00 | 41.89 ± 0.57 | 1.68 ± 0.04 | 55.06 ± 0.32 |
| DTPP | 0.12 ± 0.00 | 59.06 ± 0.35 | 0.08 ± 0.01 | 7.12 ± 0.40 | 0.05 ± 0.00 | 40.12 ± 0.59 | 1.07 ± 0.03 | 50.41 ± 0.31 |

Goodness-of-Fit. Figure 1 shows the average log-likelihood for each model on the held-out data of the five real-world datasets. Our DTPP model consistently outperforms the simple parametric VI-DPP, indicating the flexibility of using the self-attention mechanism to model the CPDF. Except for Mimic-II, DTPP achieves the highest or second-highest log-likelihood across the remaining datasets. Therefore, the separate parameterization of $p^*$ and $g^*$ does not hurt performance compared to models with a common set of learnable parameters. Finally, notice that DTPP outperforms all the CPDF-based methods on average, while the two CPDF-based methods that employ deep learning architectures, i.e., DTPP and IFTPP, exhibit better performance than the CIF-based baselines.
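The scores reported in Figure 1 and Table 1 are standard; a minimal sketch, assuming NumPy arrays of per-event predictions and targets, of how RMSE, error rate, and a 95% bootstrap confidence interval could be computed is shown below (a generic illustration, not the paper's evaluation code).

```python
import numpy as np

def rmse(pred_times, true_times):
    return float(np.sqrt(np.mean((pred_times - true_times) ** 2)))

def error_rate(pred_marks, true_marks):
    return float(np.mean(pred_marks != true_marks)) * 100.0   # in %

def bootstrap_ci(values, stat=np.mean, n_boot=1000, alpha=0.05, seed=0):
    """95% bootstrap confidence interval of a per-sequence statistic."""
    rng = np.random.default_rng(seed)
    values = np.asarray(values)
    boots = [stat(rng.choice(values, size=len(values), replace=True))
             for _ in range(n_boot)]
    lo, hi = np.quantile(boots, [alpha / 2, 1 - alpha / 2])
    return float(lo), float(hi)
```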
A plausible explanation for the latter observation is that the log-likelihood computation for the CIF-based baselines requires Monte Carlo integration, which could cause approximation errors; for the CPDF-based methods, this computation is exact. A-NHP is the clear winner among the CIF-based methods, as also shown in [41].

Next-Event Prediction. We evaluate the predictive capacity of all models by predicting each event $(t_i, k_i)$ given its history $\mathcal{H}_{t_i}$, $i = 2, \dots, N$, on held-out data. Event time prediction is measured by root mean squared error (RMSE) and event type prediction by error rate; Table 1 summarizes the results. DTPP outperforms all the baselines in both tasks. The wide performance gaps in RMSE between our model and the other baselines justify our choice of an inter-event distribution satisfying a Markov property; this result also implies that we do not need long event histories to capture the dynamics of these datasets. We also compare the average performance between CIF-based and CPDF-based (excluding VI-DPP) methods: for the CIF-based baselines the average RMSE is 0.95 and the average error rate is 37.0%, while for the CPDF-based ones we have 0.58 and 35.39%, respectively. These results support our argument that the thinning algorithm tends to harm time prediction accuracy; they also highlight the efficiency of using a separate model for the inter-event times. Additional results on Mimic-II can be found in Appendix B.1.

Synthetic datasets. To further investigate the capabilities of our model in a more controlled manner, we created a dataset by generating sequences from a randomly initialized SAHP model. Although each event has a strong dependence on its history, Figure 2 shows that our model approximates the true log-likelihood as well as A-NHP does. Moreover, DTPP's mixture model is more accurate than the thinning-based A-NHP in time prediction. The only case in which DTPP was significantly outperformed by A-NHP was a synthetic dataset generated by a 1-d Hawkes process; since no event types are present, we only use a single mixture of log-normals, which apparently is the wrong model for these data. The results are illustrated in Figure 4 of the Appendix.

Figure 2: Performance comparison between DTPP and A-NHP over the SAHP-Synthetic dataset (average log-likelihood against the true log-likelihood, and time-prediction error).

5.2 Long-Horizon Prediction

To test the performance of our model on this task, we followed the experimental setup of [40]. From the same work, we used the proposed HYPRO, the state-of-the-art method for the long-horizon prediction task, as the comparison point for DTPP. HYPRO is a globally normalized model that aims to address cascading errors that occur in autoregressive and locally normalized models, such as the models in Section 5.1. HYPRO and DTPP are based on the same Transformer architecture as A-NHP, so the main difference is that HYPRO is a CIF-based method and thus requires the thinning algorithm to sample sequences. For our experiments, we use the distance-based regularization variant of HYPRO with a Multi-NCE loss, as this variant attains the best results in [40]. Since DTPP and HYPRO share the same Transformer architecture, we used the exact same hyperparameters for a fair comparison. Note that even in this case, HYPRO has more than double the number of parameters of DTPP, since HYPRO requires an extra Transformer to model the energy function used for global normalization.
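To make concrete why thinning-based inference is costly, here is a generic, hedged sketch of Ogata-style thinning for drawing a single next event from a CIF; it is not HYPRO's or A-NHP's exact sampler, and the constant upper bound is an assumption that, in practice, must be chosen or adapted per interval. Each accepted event requires repeated intensity evaluations over the whole history, and many proposals are rejected.

```python
import numpy as np

def thinning_next_event(history, intensity, upper_bound, t_start, K, rng=None):
    """Sample the next (time, mark) from a CIF via thinning (sketch).

    intensity   : callable (k, t, history) -> lambda*_k(t)
    upper_bound : constant B with sum_k lambda*_k(t) <= B after t_start (assumed known)
    """
    rng = np.random.default_rng() if rng is None else rng
    t = t_start
    while True:
        # Propose a candidate from a homogeneous Poisson process with rate B.
        t += rng.exponential(1.0 / upper_bound)
        lam = np.array([intensity(k, t, history) for k in range(K)])
        total = lam.sum()
        # Accept the candidate with probability total / B, otherwise keep proposing.
        if total > 0 and rng.uniform() <= total / upper_bound:
            mark = rng.choice(K, p=lam / total)
            return t, mark
```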
More details on HYPRO training and hyperparameters can be found in Appendix A.

Figure 3: Performance comparison over the three real-world datasets measured by RMSE and average OTD (lower is better); bars compare HYPRO and DTPP, with panel (c) showing StackOverflow-V2. The reported results for HYPRO are based on 16 weighted samples, i.e., M = 16 for Algorithm 2 in [40].

We used three of the previous real-world datasets for evaluation because of their long sequences: Taxi, Taobao, and StackOverflow-V2 [20, 40]. For each dataset, our goal is to predict the last 20 events in a sequence, denoted by $\mathcal{H}_P$, given the history; that is, $P = 20$ in Algorithm 1. As is typical for long-horizon prediction, the standard scores used for evaluating a model's performance are the optimal transport distance (OTD) [25] and the long-horizon RMSE [40]. In Figure 3, we see that our DTPP method outperforms HYPRO across all datasets in terms of average OTD, and in terms of RMSE on all but one; HYPRO achieves a lower RMSE score only on StackOverflow-V2. These results provide corroborating evidence for our argument that the thinning algorithm can negatively affect the performance of a neural point process, even in the case of globally normalized models such as HYPRO. It is also evident that a locally normalized CPDF-based model such as DTPP is much more robust against the cascading errors to which CIF-based methods are vulnerable [41]. We believe that this robustness stems from the accurate predictions of the log-normal mixture model.

Apart from the predictive performance, we investigated the time required by the two methods to generate all the predicted sequences of the held-out dataset. Since HYPRO's inference time relies heavily on the thinning algorithm and on a hyperparameter that indicates the number of proposal sequences (denoted M in [40]), we conducted an ablation study over a varying number of proposals to compare the inference time and performance of HYPRO against DTPP. For HYPRO's inference time, apart from the prediction time, we included the time required to generate the noise sequences on which the energy function is trained. The inclusion of this time is justified by the importance of the energy function as a component of the framework, and it can be seen as a necessary pre-inference step. However, for completeness, we also compute only the prediction time of HYPRO and report it in Table 6 of the Appendix.

Table 2: Performance comparison between our model DTPP and HYPRO for the long-horizon prediction task. For HYPRO, we use {2, 4, 8, 16, 32} weighted proposals (Algorithm 2 in [40]). We report the average optimal transport distance (avg OTD) and the time (in minutes) required for predicting all the long-horizon sequences of the held-out dataset (lower is better). Params denotes the number (×10³) of trainable parameters of each method; the value for HYPRO did not survive extraction. We include error bars based on five runs.

| Method | Params (×10³) | Taxi avg OTD | Taxi Time | Taobao avg OTD | Taobao Time | StackOverflow-V2 avg OTD | StackOverflow-V2 Time |
| --- | --- | --- | --- | --- | --- | --- | --- |
| HYPRO-2 | | 20.35 ± 0.24 | 44.81 ± 0.01 | 39.73 ± 0.37 | 43.53 ± 0.03 | 39.84 ± 0.12 | 46.32 ± 0.01 |
| HYPRO-4 | | 19.86 ± 0.18 | 47.31 ± 0.04 | 38.93 ± 0.22 | 46.57 ± 0.08 | 39.57 ± 0.17 | 48.84 ± 0.06 |
| HYPRO-8 | | 19.25 ± 0.30 | 52.61 ± 0.20 | 37.30 ± 0.48 | 53.10 ± 0.17 | 39.37 ± 0.30 | 54.14 ± 0.22 |
| HYPRO-16 | | 18.90 ± 0.34 | 62.36 ± 0.27 | 37.08 ± 0.39 | 65.63 ± 0.44 | 38.99 ± 0.16 | 64.82 ± 0.34 |
| HYPRO-32 | | 18.81 ± 0.16 | 81.30 ± 0.23 | 36.96 ± 0.19 | 89.11 ± 0.71 | 38.84 ± 0.20 | 83.05 ± 0.45 |
| DTPP | 400 | 15.00 ± 0.30 | 0.01 ± 0.00 | 28.83 ± 0.26 | 0.17 ± 0.01 | 36.75 ± 0.40 | 0.03 ± 0.00 |
| Speedup | | | ×8,130 | | ×524.2 | | ×2,768.3 |
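For reference, the Speedup row of Table 2 corresponds to the ratio of the HYPRO-32 inference time to DTPP's: for Taxi, $81.30 / 0.01 = 8{,}130\times$; for Taobao, $89.11 / 0.17 \approx 524.2\times$; for StackOverflow-V2, $83.05 / 0.03 \approx 2{,}768.3\times$.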
The results are presented in Table 2, where we measure performance using the average OTD; a similar table for RMSE is in Appendix B.3. We see that our parallelizable framework takes advantage of modern GPU hardware and performs inference in a few seconds. In contrast, the thinning algorithm renders HYPRO extremely slow and impractical for inference on large datasets. In some cases, such as the Taxi dataset, HYPRO needs 8,130× more time than DTPP to perform inference. Moreover, DTPP attains better performance across all datasets even for a larger number of proposals in HYPRO. These results verify our assumption about the robustness of the mixture model in accurately predicting the next time; they also highlight the inaccurate predictions and computational burden of the thinning algorithm.

6 Discussion

We have presented DTPP, a Transformer-based probabilistic model for continuous-time event sequences. The model has been derived using the decomposability of the likelihood of an MTPP in terms of its CPDF and CPMF. We have used a mixture of log-normals and a Transformer architecture to model the CPDF and the CPMF, respectively. Our model satisfies some desirable properties compared to previous works that model the CIF, such as closed-form computation of the log-likelihood and inference without resorting to the thinning algorithm. Extensive experiments on the standard task of next-event prediction showed that our method outperformed all state-of-the-art autoregressive models. The results also reveal a more robust performance of the methods that do not require the thinning algorithm to generate event sequences over those that do. Finally, we have tested our model on the challenging task of long-horizon prediction of event sequences. Although our model has not been designed for this task, it outperformed the state-of-the-art baseline HYPRO, which is also based on the thinning algorithm; moreover, DTPP's predictions were obtained orders of magnitude faster than HYPRO's.

Limitations and future work. The main limitation of the model stems from modeling $p^*$ with a deep learning architecture, which is usually data-hungry and thus requires a large amount of data to learn the model's parameters. For this reason, the model might be unsuitable for data-scarce regimes, since it could be prone to overfitting. Regarding future work, the limitations of the thinning algorithm revealed by the experiments raise many interesting questions about how we can alleviate this pathology for the CIF-based methods so that they match the performance of the CPDF-based ones, since their representations are equivalent. Another interesting research direction would be the development of a globally normalized model similar to HYPRO for CPDF-based models.

Acknowledgements

We would like to thank Petros Dellaportas and Lina Gerontogianni for helpful discussions. This work was funded by Toyota Motor Europe.

References

[1] Gutti Jogesh Babu and Eric D. Feigelson. Spatial point processes in astronomy. Journal of Statistical Planning and Inference, 50(3):311–326, 1996.

[2] E. Bacry, M. Bompaire, S. Gaïffas, and S. Poulsen. tick: a Python library for statistical learning, with a particular emphasis on time-dependent modeling. arXiv e-prints, July 2017.

[3] Emmanuel Bacry, Adrian Iuga, Matthieu Lasnier, and Charles-Albert Lehalle. Market impacts and the life cycle of investors orders. Market Microstructure and Liquidity, 1(02):1550009, 2015.

[4] David M. Blei, Alp Kucukelbir, and Jon D. McAuliffe. Variational inference: A review for statisticians.
Journal of the American Statistical Association, 112(518):859–877, 2017.

[5] Alex Boyd, Robert Bamler, Stephan Mandt, and Padhraic Smyth. User-dependent neural sequence models for continuous-time event data. Advances in Neural Information Processing Systems, 33:21488–21499, 2020.

[6] David R. Cox. Partial likelihood. Biometrika, 62(2):269–276, 1975.

[7] Daryl J. Daley and David Vere-Jones. Basic properties of the Poisson process. An Introduction to the Theory of Point Processes: Volume I: Elementary Theory and Methods, pages 19–40, 2003.

[8] Anirban DasGupta. Asymptotic Theory of Statistics and Probability, volume 180. Springer, 2008.

[9] Abir De, Utkarsh Upadhyay, and Manuel Gomez-Rodriguez. Temporal point processes. Technical report, Saarland University, 2019.

[10] Arthur P. Dempster, Nan M. Laird, and Donald B. Rubin. Maximum likelihood from incomplete data via the EM algorithm. Journal of the Royal Statistical Society: Series B (Methodological), 39(1):1–22, 1977.

[11] Prathamesh Deshpande, Kamlesh Marathe, Abir De, and Sunita Sarawagi. Long horizon forecasting with temporal point processes. In Proceedings of the 14th ACM International Conference on Web Search and Data Mining, pages 571–579, 2021.

[12] Nan Du, Hanjun Dai, Rakshit Trivedi, Utkarsh Upadhyay, Manuel Gomez-Rodriguez, and Le Song. Recurrent marked temporal point processes: Embedding event history to vector. In Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pages 1555–1564, 2016.

[13] Joel Hasbrouck. Measuring the information content of stock trades. The Journal of Finance, 46(1):179–207, 1991.

[14] Alan G. Hawkes. Spectra of some self-exciting and mutually exciting point processes. Biometrika, 58(1):83–90, 1971.

[15] Alan G. Hawkes and David Oakes. A cluster process representation of a self-exciting process. Journal of Applied Probability, pages 493–503, 1974.

[16] Sergio Hernandez, Pedro Alvarez, Javier Fabra, and Joaquin Ezpeleta. Analysis of users behavior in structured e-commerce websites. IEEE Access, 5:11941–11958, 2017.

[17] Sepp Hochreiter and Jürgen Schmidhuber. Long short-term memory. Neural Computation, 9(8):1735–1780, 1997.

[18] Diederik P. Kingma and Jimmy Ba. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014.

[19] Joon Lee, Daniel J. Scott, Mauricio Villarroel, Gari D. Clifford, Mohammed Saeed, and Roger G. Mark. Open-access MIMIC-II database for intensive care research. In 2011 Annual International Conference of the IEEE Engineering in Medicine and Biology Society, pages 8315–8318. IEEE, 2011.

[20] Jure Leskovec and Andrej Krevl. SNAP Datasets: Stanford large network dataset collection, 2014.

[21] P. A. W. Lewis and Gerald S. Shedler. Simulation of nonhomogeneous Poisson processes by thinning. Naval Research Logistics Quarterly, 26(3):403–413, 1979.

[22] Thomas Josef Liniger. Multivariate Hawkes processes. PhD thesis, ETH Zurich, 2009.

[23] Geoffrey J. McLachlan, Sharon X. Lee, and Suren I. Rathnayake. Finite mixture models. Annual Review of Statistics and Its Application, 6:355–378, 2019.

[24] Hongyuan Mei and Jason M. Eisner. The neural Hawkes process: A neurally self-modulating multivariate point process. In Advances in Neural Information Processing Systems, pages 6754–6764, 2017.

[25] Hongyuan Mei, Guanghui Qin, and Jason Eisner. Imputing missing events in continuous-time event streams. In International Conference on Machine Learning, pages 4475–4485. PMLR, 2019.
[26] Hongyuan Mei, Guanghui Qin, Minjie Xu, and Jason Eisner. Neural Datalog through time: Informed temporal modeling via logical specification. In International Conference on Machine Learning, pages 6808–6819. PMLR, 2020.

[27] Santhosh Narayanan, Ioannis Kosmidis, and Petros Dellaportas. Flexible marked spatiotemporal point processes with applications to event sequences from association football. Journal of the Royal Statistical Society Series C: Applied Statistics, page qlad085, 2023.

[28] Jianmo Ni, Jiacheng Li, and Julian McAuley. Justifying recommendations using distantly-labeled reviews and fine-grained aspects. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 188–197, 2019.

[29] Takahiro Omi, Naonori Ueda, and Kazuyuki Aihara. Fully neural network based model for general temporal point processes. arXiv preprint arXiv:1905.09690, 2019.

[30] Aristeidis Panos, Ioannis Kosmidis, and Petros Dellaportas. Scalable marked point processes for exchangeable and non-exchangeable event sequences. In International Conference on Artificial Intelligence and Statistics, pages 236–252. PMLR, 2023.

[31] Adam Paszke, Sam Gross, Francisco Massa, Adam Lerer, James Bradbury, Gregory Chanan, Trevor Killeen, Zeming Lin, Natalia Gimelshein, Luca Antiga, et al. PyTorch: An imperative style, high-performance deep learning library. Advances in Neural Information Processing Systems, 32:8026–8037, 2019.

[32] F. Pedregosa, G. Varoquaux, A. Gramfort, V. Michel, B. Thirion, O. Grisel, M. Blondel, P. Prettenhofer, R. Weiss, V. Dubourg, J. Vanderplas, A. Passos, D. Cournapeau, M. Brucher, M. Perrot, and E. Duchesnay. Scikit-learn: Machine learning in Python. Journal of Machine Learning Research, 12:2825–2830, 2011.

[33] Jakob Gulddahl Rasmussen. Lecture notes: Temporal point processes and the conditional intensity function. arXiv preprint arXiv:1806.00221, 2018.

[34] Diego Rybski, Sergey V. Buldyrev, Shlomo Havlin, Fredrik Liljeros, and Hernán A. Makse. Communication activity in a social network: relation between long-term correlations and inter-event clustering. Scientific Reports, 2(1):560, 2012.

[35] Oleksandr Shchur, Marin Biloš, and Stephan Günnemann. Intensity-free learning of temporal point processes. In International Conference on Learning Representations, 2019.

[36] Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, and Illia Polosukhin. Attention is all you need. In NIPS, 2017.

[37] Chris Whong. FOILing NYC's taxi trip data. https://chriswhong.com/open-data/foil_nyc_taxi/, 2014.

[38] Shuai Xiao, Junchi Yan, Xiaokang Yang, Hongyuan Zha, and Stephen Chu. Modeling the intensity function of point process via recurrent neural networks. In Proceedings of the AAAI Conference on Artificial Intelligence, 2017.

[39] Siqiao Xue, Xiaoming Shi, Zhixuan Chu, Yan Wang, Fan Zhou, Hongyan Hao, Caigao Jiang, Chen Pan, Yi Xu, James Y. Zhang, et al. EasyTPP: Towards open benchmarking the temporal point processes. arXiv preprint arXiv:2307.08097, 2023.

[40] Siqiao Xue, Xiaoming Shi, James Zhang, and Hongyuan Mei. HYPRO: A hybridly normalized probabilistic model for long-horizon prediction of event sequences. Advances in Neural Information Processing Systems, 35:34641–34650, 2022.

[41] Chenghao Yang, Hongyuan Mei, and Jason Eisner. Transformer embeddings of irregularly spaced events and their participants.
In International Conference on Learning Representations, 2022.

[42] Qiang Zhang, Aldo Lipani, Omer Kirnap, and Emine Yilmaz. Self-attentive Hawkes process. In International Conference on Machine Learning, pages 11183–11193. PMLR, 2020.

[43] Han Zhu, Xiang Li, Pengye Zhang, Guozheng Li, Jie He, Han Li, and Kun Gai. Learning tree-based deep model for recommender systems. In Proceedings of the 24th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, pages 1079–1088, 2018.

[44] Simiao Zuo, Haoming Jiang, Zichong Li, Tuo Zhao, and Hongyuan Zha. Transformer Hawkes process. In International Conference on Machine Learning, pages 11692–11702. PMLR, 2020.

A Experimental details

A.1 Dataset Details

Summary statistics and characteristics of the datasets used are given in Table 3. A more detailed description is given below:

Hawkes1-Synthetic. This dataset contains synthetic event sequences from a univariate Hawkes process sampled using Tick [2], whose conditional intensity function is defined by $\lambda^*(t) = \mu + \sum_{t_i < t} \cdots$