# The Soft Cumulative Constraint with Quadratic Penalty

Yanick Ouellet, Claude-Guy Quimper
Université Laval, Québec, Canada
yanick.ouellet.2@ulaval.ca, claude-guy.quimper@ift.ulaval.ca

Abstract

The CUMULATIVE constraint greatly contributes to the success of constraint programming at solving scheduling problems. SOFTCUMULATIVE, a version of the CUMULATIVE constraint where overloading the resource incurs a penalty, is however less studied. We introduce a checker and a filtering algorithm for SOFTCUMULATIVE, both inspired by the energetic reasoning rule for CUMULATIVE. The algorithms can be used with a classic linear penalty function, but also with a quadratic penalty function, where the penalty of overloading the resource increases quadratically with the amount of the overload. We show that these algorithms are more general than existing algorithms and outperform a decomposition of SOFTCUMULATIVE in practice.

1 Introduction

Scheduling problems where tasks share a finite amount of resources are everywhere in industry. For instance, a university might want to schedule courses in classrooms, or a factory might need to plan its production to minimize the peak power usage of its machines. The constraint programming community invested significant efforts in developing the CUMULATIVE constraint to help solve problems where the capacity of a resource can never be overloaded. However, a less studied but equally important problem is to allow the resource to be overloaded in exchange for a penalty. For instance, a pharmaceutical company could ask its workers to work overtime to ensure that enough doses of a COVID vaccine are produced before a deadline.

We present a checker and a filtering algorithm for the SOFTCUMULATIVE constraint, a generalization of the well-known CUMULATIVE that allows resource overloads. Our algorithms are based on the energetic reasoning rule used by CUMULATIVE. The algorithms work for both linear and quadratic penalty functions. To the best of our knowledge, a quadratic penalty function cannot currently be modelled using an existing global constraint.

Section 2 presents background from the literature. We introduce our version of SOFTCUMULATIVE (Section 3), then present our checker (Section 4) and filtering (Section 5) algorithms. We explain how to use our algorithms with lazy clause generation solvers in Section 6. Section 7 shows how relevant our algorithms are in practice before concluding.

2 Background

Scheduling

Consider a set I of n tasks. Each task i ∈ I has to be executed between its earliest starting time est_i and its latest completion time lct_i and has processing time p_i. During its execution, the task requires h_i units of a resource at each time point. Let a task i be represented by the tuple ⟨est_i, lct_i, p_i, h_i⟩. From these parameters, it is possible to compute the earliest completion time ect_i = est_i + p_i and the latest starting time lst_i = lct_i - p_i of a task i. The energy e_i = p_i · h_i represents the total amount of resource consumed during the execution of the task. The earliest starting time est_Ω = min_{i ∈ Ω} est_i of a set of tasks Ω ⊆ I is the earliest time point at which any task in Ω can start. Similarly, the latest completion time lct_Ω = max_{i ∈ Ω} lct_i of Ω is the latest time point at which any task in Ω can end. A task i has a compulsory part in the interval [lst_i, ect_i) if lst_i < ect_i. The task must necessarily execute during its compulsory part since it cannot start later than its latest starting time and cannot end before its earliest completion time.
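To make the derived quantities concrete, here is a minimal C++ sketch of a task record and its derived values. The struct and method names are ours, not taken from the paper.

```cpp
#include <cassert>

// Illustrative only: a task as defined above, with its derived quantities.
struct Task {
    int est, lct, p, h;
    int ect() const { return est + p; }       // earliest completion time
    int lst() const { return lct - p; }       // latest starting time
    int energy() const { return p * h; }      // e_i = p_i * h_i
    bool hasCompulsoryPart() const { return lst() < ect(); }
};

int main() {
    Task t{0, 10, 4, 2};   // est=0, lct=10, p=4, h=2 (the task of Figure 1)
    assert(t.ect() == 4 && t.lst() == 6 && t.energy() == 8);
    assert(!t.hasCompulsoryPart());   // lst=6 is not smaller than ect=4
    return 0;
}
```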
Using constraint programming, one can use the CUMULATIVE(S, p, h, C) constraint (Aggoun and Beldiceanu 1993) to enforce that, at any time point, the resource consumption of the tasks in execution is less than or equal to the capacity C of the resource. The constraint uses one starting time variable S_i ∈ [est_i, lst_i] for each task i in the problem, as well as parameters for their processing times, heights, and the capacity of the resource. Deciding whether the CUMULATIVE constraint can be satisfied is NP-complete (Aggoun and Beldiceanu 1993). Filtering algorithms for the constraint can therefore enforce neither domain nor bounds consistency in polynomial time. Instead, it is necessary to rely on detection and filtering rules that work on a relaxation of the constraint. Many such rules have been introduced over the years. The rules relevant to this paper are presented below. Some rules have fast algorithms to apply them while others are slower but produce stronger inconsistency detection and filtering.

The Time-Tabling rule (1) (Beldiceanu and Carlsson 2002) uses a reasoning based on the compulsory parts of the tasks. At any time point, if the sum of the heights of the compulsory parts is greater than the capacity of the resource, the CUMULATIVE constraint cannot be satisfied. This rule only detects failures; we refer the reader to (Beldiceanu and Carlsson 2002) for the filtering rule based on the Time-Tabling.

∃ t : ∑_{i ∈ I : lst_i ≤ t < ect_i} h_i > C ⟹ fail   (1)

The Edge-Finding rule (2) (Mercier and Van Hentenryck 2008) checks for precedences between a set of tasks Ω and a task i. If the combined energy e_{Ω ∪ {i}} of the tasks in Ω and i is greater than the energy available in the interval [est_{Ω ∪ {i}}, lct_Ω), then i must end after all the tasks in Ω have ended. Indeed, i is the only task that can execute outside of [est_{Ω ∪ {i}}, lct_Ω) and prevent an overflow in this interval.

∀ i ∈ I, ∀ Ω ⊆ I \ {i} : e_{Ω ∪ {i}} > C · (lct_Ω - est_{Ω ∪ {i}}) ⟹ i ends after Ω   (2)

Of particular interest for this paper is the energetic reasoning rule (Lopez and Esquirol 1996). It is one of the strongest rules, but also one of the slowest to apply. The energetic reasoning is based on the notion of left and right shift in an interval. Let LS(i, l, u) = h_i · max(0, min(u, ect_i) - max(l, est_i)) be the left-shift of task i in the interval [l, u). It represents the amount of energy that the task consumes in the interval if it is scheduled at its earliest. The right-shift RS(i, l, u) = h_i · max(0, min(u, lct_i) - max(l, lst_i)) of task i in the interval [l, u) is symmetric and represents the amount of energy that task i consumes in the interval if it is scheduled at its latest. The minimum intersection MI(i, l, u) = min(LS(i, l, u), RS(i, l, u)) is the minimum between the left-shift and the right-shift. Regardless of when a task is scheduled, it always consumes at least MI(i, l, u) units of energy in a given interval [l, u), as shown in Figure 1.

Figure 1: Example of the minimum intersection of a task with est_1 = 0, lct_1 = 10, p_1 = 4 and h_1 = 2 in the interval [2, 9): LS(1, 2, 9) = 4, RS(1, 2, 9) = 6, and MI(1, 2, 9) = min(4, 6) = 4. The left gray area represents the left-shift of 4 while the right gray area represents the right-shift of 6. The minimum intersection is the minimum between the left and the right shift, which is 4. Regardless of where the task is scheduled, it always consumes at least 4 units of energy in that interval.
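The left-shift, right-shift, and minimum intersection are easy to compute directly from the task parameters. The following sketch (function and type names are ours; the max(0, ·) guard handles tasks that do not intersect the interval) reproduces the values of Figure 1.

```cpp
#include <algorithm>
#include <cassert>

struct Task { int est, lct, p, h; };

int leftShift(const Task& t, int l, int u) {
    return t.h * std::max(0, std::min(u, t.est + t.p) - std::max(l, t.est));
}
int rightShift(const Task& t, int l, int u) {
    return t.h * std::max(0, std::min(u, t.lct) - std::max(l, t.lct - t.p));
}
int minimumIntersection(const Task& t, int l, int u) {
    return std::min(leftShift(t, l, u), rightShift(t, l, u));
}

int main() {
    Task t1{0, 10, 4, 2};                        // the task of Figure 1
    assert(leftShift(t1, 2, 9) == 4);            // scheduled at its earliest
    assert(rightShift(t1, 2, 9) == 6);           // scheduled at its latest
    assert(minimumIntersection(t1, 2, 9) == 4);  // at least 4 units in [2, 9)
    return 0;
}
```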
Let MI(Ω, l, u) = ∑_{i ∈ Ω} MI(i, l, u) be the sum of the minimum intersections of all the tasks in Ω. The satisfiability rule for the energetic reasoning (3) states that if there exists an interval for which this sum is greater than the energy available in that interval on the resource, then the constraint cannot be satisfied.

∀ l, u : MI(I, l, u) > C · (u - l) ⟹ fail   (3)

Baptiste et al. (2001) showed that testing the rule on O(n²) intervals, called intervals of interest, is equivalent to testing the rule on all possible intervals. There are three types of intervals of interest (4). While there are O(n²) intervals of interest, there are also O(n²) distinct lower bounds and O(n²) distinct upper bounds. Baptiste, Le Pape, and Nuijten (2001) proposed an O(n²) algorithm to apply the rule using the intervals of interest and Ouellet and Quimper (2018) improved it to O(n log² n).

Intervals of interest:
T_e = {[l, u) | l ∈ O_1, u ∈ O_2} ∪ {[l, u) | l ∈ O_1, u ∈ O(l)} ∪ {[l, u) | u ∈ O_2, l ∈ O(u)}
where
O_1 = {est_i | i ∈ I} ∪ {ect_i | i ∈ I} ∪ {lst_i | i ∈ I}
O_2 = {lst_i | i ∈ I} ∪ {ect_i | i ∈ I} ∪ {lct_i | i ∈ I}
O(t) = {est_i + lct_i - t | i ∈ I}   (4)

It is also possible to filter the starting times using a rule similar to the checker rule. Baptiste proposed a Θ(n³) algorithm to do so. Tesch (2018) and Ouellet and Quimper (2018) independently proposed algorithms in O(n² log n) and O(n² log² n) respectively.

Lazy Clause Generation

Several modern constraint programming solvers, such as Chuffed (Chu 2011) and OR-Tools (Perron and Furnon 2019), use the lazy clause generation technique (Ohrimenko, Stuckey, and Codish 2009) to enhance their performance. The technique requires that the solver use a SAT engine as an additional propagator. Each integer variable is also encoded with multiple Boolean variables, for the benefit of the SAT engine. For instance, an integer variable X can be encoded with Boolean variables named ⟦X ≥ 1⟧, ⟦X ≥ 2⟧, etc. Furthermore, checker and filtering algorithms must explain the failures and the filtering they produce using SAT clauses. This allows the SAT engine to deduce nogoods, redundant SAT constraints that are added during the search to prevent the solver from repeating unfruitful exploration. For instance, a checker algorithm that fails because the value of a variable X is too low could explain it with the clause ¬⟦X ≥ 5⟧ ⟹ fail, where ¬⟦X ≥ 5⟧ is a literal negating the Boolean variable that encodes the fact that X is greater than or equal to 5.

To allow the SAT engine to generate nogoods that can be reused more often, one produces the most general clause possible, with as few and as general literals as possible. For instance, the literal ⟦X ≥ 2⟧ is more general than ⟦X ≥ 3⟧ because the latter implies the former. It is often challenging to generate general explanations for global constraints since, by their very nature, global constraints work on several variables, often leading to complex explanations. This means that, unlike in constraint programming without nogoods, more filtering is not necessarily better, even without considering the running time of the algorithms. A decomposition of a global constraint that uses binary constraints with small and general explanations could be better than a global constraint with poor explanations but better filtering. Without nogoods, one would need to balance the running time of an algorithm with its filtering strength. With lazy clause generation, one must also consider the quality of the explanations. One can use experiments to find the best balance between these three factors.
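Before turning to related work, here is a short sketch of the energetic reasoning background above: the checker of rule (3), evaluated only on the intervals of interest of (4). This is the naive evaluation, not the faster algorithms cited above, and all names are ours.

```cpp
#include <algorithm>
#include <set>
#include <utility>
#include <vector>

struct Task { int est, lct, p, h; };

int minimumIntersection(const Task& t, int l, int u) {
    int ls = t.h * std::max(0, std::min(u, t.est + t.p) - std::max(l, t.est));
    int rs = t.h * std::max(0, std::min(u, t.lct) - std::max(l, t.lct - t.p));
    return std::min(ls, rs);
}

// Energetic reasoning checker: returns false (fail) if some interval of
// interest violates rule (3). Each interval is evaluated in O(n), so this
// naive version runs in O(n^3).
bool energeticCheck(const std::vector<Task>& I, int C) {
    std::set<int> O1, O2;
    for (const Task& t : I) {
        O1.insert(t.est); O1.insert(t.est + t.p); O1.insert(t.lct - t.p);
        O2.insert(t.lct - t.p); O2.insert(t.est + t.p); O2.insert(t.lct);
    }
    std::vector<std::pair<int, int>> intervals;
    for (int l : O1) for (int u : O2) intervals.push_back({l, u});
    for (int l : O1) for (const Task& t : I) intervals.push_back({l, t.est + t.lct - l});
    for (int u : O2) for (const Task& t : I) intervals.push_back({t.est + t.lct - u, u});
    for (auto [l, u] : intervals) {
        if (l >= u) continue;                       // keep intervals of positive length
        int mi = 0;
        for (const Task& t : I) mi += minimumIntersection(t, l, u);
        if (mi > C * (u - l)) return false;         // rule (3) detects a failure
    }
    return true;
}
```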
Related Work

De Clercq et al. (2010) introduced a constraint that extends the CUMULATIVE constraint with the following additional parameters and variables:

- A sequence J = ⟨0, J_1, ..., J_{k-1}, lct_I⟩ forming consecutive intervals such that interval i is defined as [J_{i-1}, J_i).
- A sequence L of capacities such that L_i ≤ C is the capacity in the i-th interval of J.
- A sequence Cost of integer variables such that Cost_i is the overcost in the i-th interval of J.
- An integer variable Z representing the global penalty of the constraint.
- A function costFunction ∈ {max, sum} indicating whether the Cost variables should be the maximum or the sum of the overcosts in the intervals.
- A function penaltyFunction ∈ {max, sum} indicating whether the penalty variable Z should be the maximum of or the sum of the Cost variables.

The authors proposed both an O((n + |J|) log(n + |J|)) Time-Tabling and an O(n · |J| · k · log n) Edge-Finding algorithm, where k ≤ n is the number of distinct heights among the tasks. They do not generate explanations of the filtering.

3 Soft Cumulative

We define the SOFTCUMULATIVE constraint as follows, where the penalty function f is either linear or quadratic.

SOFTCUMULATIVE(S, p, h, C, Z, f) ⟺ Z ≥ ∑_t f( max(0, ∑_{i ∈ I : S_i ≤ t < S_i + p_i} h_i - C) )   (5)

4 Checker Algorithm

For an interval [l, u), let overcost(l, u) denote a lower bound on the penalty necessarily incurred within [l, u), computed from the minimum intersections of the tasks and the penalty function f. The checker fails as soon as the overcosts accumulated over a sequence of disjoint intervals exceed the upper bound Z̄ of the penalty variable Z:

∃ t_1 < t_2 < ⋯ < t_m ∈ T : ∑_{j=1}^{m-1} overcost(t_j, t_{j+1}) > Z̄ ⟹ fail   (7)

Algorithm 1 takes as input the set of tasks, the capacity of the resource, and a set of critical time points T. By changing the number of critical time points, we can change the complexity of the algorithm, which is Θ(|T|²), but also the quality of the resulting lower bound. With more time points, the algorithm is slower, but the lower bound is better.

Algorithm 1: Overcost_Bound(I, C, T)
  Φ ← [0, ..., 0]; π ← [nil, ..., nil];
  for u = 2..|T| do
    for l = 1..u-1 do
      c ← Φ[l] + overcost(T[l], T[u]);
      if Φ[u] < c then
        Φ[u] ← c; π[u] ← l;
  return ⟨Φ[|T|], π⟩;

Algorithm 2: Soft_Energetic_Checker(I, C, T, Z̄)
  ⟨α, π⟩ ← Overcost_Bound(I, C, T);
  if α ≤ Z̄ then return pass else return fail;

The Overcost_Bound algorithm returns the lower bound and the vector π, a parent vector containing the choices made by the dynamic programming. The vector π is not relevant to the checker, but it is later used by the filtering algorithm. Once the lower bound is computed, we can check whether it is greater than the upper bound Z̄ of the overcost (Algorithm 2). If so, the algorithm returns a failure.
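As an illustration, here is a compact C++ sketch of the longest-path computation of Algorithm 1 and the check of Algorithm 2. The exact definition of overcost(l, u) is not reproduced in this excerpt, so the sketch substitutes a simple energetic overload term that is only a valid bound for a linear penalty f(x) = x; treat it as a placeholder, not the paper's definition. The set T is assumed non-empty and sorted, and the names are ours.

```cpp
#include <algorithm>
#include <utility>
#include <vector>

struct Task { int est, lct, p, h; };

int minimumIntersection(const Task& t, int l, int u) {
    int ls = t.h * std::max(0, std::min(u, t.est + t.p) - std::max(l, t.est));
    int rs = t.h * std::max(0, std::min(u, t.lct) - std::max(l, t.lct - t.p));
    return std::min(ls, rs);
}

// Placeholder overcost of [l, u): the minimum-intersection energy exceeding the
// available energy C * (u - l). Only valid as a penalty lower bound when f(x) = x.
int overcost(const std::vector<Task>& I, int C, int l, int u) {
    int e = 0;
    for (const Task& t : I) e += minimumIntersection(t, l, u);
    return std::max(0, e - C * (u - l));
}

// Algorithm 1: longest path over the critical time points T (0-indexed here).
// Each overcost is recomputed from scratch, which adds a factor n over the
// Theta(|T|^2) complexity reported in the paper.
std::pair<int, std::vector<int>> overcostBound(const std::vector<Task>& I, int C,
                                               const std::vector<int>& T) {
    int m = static_cast<int>(T.size());
    std::vector<int> Phi(m, 0), pi(m, 0);
    for (int u = 1; u < m; ++u) {
        pi[u] = u - 1;                               // default parent
        for (int l = 0; l < u; ++l) {
            int c = Phi[l] + overcost(I, C, T[l], T[u]);
            if (Phi[u] < c) { Phi[u] = c; pi[u] = l; }
        }
    }
    return {Phi[m - 1], pi};
}

// Algorithm 2: pass if and only if the lower bound does not exceed the upper bound on Z.
bool softEnergeticChecker(const std::vector<Task>& I, int C,
                          const std::vector<int>& T, int zUpperBound) {
    return overcostBound(I, C, T).first <= zUpperBound;
}
```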
Let T_e be the set of O(n²) time points in (4) used by the intervals of interest of Baptiste et al. (2001). Let T_s = {est_i | i ∈ I} ∪ {ect_i | i ∈ I} ∪ {lst_i | i ∈ I} ∪ {lct_i | i ∈ I} be the set of 4n critical time points of the tasks.

Theorem 1. If Z̄ is set to 0 and T to T_e, then Soft_Energetic_Checker is equivalent to the energetic checker for the CUMULATIVE constraint.

Proof. With an upper bound of 0 on the overcost, Soft_Energetic_Checker returns a failure if and only if the computed lower bound is 1 or more. There are two cases. If the classic energetic checker passes, there does not exist an interval for which the capacity is exceeded. This means that the overcost of all intervals is 0. In that case, the lower bound found by our algorithm is 0 and the Soft_Energetic_Checker passes. If the classic energetic checker fails, there is at least one interval for which the capacity is exceeded (and for which the overcost is greater than 0) and that interval is one of the intervals of interest. While searching for the longest path, the Soft_Energetic_Checker processes all possible intervals of positive length formed by the time points in T. Since T corresponds to the lower bounds and upper bounds of the intervals of interest, all intervals of interest (and some intervals that are not of interest) are examined, including the one with a positive overcost. Hence, the longest path has a cost of at least 1 and the Soft_Energetic_Checker fails.

Since there are O(n²) lower bounds and O(n²) upper bounds in T_e, considering them all would lead to a Soft_Energetic_Checker with a complexity of O(n⁴), which is not reasonable for an algorithm that is executed thousands of times during the search. Instead, we propose to use the subset T_s, which is linear in size. This subset gave us good results while keeping the complexity of the algorithm reasonable. Furthermore, this subset is sufficient to apply two weaker rules, the Time-Tabling and the Edge-Finding, as shown in Theorems 2 and 3.

Theorem 2. If Z̄ = 0 and T = T_s, then the Soft_Energetic_Checker applies the Time-Tabling rule.

Proof. The Time-Tabling rule detects a failure if, at any time point, the sum of the compulsory parts of the tasks is greater than the capacity of the resource. Suppose that t is such a time point. Let lst_i be the latest lst less than or equal to t and let ect_j be the earliest ect greater than t. By definition, the total amount of energy consumed by the compulsory parts can only increase at time points that are lst and it can only decrease at time points that are ect. Thus, the amount of energy consumed by the compulsory parts at each time point in [lst_i, ect_j) is at least the amount at t. Since the amount consumed at t is sufficient to overload the resource, the total amount of energy in [lst_i, ect_j) is sufficient to overload the interval. Our checker algorithm examines every interval formed from time points in T, and all ect and lst are in that set. Thus, our checker raises a failure when examining [lst_i, ect_j).

5 Filtering Algorithm

We propose an algorithm based on the checker for filtering the earliest starting times of the tasks (see Algorithm 3). Filtering the latest completion times is symmetric. The idea behind the algorithm is to schedule a task at its earliest and then run the checker algorithm to check whether this can lead to a valid solution. If it cannot, we can filter the earliest starting time. We use two strategies to save costly calls to the checker. First, we use a constant time check to see whether it is possible that scheduling a task at its earliest makes the checker fail. Second, we use the longest path returned by the checker in case of a failure to filter the earliest starting time of the task as much as possible without recalling the checker. We do this by sliding the execution of the task from its earliest starting time up to a point where the cost of the path no longer causes a failure. Each time the task is shifted by one unit of time, the weights of at most four edges on the path need to be updated, namely the edges that span the intervals that contain est_i, est_i + 1, ect_i, and ect_i + 1.

The algorithm starts by computing a lower bound on the overcost using the checker. Then, for each task i, it performs a check, at line 1, to verify whether the task needs to be filtered. The check changes slightly depending on the cost function. In the linear case, if the free energy of the task, which is the portion of the task that is not compulsory, is greater than the difference between the upper bound on the overcost and the computed lower bound, the task might need to be filtered.
Otherwise, scheduling the task at its earliest cannot sufficiently increase the lower bound to cause a failure, and the task does not need to be filtered. Indeed, by fixing task i at its earliest, we are effectively only using its left-shift. Thus, the minimum intersection in some intervals can increase, but only by the free energy. The energy from the compulsory part is already included because, by definition, it is both in the left and the right shift. This check gains in importance as the search reaches the bottom of the search tree: more tasks are fixed, fewer tasks have free energy, and fewer tasks need to be filtered. For a non-linear cost function, one additional unit of energy can increase the cost by more than one unit. Thus, the check only verifies whether the task has free energy.

Algorithm 3: Soft_Energetic_Filtering(I, C, T, Z̄, f)
  e_min ← 0;
  if f is a linear function then
    lb ← Overcost_Bound(I, C, T);
    e_min ← Z̄ - lb;
  for i ∈ I do
1   if h_i · (p_i - max(0, ect_i - lst_i)) > e_min then
      t ← lct_i;
      lct_i ← est_i + p_i;
      lb ← Overcost_Bound(I, C, T);
      while lb > Z̄ do
        est_i ← est_i + 1; lct_i ← lct_i + 1;
2       lb ← Overcost_Bound(I, C, T);
      lct_i ← t;
  return {est_i | i ∈ I};

Algorithm 4: Compute_Longest_Path(I, C, T, π)
  u ← |T|;
  P ← [T[u]];
  while u > 1 do
    u ← π[u];
    push T[u] to the front of P;
  return P;

If the algorithm finds that a task needs to be filtered, it fixes the task at its earliest starting time and runs the checker. If there is a failure, the task cannot be executed at its earliest and needs to be filtered. To do so, the algorithm increments its est_i by one. To continue the filtering process, lct_i is also incremented. The process is repeated until the computed lower bound on the overcost no longer exceeds the upper bound. Note that even if the checker algorithm initially reports a success, it is possible for the filtering algorithm to remove all possible values for the starting time of a task, leading to a failure. This is because the energetic reasoning relaxation allows all tasks to be preempted. However, by fixing a task at its earliest, our filtering algorithm removes the possibility of preemption for the task being filtered.

To improve the efficiency of the algorithm, one can compute the longest path P using Compute_Longest_Path (Algorithm 4), then replace line 2 by lb ← lb + Adjust_Cost(I, i, P). This allows filtering task i as much as possible along the longest path without having to re-run the checker. The lower bound is instead adjusted incrementally.

Algorithm 5: Adjust_Cost(I, i, P)
  Create a task i′ with ⟨est_i + 1, lct_i + 1, p_i, h_i⟩;
  I′ ← I \ {i} ∪ {i′};
  D ← {k | ∃ x ∈ {est_i, est_i + 1, ect_i, ect_i + 1}, P_k ≤ x < P_{k+1}};
  return ∑_{k ∈ D} overcost(I′, P_k, P_{k+1}) - ∑_{k ∈ D} overcost(I, P_k, P_{k+1});

Algorithm 5 works as follows. Since the increment is only by one, the minimum intersection can change in at most four intervals: the interval containing the current est_i, the interval containing the current ect_i, and the intervals immediately following these two intervals if est_i or ect_i is the upper bound of its interval. Computing the change in cost is done in constant time by keeping in cache the minimum intersection of each interval. With this adjustment, the complexity of the algorithm is O(k · |T|² + r), where k ≤ n is the number of tasks for which the free energy check passes and r is the total number of values removed from the domains. Even if r can be as high as the makespan, this is not necessarily bad since each value removed from the domain of a variable helps reduce the search space. We want to minimize the amount of time the algorithm spends on work that leads to no filtering; spending time when we are sure to filter is acceptable. If we use the set T_s of 4n time points, as we suggest for the checker algorithm, we have a complexity of O(k · n² + r). If we instead use the set T_e, we have a complexity of O(k · n⁴ + r). Note that it is also possible to increase the earliest starting time by more than one unit at a time, but such an adjustment depends on the cost function f(x).
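To tie the pieces together, here is a simplified C++ sketch of the main loop of Algorithm 3, without the Adjust_Cost optimization. It reuses the placeholder overcost bound from the previous sketch (again only valid for a linear penalty), assumes the checker has already passed on the current domains, and all names are ours.

```cpp
#include <algorithm>
#include <vector>

struct Task { int est, lct, p, h; };

// Placeholder lower bound on the overcost (linear penalty assumed): longest path
// over the critical time points T, as in the previous sketch.
int overcostBound(const std::vector<Task>& I, int C, const std::vector<int>& T) {
    auto mi = [](const Task& t, int l, int u) {
        int ls = t.h * std::max(0, std::min(u, t.est + t.p) - std::max(l, t.est));
        int rs = t.h * std::max(0, std::min(u, t.lct) - std::max(l, t.lct - t.p));
        return std::min(ls, rs);
    };
    int m = static_cast<int>(T.size());
    std::vector<int> Phi(m, 0);
    for (int u = 1; u < m; ++u)
        for (int l = 0; l < u; ++l) {
            int e = 0;
            for (const Task& t : I) e += mi(t, T[l], T[u]);
            Phi[u] = std::max(Phi[u], Phi[l] + std::max(0, e - C * (T[u] - T[l])));
        }
    return Phi[m - 1];
}

// Simplified Algorithm 3: filter the earliest starting times. zUB is the upper
// bound on the penalty variable Z.
void softEnergeticFiltering(std::vector<Task>& I, int C,
                            const std::vector<int>& T, int zUB) {
    int eMin = zUB - overcostBound(I, C, T);             // slack before a failure
    for (Task& t : I) {
        int free = t.h * (t.p - std::max(0, (t.est + t.p) - (t.lct - t.p)));
        if (free <= eMin) continue;                      // line 1: cheap filter test
        int savedLct = t.lct;
        t.lct = t.est + t.p;                             // fix the task at its earliest
        while (t.est <= savedLct - t.p && overcostBound(I, C, T) > zUB) {
            ++t.est; ++t.lct;                            // slide the task one unit right
        }
        // In the solver, est_i > lst_i at this point means the domain is empty: fail.
        t.lct = savedLct;                                // restore the latest completion
    }
}
```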
Theorem 3. If Z̄ = 0 and T = T_s, then the Soft_Energetic_Filtering algorithm applies the Edge-Finding rule.

Proof. Suppose the Edge-Finding rule (2) finds a precedence stating that a set Ω must end before task i ends. The Soft_Energetic_Filtering algorithm filters task i until the precedence is no longer detected. Indeed, if the Edge-Finding rule holds for i and Ω, this means that there is not enough energy in [est_{Ω ∪ {i}}, lct_Ω) for both the energy of the tasks in Ω and the energy of i, given that i is left-shifted. In other words, when i is left-shifted, overcost(est_{Ω ∪ {i}}, lct_Ω) > 0. Since both the lower bound and the upper bound of that interval are est and lct time points, they are in T_s. Thus, when filtering task i, the Soft_Energetic_Filtering algorithm finds at least one path with positive overcost. Since Z̄ = 0, task i is filtered by at least 1 unit. This reasoning holds as long as the precedence is detected by the Edge-Finding rule.

6 Explanations

We show how to generate explanations to allow our algorithms to be used with lazy clause generation solvers. The goal is to have explanations with as few and as general literals as possible. However, due to the global nature of SOFTCUMULATIVE, failures or filtering are caused by the interaction between several tasks and the overcost variable. This means that our explanations will necessarily contain multiple variables and will not be as general as explanations from binary constraints.

To explain a failure, we use an explanation of the form conditions ⟹ fail. To explain filtering, we instead use the form conditions ⟹ ⟦S_i ≥ est′_i⟧, where est′_i is the new earliest starting time. The conditions are the same for a failure and for filtering since both are caused by an excess in the energy of some tasks in the intervals along a path. For that reason, the rest of the section focuses on the conditions on the left side of the implication.

In any explanation for SOFTCUMULATIVE, we must include the literal ⟦Z ≤ Z̄⟧ for the upper bound on the overcost variable. There are several ways to explain the start variables of the tasks. A simple, naive explanation would include the literals ⟦S_i ≥ est_i⟧ ∧ ⟦S_i ≤ lst_i⟧ for each task i. However, not all the tasks in the scope of SOFTCUMULATIVE contribute to the failure or filtering. In fact, only the tasks that have a positive minimum intersection in one of the intervals with positive overcost on the longest path contribute to the failure or filtering. This means we can exclude the other tasks from the explanation.

It is possible to generalize some literals. The minimum intersection of a task in an interval comes from the minimum between the left-shift and the right-shift. We can reduce the larger of the left and the right shift down to the other value. For instance, if the left-shift of a task is 4 and its right-shift is 3, we can reduce the left-shift to 3 while still keeping our explanation valid. We can do that by increasing the est for the left-shift and reducing the lst for the right-shift, as shown in (8). Note that this rule may produce multiple literals for the same variable. It is then possible to simplify the literals ⟦S_i ≥ a⟧ ∧ ⟦S_i ≥ b⟧ to ⟦S_i ≥ max(a, b)⟧ to keep only the most restrictive literal.

⋀_{i ∈ I, 1 ≤ k < |P| : overcost(P_k, P_{k+1}) > 0 ∧ MI(i, P_k, P_{k+1}) > 0}  ⟦S_i ≤ P_{k+1} - x⟧ ∧ ⟦S_i ≥ P_k + x - p_i⟧
where x = min(LS(i, P_k, P_{k+1}), RS(i, P_k, P_{k+1})) / h_i   (8)
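To make the construction concrete, here is a small C++ sketch that assembles the naive conditions of an explanation from a longest path P: only tasks with a positive minimum intersection in an interval of positive overcost are kept, each contributing the bounds literals on its start variable described above (the generalization of (8) is omitted). The types and names are ours, and the overcost term is again the linear-penalty placeholder used in the earlier sketches.

```cpp
#include <algorithm>
#include <vector>

struct Task { int id, est, lct, p, h; };

// A bounds literal on a start variable, e.g. [[S_i >= 3]] or [[S_i <= 7]].
struct Literal { int task; char sense; int bound; };   // '>' means >=, '<' means <=

int minimumIntersection(const Task& t, int l, int u) {
    int ls = t.h * std::max(0, std::min(u, t.est + t.p) - std::max(l, t.est));
    int rs = t.h * std::max(0, std::min(u, t.lct) - std::max(l, t.lct - t.p));
    return std::min(ls, rs);
}

// Placeholder overcost of [l, u) for a linear penalty (see earlier sketches).
int overcost(const std::vector<Task>& I, int C, int l, int u) {
    int e = 0;
    for (const Task& t : I) e += minimumIntersection(t, l, u);
    return std::max(0, e - C * (u - l));
}

// Collect the naive explanation conditions along the path P (sorted time points):
// for every interval with positive overcost, every task with a positive minimum
// intersection contributes [[S_i >= est_i]] and [[S_i <= lst_i]]. The literal
// [[Z <= zUB]] on the penalty variable must be added by the caller.
std::vector<Literal> explainConditions(const std::vector<Task>& I, int C,
                                       const std::vector<int>& P) {
    std::vector<Literal> conds;
    for (std::size_t k = 0; k + 1 < P.size(); ++k) {
        if (overcost(I, C, P[k], P[k + 1]) <= 0) continue;
        for (const Task& t : I)
            if (minimumIntersection(t, P[k], P[k + 1]) > 0) {
                conds.push_back({t.id, '>', t.est});          // [[S_i >= est_i]]
                conds.push_back({t.id, '<', t.lct - t.p});    // [[S_i <= lst_i]]
            }
    }
    // Duplicate literals on the same variable can be merged by keeping the
    // tightest bound, as discussed above.
    return conds;
}
```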
7 Experiments

To ascertain the practical relevance of our SOFTCUMULATIVE checker and filtering algorithms, we used an adaptation of the Resource-Constrained Project Scheduling Problem (RCPSP), a problem often used in the literature to benchmark the CUMULATIVE constraint. In the RCPSP, the goal is to schedule n tasks on m machines of various capacities. Each task has a fixed processing time and is executed simultaneously on all machines. The amount of resource a task consumes varies by machine and can be zero. There are precedence constraints between tasks and the goal is to minimize the makespan. With SOFTCUMULATIVE, we instead want to minimize the cost of overloading the resources. It is possible to adapt existing RCPSP instances (which do not allow overloads) by finding the makespan of the optimal solution and then either reducing the capacity of the resources or decreasing the makespan. With this approach, the makespan becomes a parameter of the instance and the goal is now to minimize the sum of the overcosts over all resources. We tried both approaches, but, with our benchmarks, reducing the capacity of the resources gave us more interesting instances. Reducing the makespan often produced unsatisfiable instances due to the precedence constraints.

We used an adapted version of the KSD15_D benchmark (Koné et al. 2011). These instances are highly cumulative (more than one task can often execute at a time). We adapted the benchmark by decreasing the capacity of each resource by four units and fixing the makespan to the optimal value of the original instance. This gave us interesting instances, several of which can be solved in a reasonable time. These adapted instances are available in the supplementary material.

We implemented our algorithms in C++ and used the lazy clause generation solver Chuffed (Chu 2011). We used the modelling language MiniZinc (Nethercote et al. 2007) to implement our models. Our experiments were run on an Intel Xeon 4110 CPU with a speed of 2.10 GHz. We ran all our experiments with a timeout of 20 minutes.

In our experiments, we compared our algorithms to the decomposition of SOFTCUMULATIVE (see Equation 5). We tested two configurations of our algorithms: the checker alone and the checker combined with the filtering algorithm. For both algorithms, we set T = T_s. In our experiments, the checker and filtering combination always outperformed the checker alone, so we only report the former to save space. The interested reader can find the raw results in the annex, including the checker-alone configuration.

De Clercq et al. (2010) kindly gave us access to their Java code that uses the solver Choco 2. We tried to solve our instances with their constraint. However, they only implemented the version of the constraint that computes the sum of the maximum overloads in each interval. To model our problem with this version, we need one interval per time point. Hence, the complexity depends on the makespan. This is not a good match for our instances with hundreds of time points. Even for easy instances, we were not able to find the optimal solution within a few hours with this approach. In addition, their constraint does not support a quadratic cost function, which is the most interesting application of our SOFTCUMULATIVE.
Furthermore, naive explanations must be used with their algorithms since no method for generating explanations was proposed for them. Conversely, our constraint cannot be used to solve their instances since we do not support a maximum penalty function.

Linear Cost Function

Figure 4 (left) compares the time taken by the decomposition and by the filtering algorithm when a linear cost function f(x) = x is used. Almost all instances are solved much more quickly with the SOFTCUMULATIVE constraint and its filtering algorithm than with the decomposition. Furthermore, there are many instances for which the decomposition times out at 20 minutes (the points at 1200 seconds) while the filtering algorithm is able to solve them to optimality. The decomposition is better on only 1% of the instances. Both configurations failed to solve 21.5% of the instances to optimality, and the filtering algorithm is better on 77.5% of the instances. Furthermore, for nearly 68% of the instances, the filtering algorithm is more than 10 times faster.

Figure 4: Time in seconds to solve the instances of the adjusted KSD15_D benchmark to optimality with a linear (left) and quadratic (right) cost function for the decomposition and the filtering algorithm. A time of 1200 s means that the solver times out without proving the optimal solution.

Quadratic Cost Function

Figure 4 (right) compares the time taken to solve the instances of the KSD15_D benchmark with the quadratic cost function. The results are similar to the linear case. Neither algorithm proved optimality for 24.6% of the instances. The decomposition is better for 2.7% of the instances while the filtering algorithm is better on 72.7% of the instances. The filtering algorithm is 10 times faster for 60% of the instances. Hence, the filtering algorithm is the method of choice with both linear and quadratic cost functions.

8 Conclusion

We proposed a checker and a filtering algorithm based on the energetic reasoning for the SOFTCUMULATIVE constraint. Unlike previous work, both algorithms support a quadratic cost function in addition to a linear one. They are parameterizable in the sense that their strength and complexity vary depending on the number of time points passed as a parameter. With the recommended set of time points T_s, the checker has a complexity of O(n²) and the filtering algorithm has a complexity of O(k · n² + r), where k is the number of tasks that pass the free energy check and r is the number of values pruned from the domains of the variables during the execution of the algorithm. We showed how to generate explanations for the algorithms so that they can be used with lazy clause generation solvers, and presented evidence that, in practice, our algorithms vastly outperform the decomposition.

References

Aggoun, A.; and Beldiceanu, N. 1993. Extending CHIP in order to solve complex scheduling and placement problems. Mathematical and Computer Modelling, 17(7): 57–73.

Baptiste, P.; Le Pape, C.; and Nuijten, W. 2001. Constraint-Based Scheduling. Kluwer Academic Publishers.

Beldiceanu, N.; and Carlsson, M. 2002. A new multi-resource cumulatives constraint with negative heights. In International Conference on Principles and Practice of Constraint Programming, 63–79. Springer.

Chu, G. 2011. Improving combinatorial optimization. Ph.D. thesis, Department of Computing and Information Systems, University of Melbourne.
De Clercq, A.; Petit, T.; Beldiceanu, N.; and Jussien, N. 2010. A soft constraint for cumulative problems with overloads of resource. In Doctoral Programme of the 16th International Conference on Principles and Practice of Constraint Programming (CP'10), 49–54.

Koné, O.; Artigues, C.; Lopez, P.; and Mongeau, M. 2011. Event-based MILP models for resource-constrained project scheduling problems. Computers & Operations Research, 38(1): 3–13.

Lopez, P.; and Esquirol, P. 1996. Consistency enforcing in scheduling: A general formulation based on energetic reasoning. In 5th International Workshop on Project Management and Scheduling (PMS'96).

Mercier, L.; and Van Hentenryck, P. 2008. Edge finding for cumulative scheduling. INFORMS Journal on Computing, 20(1): 143–153.

Nethercote, N.; Stuckey, P. J.; Becket, R.; Brand, S.; Duck, G. J.; and Tack, G. 2007. MiniZinc: Towards a standard CP modelling language. In International Conference on Principles and Practice of Constraint Programming, 529–543. Springer.

Ohrimenko, O.; Stuckey, P. J.; and Codish, M. 2009. Propagation via lazy clause generation. Constraints, 14(3): 357–391.

Ouellet, Y.; and Quimper, C.-G. 2018. A O(n log² n) Checker and O(n² log n) Filtering Algorithm for the Energetic Reasoning. In International Conference on the Integration of Constraint Programming, Artificial Intelligence, and Operations Research, 477–494. Springer.

Perron, L.; and Furnon, V. 2019. OR-Tools. https://developers.google.com/optimization/. Accessed: 2022-03-29.

Tesch, A. 2018. Improving energetic propagations for cumulative scheduling. In International Conference on Principles and Practice of Constraint Programming, 629–645. Springer.