# GL-NeRF: Gauss-Laguerre Quadrature Enables Training-Free NeRF Acceleration

Silong Yong, Yaqi Xie, Simon Stepputtis, Katia Sycara
Carnegie Mellon University
{silongy, yaqix, sstepput, sycara}@andrew.cmu.edu

Volume rendering in neural radiance fields is inherently time-consuming due to the large number of MLP calls on the points sampled per ray. Previous works address this issue by introducing new neural networks or data structures. In this work, we propose GL-NeRF, a new perspective on computing volume rendering with the Gauss-Laguerre quadrature. GL-NeRF significantly reduces the number of MLP calls needed for volume rendering while introducing no additional data structures or neural networks. The simple formulation makes it possible to adopt GL-NeRF in any NeRF model. In this paper, we first justify the use of the Gauss-Laguerre quadrature and then demonstrate this plug-and-play attribute by implementing it in two different NeRF models. We show that, with a minimal drop in performance, GL-NeRF can significantly reduce the number of MLP calls, showing the potential to speed up any NeRF model. Code can be found on the project page: https://silongyong.github.io/GL-NeRF_project_page/.

1 Introduction

Neural Radiance Fields (NeRFs) [27] have shown promising results for synthesizing images from novel views, and plenty of works extend NeRF towards different real-world applications (see the related work for details). The core component of NeRF's success is volume rendering, which approximates an integral by densely sampling points along each ray and evaluating volume density and radiance for them with neural networks. In practice, a dense set of points is processed with expensive operations such as neural network inference for every single pixel, which can be highly redundant. A line of work reduces the time needed for rendering images, aiming to give NeRF real-time rendering ability [9, 44, 24, 29, 8]. Despite their promising results, these works achieve real-time rendering through diverse mechanisms, introducing new networks, new data structures, etc.; consequently, each individual work requires training from scratch with a specific optimization goal. In this work, we propose a novel lightweight method that can be implemented in any existing NeRF-based model relying on volume rendering, without further training. In contrast to existing works, our approach introduces no additional representation or neural network and is training-free: we make minimal modifications to the computation of the volume rendering integral so that it relies on far fewer samples. Our approach arises from revisiting the volume rendering integral; the key observation is that a simple change of variable turns the integral into a purely exponentially weighted integral of color. This specific form has a Gauss quadrature (the Gauss-Laguerre quadrature) that approximates it with the highest possible algebraic precision. Naturally, we propose to use the Gauss-Laguerre quadrature to compute the volume rendering integral directly, which we call GL-NeRF (Gauss-Laguerre NeRF), leading to a much lower computational cost for approximating the integral and therefore lower time and memory usage. Computing the points needed for the integral still requires a dense evaluation of per-point density.
However, the efficiency of this step can be improved using modern techniques like factorized tensors [6].

Figure 1: GL-NeRF method overview. The vanilla volume rendering in NeRF requires uniform sampling in space, which leads to a huge number of computationally heavy MLP calls since each point must be assigned a color value. Our approach, GL-NeRF, significantly reduces the number of points needed for volume rendering and selects points in the most informative area.

Benefiting from the highest-precision guarantee that Gauss quadrature provides, a very small number of fixed points yields results comparable to the heavy and redundant strategy NeRF adopts, leading to a free speedup. To verify the use of the Gauss-Laguerre quadrature, we conduct an empirical study on the landscape of the color function. We also analyze the relationship between our approach and other techniques that aim to reduce the number of sample points for NeRF. We demonstrate the plug-and-play property of our method by directly incorporating it into vanilla NeRF and TensoRF models already trained on the NeRF-Synthetic and LLFF datasets. Furthermore, we showcase the drop in time and memory usage as a direct outcome of the reduced computational cost. GL-NeRF provides a different perspective on computing volume rendering and has the potential to be a direct plug-in for existing NeRF-based products. Specifically, our contributions are three-fold:

- We propose GL-NeRF, a brand-new perspective for computing volume rendering with the Gauss-Laguerre quadrature that introduces no additional components, and we analyze the validity of using the Gauss-Laguerre quadrature for the volume rendering integral as well as the relationship between our approach and existing sample-efficient NeRFs.
- We demonstrate that GL-NeRF can be incorporated into any NeRF model without further training. To the best of our knowledge, GL-NeRF is the first method that can be used without training in any NeRF model, thanks to its simple formulation.
- We showcase that GL-NeRF reduces computational cost, time, and memory usage while preserving rendering quality.

2 Related work

Volume rendering. Volume rendering has been widely used in computer graphics and vision applications [25, 43, 7]. It maps a 3D scene onto 2D images through a weighted integral over the colors of the points along the corresponding rays, with a function of the opacity (volume density) as the weight. In practice, the integral is approximated by a finite sum over sampled points along the ray, as derived in [25]. Implicit scene models like NeRF [27], Plenoxels [8], and 3D Gaussians [18], along with most of their follow-ups, adopt this technique as their rendering pipeline. Since random sampling in space may spend computation on uninformative points (e.g. samples in empty space), plenty of works introduce different techniques for better approximating the components of the volume rendering integral (i.e. volume density and radiance) [40, 44, 29, 23, 21, 2, 36]. PL-NeRF [40] approximates the volume density with a piecewise linear function, reducing the number of points needed in the fine sampling stage proposed by [27]. AutoInt and DIVeR [23, 44] introduce a neural network that approximates the integral of volume density instead of using Monte-Carlo sampling.
DONeRF [29] reduces the number of sampled points needed for computing the integral by introducing a depth-oracle neural network that predicts the surface position of the underlying scene and samples points near the surface, which contributes the most to the visual appearance of the images.

Algorithm 1 Gauss-Laguerre Quadrature for Volume Rendering
Input: ray direction d, ray origin o, step size ∆t, sample number M, Gauss-Laguerre quadrature weight look-up table Lw, point look-up table Lp
1: tmin, tmax = RayIntersectBoundingBox(d, o)
2: if tmin > tmax then return bg_color
3: Initialize t = tmin, transmittance T = 1.0, number of already-sampled points n = 0
4: while t < tmax do
5:   if n == M then break
6:   pos = o + t · d
7:   σ = GetVolumeDensity(pos)
8:   x = −log(T)
9:   x_next = x + ∆t · σ
10:  if x < Lp[n] and x_next ≥ Lp[n] then
11:    t_Laguerre = (Lp[n] − x) / (x_next − x)
12:    pos_sample = o + (t + t_Laguerre · ∆t) · d
13:    ray_color += Lw[n] · GetColor(pos_sample)
14:    n = n + 1
15:  t = t + ∆t
16:  T = T · exp(−σ · ∆t)
17: bg_weight = sum(Lw[n:])
18: ray_color = ray_color + bg_weight · bg_color
19: return ray_color

MCNeRF [13] proposes Monte-Carlo rendering followed by denoising for sample-efficient rendering, but it introduces a denoiser network that requires per-scene training. Different from these previous works, ours uses the Gauss-Laguerre quadrature to directly improve the precision of the volume rendering integral itself; it introduces no additional neural networks or data structures and stays in the simplest form, which makes it adaptable to any existing work that relies on the volume rendering integral.

NeRFs. Neural Radiance Fields (NeRFs) have proved to be a powerful tool for novel view synthesis [27]. A NeRF uses a coordinate-based multi-layer perceptron (MLP) to represent the scene and renders high-fidelity images from different views. Rendering is done by pixel-wise volume rendering [25], with density and color evaluated by the MLP at hundreds of sampled points along each ray. To model high-frequency information in the scene, NeRF uses positional encoding to map the input coordinates onto high-frequency bands. The success of NeRF has triggered an explosive emergence of follow-up works that improve or extend it in different directions: aliasing along image coordinates has been tackled [3], and unbounded scenes [4, 47, 39, 34], dynamic scenes [32, 22, 30], and scenes with semantic information [41, 37, 49, 19] have been well explored, demonstrating the potential of implicit scene representation with NeRF. Nonetheless, NeRF requires plenty of time for training and rendering, blocking its use in real-time settings; the computational bottleneck is the MLP. There are two main branches of work extending NeRF towards real-time rendering. The first branch introduces different data structures for scene representation [46, 9, 14, 33, 8, 6]. The other branch, in which our method falls, improves the sample efficiency of the model [20, 29, 31, 38] to accelerate the NeRF rendering process. While previous works draw their intuition from the underlying physics and thus need different formulations of the sampling strategy and different neural network architectures for predicting the surface position of the underlying scene, we derive our method from a mathematical observation while maintaining the overall pipeline. Benefiting from this, our work can be seamlessly incorporated into any existing NeRF-related work without further training. On the other hand, despite being derived from a mathematical perspective, our method still intuitively satisfies the underlying physical constraints.
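For concreteness, Algorithm 1 above can be transcribed almost line-for-line into Python. The sketch below is our own illustration rather than the released implementation; `get_density` and `get_color` are hypothetical stand-ins for a model's density and color queries, and `Lp`, `Lw` are the Gauss-Laguerre look-up tables.

```python
import numpy as np

def render_ray_gl(o, d, t_min, t_max, dt, Lp, Lw, get_density, get_color,
                  bg_color=np.zeros(3)):
    """Sketch of Algorithm 1: march along the ray and, whenever the
    accumulated optical depth crosses the next Gauss-Laguerre node Lp[n],
    query the color there and accumulate it with weight Lw[n]."""
    t, T, n = t_min, 1.0, 0
    ray_color = np.zeros(3)
    while t < t_max and n < len(Lp):
        sigma = get_density(o + t * d)
        x = -np.log(T)                     # optical depth accumulated so far
        x_next = x + dt * sigma
        if x < Lp[n] <= x_next:            # node Lp[n] falls inside this step
            frac = (Lp[n] - x) / (x_next - x)
            ray_color += Lw[n] * get_color(o + (t + frac * dt) * d)
            n += 1
        t += dt
        T *= np.exp(-sigma * dt)
    ray_color += Lw[n:].sum() * bg_color   # leftover weight goes to background
    return ray_color
```

Note that the expensive color query runs at most `len(Lp)` times per ray (e.g. 4-8), while the density query runs at every marching step, which is why cheap density estimation matters (see Sec. 5.2).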
3 Preliminaries

3.1 NeRF and volume rendering

NeRF [27] is a powerful implicit 3D scene model for novel view synthesis. At the core of its rendering ability is volume rendering. NeRF uses a coordinate-based MLP to encode the scene, assigning a volume density (opacity) and radiance (color) to each spatial point. When synthesizing a new view, it casts a ray r(t) = o + td through the pixel to be rendered, samples points along the ray, and computes volume density and radiance for these points. These values are then aggregated by Eq. 1 to give the color of the pixel:

$$\hat{C}(\mathbf{r}) = \sum_{i=1}^{N} w_i \, c(\mathbf{r}(t_i)), \tag{1}$$

where

$$w_i = T_i \left(1 - \exp\left(-\sigma(\mathbf{r}(t_i))\,\delta_i\right)\right), \tag{2}$$

$$T_i = \exp\Big(-\sum_{j=0}^{i-1} \sigma(\mathbf{r}(t_j))\,\delta_j\Big), \tag{3}$$

t_i is the i-th sampled position along the ray, and δ_i = t_{i+1} − t_i is the distance between two adjacent samples. NeRF uses an MLP to represent the volume density σ and color c. The loss function for training NeRF is simply the squared error between rendered and ground-truth pixel colors over a batch of rays R:

$$\mathcal{L} = \sum_{\mathbf{r} \in R} \left\| \hat{C}(\mathbf{r}) - C(\mathbf{r}) \right\|_2^2. \tag{4}$$

Variants of NeRF like TensoRF [6] use different representations for volume density and color, but the volume rendering process remains the same.

3.2 Gauss quadrature

An n-point Gauss quadrature [10] is a numerical integration method guaranteed to yield exact results for integrals of polynomials of degree 2n − 1 or less, which is the highest possible precision for approximating an integral with an n-point quadrature. Intuitively, consider approximating an integral by a quadrature as in Eq. 5:

$$\int_I w(x)\, p(x)\, dx \approx \sum_{i=1}^{n} w_i\, p(x_i), \tag{5}$$

where p(x) is a polynomial of degree 2n − 1, w(x) is a weight function, and I is the integration interval. We first recall the definition of orthogonality of two polynomials p_m(x) and p_n(x):

$$\int_{-1}^{1} p_m(x)\, p_n(x)\, dx = 0, \tag{6}$$

where p_m(x) has degree m, p_n(x) has degree n, and m ≠ n. Using long division, we can write

$$p(x) = q(x)\, L_n(x) + r(x), \tag{7}$$

where L_n(x) is a polynomial of degree n orthogonal to all polynomials of degree less than n (i.e. the degree-n Legendre polynomial), and q(x) and r(x) are both polynomials of degree less than n. Then

$$\int_{-1}^{1} p(x)\, dx = \int_{-1}^{1} q(x)\, L_n(x)\, dx + \int_{-1}^{1} r(x)\, dx. \tag{8}$$

Since L_n(x) is orthogonal to all polynomials of degree less than n, the first term on the right-hand side of Eq. 8 equals zero. Since it does not contribute to the integral, we may also neglect it when computing the quadrature; we therefore choose the x_i satisfying L_n(x_i) = 0 [16]. With this intuition in mind, carefully choosing the weights w_i lets us compute Eq. 5 exactly, because we have n points with which to evaluate the second term on the right-hand side of Eq. 8, an integral of a polynomial of degree less than n. In general, given a function f(x), Gauss quadrature computes its integral on [−1, 1] as

$$\int_{-1}^{1} f(x)\, dx \approx \sum_{i=1}^{n} w_i\, f(x_i), \tag{9}$$

where each x_i, i = 1, 2, ..., n, is a root of the orthogonal polynomials on [−1, 1]. This quadrature is called the Gauss-Legendre quadrature, since the orthogonal polynomials on [−1, 1] with weight function g(x) = 1 are the Legendre polynomials. The degree-n Legendre polynomial takes the form [35, 15, 17]

$$P_n(x) = \frac{1}{2^n n!} \frac{d^n}{dx^n} (x^2 - 1)^n, \tag{10}$$

and w_i is computed by Eq. 11, as shown in [1]:

$$w_i = \frac{2}{(1 - x_i^2)\,[P_n'(x_i)]^2}. \tag{11}$$
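As a quick numerical sanity check of this exactness property (our sketch, using NumPy's built-in Gauss-Legendre routine), a 3-point rule integrates any polynomial of degree at most 2 · 3 − 1 = 5 exactly:

```python
import numpy as np

# 3-point Gauss-Legendre rule: exact for polynomials up to degree 5.
x, w = np.polynomial.legendre.leggauss(3)
p = np.polynomial.Polynomial([1.0, -2.0, 0.0, 3.0, 0.0, 4.0])  # degree 5

quad = np.sum(w * p(x))                    # 3-point quadrature, Eq. 9
exact = p.integ()(1.0) - p.integ()(-1.0)   # analytic integral over [-1, 1]
print(quad, exact)                         # both 2.0, up to floating-point error
```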
Figure 2: Verification of using the Gauss-Laguerre quadrature for volume rendering. We plot the red channel of the color function w.r.t. the ray it corresponds to. The color function remains zero over most of the interval (bottom). We use a 7th-degree polynomial to approximate the non-zero region (top). As can be seen, the color function itself is similar to a polynomial, validating the use of our approach.

The Gauss-Laguerre quadrature is an extension of Gauss quadrature for approximating integrals of the form

$$\int_0^{\infty} e^{-x} f(x)\, dx \approx \sum_{i=1}^{n} w_i\, f(x_i). \tag{12}$$

In this case, the weight function is g(x) = e^{−x} and the integration interval is [0, ∞). Each x_i is a root of the Laguerre polynomials

$$L_n(x) = \frac{e^x}{n!} \frac{d^n}{dx^n} \big( e^{-x} x^n \big), \tag{13}$$

a class of polynomials orthogonal over the interval [0, ∞) with respect to the weight function g(x) = e^{−x}. The quadrature weights are computed as

$$w_i = \frac{x_i}{(n+1)^2\,[L_{n+1}(x_i)]^2}. \tag{14}$$

While the computation of the x_i and w_i is involved, in practice we can use a look-up table that stores the x_i and w_i for a given n.

4 Method

We developed our algorithm based on a simple observation about the volume rendering integral. Eq. 1 is an approximation to the integral

$$C(\mathbf{r}) = \int_{t_n}^{t_f} T(t)\, \sigma(\mathbf{r}(t))\, c(\mathbf{r}(t), \mathbf{d})\, dt, \tag{15}$$

$$T(t) = \exp\Big(-\int_{t_n}^{t} \sigma(\mathbf{r}(s))\, ds\Big). \tag{16}$$

4.1 Volume rendering and Gauss-Laguerre quadrature

Figure 3: Point selection strategy in GL-NeRF. We choose points along the ray such that the integral of the volume density from zero up to each point equals a root of the Laguerre polynomial; the selected points are then used for querying the color. The figure shows an example of choosing 5 points using a 5th-degree Laguerre polynomial, where the numbers on the plot indicate the value of the integral from zero to the right boundary of each region.

Define

$$x(t) = \int_{t_n}^{t} \sigma(\mathbf{r}(s))\, ds, \tag{17}$$

$$\frac{dx}{dt} = \sigma(\mathbf{r}(t)). \tag{18}$$

Since σ(r(t)) ≥ 0, x(t) is a monotonically non-decreasing function of t; therefore, x has a unique correspondence with t on increasing intervals. With this observation, we can apply a change of variables to Eq. 15:

$$C(\mathbf{r}) = \int_{t_n}^{t_f} T(t)\, \sigma(\mathbf{r}(t))\, c(\mathbf{r}(t), \mathbf{d})\, dt = \int_{x(t_n)}^{x(t_f)} e^{-x}\, c(\mathbf{r}(x), \mathbf{d})\, dx. \tag{19}$$

As can be seen from Eq. 19, the volume rendering integral is a weighted integral of c(r(x), d) with weight function g(x) = e^{−x}. We can extend the integration interval from [x(t_n), x(t_f)] to [0, ∞), since the integrals over [0, x(t_n)) and (x(t_f), ∞) are zero. Thus, we have

$$C(\mathbf{r}) = \int_0^{\infty} e^{-x}\, c(\mathbf{r}(t(x)), \mathbf{d})\, dx, \tag{20}$$

a purely exponentially weighted integral of the color function, which is of exactly the form required by the Gauss-Laguerre quadrature.
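To make the change of variables concrete, the following sketch (our illustration, not part of the paper's code) compares a dense Riemann-sum evaluation of Eq. 15 against an 8-point Gauss-Laguerre evaluation of Eq. 20 on a synthetic one-dimensional density/color profile:

```python
import numpy as np

# Synthetic 1-D "scene": a sharp density bump (a surface) at t = 3 and a
# smoothly varying color along the ray.
sigma = lambda t: 30.0 * np.exp(-(t - 3.0) ** 2 / 0.1)
color = lambda t: 0.5 + 0.4 * np.sin(t)

# Dense reference: Riemann sum of Eq. 15 with 10,000 samples.
t = np.linspace(0.0, 6.0, 10000)
dt = t[1] - t[0]
x = np.cumsum(sigma(t)) * dt           # optical depth x(t), Eq. 17
T = np.exp(-x)                         # transmittance T(t), Eq. 16
reference = np.sum(T * sigma(t) * color(t)) * dt

# GL-NeRF: invert x(t) at the 8 Gauss-Laguerre nodes and apply Eq. 12.
nodes, weights = np.polynomial.laguerre.laggauss(8)
t_selected = np.interp(nodes, x, t)    # t_i such that x(t_i) = x_i
gl = np.sum(weights * color(t_selected))
print(reference, gl)                   # the two values agree closely
```

Only 8 color evaluations are needed on the quadrature side, versus 10,000 for the dense sum; the density is still evaluated densely, which is exactly the step the paper proposes to keep cheap.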
4.1.1 Gauss-Laguerre quadrature for volume rendering

As discussed in Sec. 3, the Gauss-Laguerre quadrature guarantees the highest algebraic precision when integrating polynomials. To apply it to the volume rendering integral, a natural question arises: is the color function a polynomial, or can it be approximated by a polynomial with a satisfactory error rate? To answer this question, we first recall a fundamental theorem and then empirically approximate the color function with polynomials.

Theorem 4.1 (Stone-Weierstrass theorem). Suppose f is a continuous real-valued function defined on the real interval [a, b]. For every ϵ > 0, there exists a polynomial p such that for all x in [a, b], we have |f(x) − p(x)| < ϵ.

Since the pixel color is contributed by points whose density exceeds a threshold (i.e. regions near the surface), we can ignore points in empty space and analyze only the remaining part of the color function. Fig. 2 plots a representative example of what the color function looks like: it has one major region with values greater than zero, while the rest remains zero. When approximating the non-zero region with a 7th-degree polynomial, we obtain a relative error below 6.5%. While this relative error is not especially low, the degree can be increased for a better approximation, since the function is continuous by nature. Moreover, this particular landscape is unusually fluctuating; in most cases, the error rate is below 1%. This suggests that Theorem 4.1 holds in our case, and thus the Gauss-Laguerre quadrature can be used for computing the volume rendering integral.

4.1.2 Point selection in GL-NeRF

Unlike NeRF's sampling strategy, the Gauss-Laguerre quadrature enables a deterministic point selection strategy for the color samples. Recall that Eq. 17 defines the integration variable for Eq. 20. This means that to approximate Eq. 20 with the Gauss-Laguerre quadrature, we must choose points x_i that are the roots of the degree-n Laguerre polynomial. Since every x_i has a corresponding t_i through Eq. 17, we can choose t_i from the given value of x_i, as depicted in Fig. 3: specifically, we select the t_i at which the integral in Eq. 17 equals a root of the degree-n Laguerre polynomial. Fig. 3 gives an example with n = 5, where the numbers in the five colored regions indicate the integral of the volume density from zero to the right boundary of each region. Pseudocode for GL-NeRF rendering is shown in Algorithm 1.

Figure 4: Comparison between GL-NeRF and vanilla NeRF in terms of render time and quantitative metrics. Each point in the figure represents an individual scene. With the drop in computational cost that GL-NeRF provides, the average time needed for rendering one image is 1.2 to 2 times faster than vanilla NeRF, while the overall performance remains almost the same despite some minor decreases.

4.1.3 Intuitive understanding of the points selected by the Gauss-Laguerre quadrature

| x_i | w_i |
| --- | --- |
| 0.17 | 3.69 × 10⁻¹ |
| 0.90 | 4.19 × 10⁻¹ |
| 2.25 | 1.76 × 10⁻¹ |
| 4.27 | 3.33 × 10⁻² |
| 7.05 | 2.79 × 10⁻³ |
| 10.76 | 9.08 × 10⁻⁵ |
| 15.74 | 8.49 × 10⁻⁷ |
| 22.86 | 1.05 × 10⁻⁹ |

Table 1: Gauss-Laguerre quadrature look-up table for n = 8.
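Tab. 1 never has to be derived by hand: standard numerical libraries generate the nodes and weights once, offline. For instance, the following NumPy snippet (our sketch) reproduces the table:

```python
import numpy as np

# Nodes x_i and weights w_i of the 8-point Gauss-Laguerre rule (Tab. 1).
x, w = np.polynomial.laguerre.laggauss(8)
print(np.round(x, 2))  # [ 0.17  0.9   2.25  4.27  7.05 10.76 15.74 22.86]
print(w)               # 3.69e-01, 4.19e-01, 1.76e-01, ..., 1.05e-09
```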
Since the points near the surface contribute the most to the final pixel color, as discussed in [20, 29, 31], the optimal point selection strategy should choose points near the surface. The volume density, in turn, increases sharply near the surface and remains close to zero elsewhere; its integral, Eq. 17, therefore also rises significantly near the surface and remains almost unchanged throughout the rest of the space. Consequently, most of the points chosen by GL-NeRF lie around the surface of the underlying scene. Consider the case n = 8: we want to choose points t_i, i = 1, 2, ..., 8, such that x(t_i) in Eq. 17 equals the values x_i given in the look-up table (Tab. 1). Notice that the first few x_i (say, the first three) are small, so they are easily reached by the integral of volume density near the surface, and they carry relatively large weights; evaluating the colors of these points with a neural network and summing them with the weights w_i following Eq. 12 contributes most of the pixel color. Conversely, even though the last few x_i are quite large and may never be reached by Eq. 17 along the ray, their corresponding weights are so small that they barely affect the final pixel color. Hence, the points selected by GL-NeRF also correspond to points near the surface, just as in previous works [20, 29, 31] that design dedicated neural networks to estimate the surface position, only without any additional neural network. Therefore, thanks to this nice property of the Gauss quadrature, we can ideally select the optimal points for computing the volume rendering integral, provided the volume density estimate is an oracle.

Figure 5: Qualitative results on the LLFF (top) and NeRF-Synthetic (bottom) datasets. The comparisons show that the drop in quantitative metrics has minimal effect on visual quality.

| Dataset | Method | Avg. MLP calls | PSNR | SSIM | LPIPS |
| --- | --- | --- | --- | --- | --- |
| LLFF | TensoRF | 118.51 | 26.51 | 0.832 | 0.135 |
| LLFF | ours | 4 | 25.63 | 0.797 | 0.146 |
| NeRF-Synthetic | TensoRF | 31.08 | 32.39 | 0.957 | 0.032 |
| NeRF-Synthetic | ours | 4 | 30.99 | 0.945 | 0.048 |

Table 2: Quantitative comparison. Our method has a minimal performance drop while significantly reducing the number of color MLP calls.

5 Experiments

Datasets and evaluation metrics. We evaluate our method on the standard datasets NeRF-Synthetic and Real Forward-Facing (LLFF) [26], as in [27], with three different models: vanilla NeRF [27], TensoRF [6], and Instant-NGP [28]. Since our method is training-free, we conduct render-only experiments with both the vanilla volume rendering method and our method. For vanilla NeRF, we plot the standard render-quality metrics PSNR, SSIM [42], and LPIPS [48] against the average time needed to render one image for each scene; for TensoRF and Instant-NGP, we report the metrics together with the average number of color MLP calls. For vanilla NeRF, our method uses 32 points, while the network was trained with more than 100 points. For TensoRF and Instant-NGP, results are produced with 4 MLP calls unless otherwise mentioned. More details can be found in Sec. A.1.

5.1 Comparison with baselines

Figure 6: Effect of the sample number. The first five columns correspond to the numbers of sampled points listed on top. The sixth column shows the result of the original sampling strategy adopted in TensoRF (Ori). The last column is the ground-truth visualization of the details in the scene. Our method achieves comparable results using only 4-8 points, while the original strategy requires more than 100 points. The blurriness in the first two columns stems from the inherent inaccuracy of piecewise-constant density estimation.

| Dataset | Method | PSNR | SSIM | LPIPS |
| --- | --- | --- | --- | --- |
| LLFF | Vanilla | 27.62 | 0.88 | 0.073 |
| LLFF | ours | 27.21 | 0.87 | 0.087 |
| NeRF-Synthetic | Vanilla | 30.63 | 0.95 | 0.037 |
| NeRF-Synthetic | ours | 29.18 | 0.93 | 0.056 |

Table 3: Quantitative comparison when training with GL-NeRF. "Vanilla" refers to vanilla NeRF and its sampling strategy, while "ours" replaces the fine sampling stage of vanilla NeRF with our sampling strategy, i.e. GL-NeRF. The vanilla NeRF result is produced by rendering with more than 100 points, while GL-NeRF uses only 32 points.
| Point number | PSNR | SSIM | LPIPS |
| --- | --- | --- | --- |
| 1 | 23.49 | 0.752 | 0.166 |
| 2 | 24.90 | 0.782 | 0.142 |
| 3 | 25.38 | 0.791 | 0.145 |
| 4 | 25.63 | 0.797 | 0.146 |
| 8 | 26.10 | 0.812 | 0.142 |
| Ori | 26.51 | 0.832 | 0.135 |

Table 4: Ablation study on the number of sampled points (TensoRF, LLFF). The more points we use, the better the performance; with 8 points, our method is comparable to TensoRF's original sampling strategy.

We showcase that our method can render novel views from a pretrained NeRF without further training. We plot the quantitative metrics of GL-NeRF and the original NeRF for an intuitive comparison in Fig. 4: our method achieves results comparable to the original NeRF while requiring less computation, leading to 1.2 to 2 times faster rendering. We also observe a drop in memory usage due to the reduced number of MLP calls. We further implement our method on TensoRF [6]. As can be seen from Tab. 2, our method significantly reduces the number of MLP calls needed for volume rendering while the rendering quality drops only slightly, with little visible effect on image quality; qualitative comparisons can be found in Fig. 5. Beyond TensoRF, we implement our method on top of Instant-NGP [28] to showcase its plug-and-play attribute; as shown in Tab. 5, our method performs similarly to Instant-NGP on the Blender dataset.

5.2 Discussion on acceleration

The reason the speed-up on vanilla NeRF does not reach real-time performance is that vanilla NeRF uses another heavy neural network for estimating the volume density. Our method only needs cheap density estimation, which recent advances such as factorized tensors [6] readily provide; reducing the number of color MLP calls can then lead to real-time performance, as shown by previous work [13]. We therefore follow MCNeRF [13] and develop a real-time renderer based on WebGL, and we train a small variant of TensoRF with 8 channels for each density and color component and a hidden size of 32 for the color MLP. The result on the Lego scene from the Blender dataset is shown in Tab. 6: thanks to the reduced number of color MLP calls, GL-NeRF provides almost real-time performance in WebGL with quality similar to TensoRF, running on an AMD Ryzen 9 5900HS CPU.

| PSNR | Avg. | Chair | Drums | Ficus | Hotdog | Lego | Mat. | Mic | Ship |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Instant-NGP | 32.05 | 34.13 | 25.61 | 31.91 | 36.32 | 34.72 | 29.09 | 34.92 | 29.73 |
| Ours | 30.35 | 33.08 | 25.07 | 30.13 | 34.78 | 33.05 | 26.54 | 33.02 | 27.15 |

Table 5: Per-scene PSNR on the Blender dataset for Instant-NGP and ours. We demonstrate that GL-NeRF can be plugged into ANY NeRF model.

| Method | PSNR | SSIM | LPIPS | FPS |
| --- | --- | --- | --- | --- |
| TensoRF | 33.28 | 0.97 | 0.016 | 5.84 |
| ours | 33.09 | 0.97 | 0.016 | 22.34 |

Table 6: Comparison between our method and TensoRF on the Lego scene using the WebGL-based renderer, measured on an AMD Ryzen 9 5900HS CPU. GL-NeRF provides almost real-time rendering while retaining quality similar to TensoRF.

5.3 Ablation studies

We further study the effect of the number of sampled points per ray, conducting experiments with the TensoRF model on the LLFF dataset. We find that 8 points per ray already gives results comparable to the original sampling strategy, which uses more than 100 points; the quantitative comparison is in Tab. 4. Qualitatively, Fig. 6 shows that fewer points lead to blurrier results. Since the points selected by GL-NeRF intuitively correspond to where the surface is, we argue that the blurriness comes from the inherent inaccuracy of piecewise-constant density estimation.
5.4 Discussion on using GL-NeRF for training

While we mainly showcase GL-NeRF as a general alternative to the sampling strategy for volume rendering at test time, it can also be used for training. We demonstrate this by replacing the fine sampling stage in vanilla NeRF with GL-NeRF and show the results in Tab. 3 and Tab. 9. GL-NeRF produces results on par with the vanilla sampling strategy while using far fewer points: 32 for GL-NeRF versus more than 100 for vanilla NeRF.

6 Conclusion

In this paper, we propose GL-NeRF, a novel approach for calculating the volume rendering integral. We show that, with a simple change of variable, the Gauss-Laguerre quadrature can be used to compute the volume rendering integral. Thanks to the highest algebraic precision guaranteed by the Gauss-Laguerre quadrature, GL-NeRF significantly reduces the number of MLP calls needed for the volume rendering integral. We justify the use of the Gauss-Laguerre quadrature theoretically and empirically and showcase the plug-and-play attribute of GL-NeRF in two different NeRF models. Experiments show the potential of GL-NeRF for accelerating any existing NeRF model. We also demonstrate that GL-NeRF can be used for training vanilla NeRF, providing a potential new direction for neural rendering research.

Limitations. While GL-NeRF shows promising results in reducing the number of MLP calls, it still affects rendering quality despite the theoretical guarantee of the highest precision. How to improve the performance so that it matches the theoretical results is an interesting direction for future work.

Acknowledgement

The authors would like to thank the authors of MCNeRF for their useful tips on developing the WebGL renderer. This work has been funded in part by the Army Research Laboratory (ARL) award W911NF-23-2-0007, DARPA award FA8750-23-2-1015, and ONR award N00014-23-1-2840.

References

[1] Milton Abramowitz, Irene A. Stegun, and Robert H. Romer. Handbook of mathematical functions with formulas, graphs, and mathematical tables, 1988.
[2] Relja Arandjelović and Andrew Zisserman. NeRF in detail: Learning to sample for view synthesis. arXiv preprint arXiv:2106.05264, 2021.
[3] Jonathan T. Barron, Ben Mildenhall, Matthew Tancik, Peter Hedman, Ricardo Martin-Brualla, and Pratul P. Srinivasan. Mip-NeRF: A multiscale representation for anti-aliasing neural radiance fields. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 5855-5864, 2021.
[4] Jonathan T. Barron, Ben Mildenhall, Dor Verbin, Pratul P. Srinivasan, and Peter Hedman. Mip-NeRF 360: Unbounded anti-aliased neural radiance fields. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 5470-5479, 2022.
[5] Serge Bernstein. Démonstration du théorème de Weierstrass fondée sur le calcul des probabilités. Commun. Soc. Math. Kharkow, 13(1):1-2, 1912.
[6] Anpei Chen, Zexiang Xu, Andreas Geiger, Jingyi Yu, and Hao Su. TensoRF: Tensorial radiance fields. In European Conference on Computer Vision, pages 333-350. Springer, 2022.
[7] Robert A. Drebin, Loren Carpenter, and Pat Hanrahan. Volume rendering. ACM SIGGRAPH Computer Graphics, 22(4):65-74, 1988.
[8] Sara Fridovich-Keil, Alex Yu, Matthew Tancik, Qinhong Chen, Benjamin Recht, and Angjoo Kanazawa. Plenoxels: Radiance fields without neural networks. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 5501-5510, 2022.
[9] Stephan J. Garbin, Marek Kowalski, Matthew Johnson, Jamie Shotton, and Julien Valentin. FastNeRF: High-fidelity neural rendering at 200fps. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 14346-14355, 2021.
[10] Carl Friedrich Gauss. Methodus nova integralium valores per approximationem inveniendi. Dieterich, 1815.
[11] Walter Gautschi. Numerical Analysis. Springer Science & Business Media, 2011.
[12] Yuan-Chen Guo. Instant neural surface reconstruction, 2022. https://github.com/bennyguo/instant-nsr-pl.
[13] Kunal Gupta, Milos Hasan, Zexiang Xu, Fujun Luan, Kalyan Sunkavalli, Xin Sun, Manmohan Chandraker, and Sai Bi. MCNeRF: Monte Carlo rendering and denoising for real-time NeRFs. In SIGGRAPH Asia 2023 Conference Papers, pages 1-11, 2023.
[14] Peter Hedman, Pratul P. Srinivasan, Ben Mildenhall, Jonathan T. Barron, and Paul Debevec. Baking neural radiance fields for real-time view synthesis. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 5875-5884, 2021.
[15] James Ivory. V. On the figure requisite to maintain the equilibrium of a homogeneous fluid mass that revolves upon an axis. Philosophical Transactions of the Royal Society of London, (114):85-150, 1824.
[16] Carl Gustav Jakob Jacobi. Ueber Gauss' neue Methode, die Werthe der Integrale näherungsweise zu finden. 1826.
[17] C. G. J. Jacobi. Ueber eine besondere Gattung algebraischer Functionen, die aus der Entwicklung der Function (1 - 2xz + z^2)^(-1/2) entstehen. 1827.
[18] Bernhard Kerbl, Georgios Kopanas, Thomas Leimkühler, and George Drettakis. 3D Gaussian splatting for real-time radiance field rendering. ACM Transactions on Graphics (TOG), 42(4):1-14, 2023.
[19] Abhijit Kundu, Kyle Genova, Xiaoqi Yin, Alireza Fathi, Caroline Pantofaru, Leonidas J. Guibas, Andrea Tagliasacchi, Frank Dellaert, and Thomas Funkhouser. Panoptic neural fields: A semantic object-aware neural scene representation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 12871-12881, 2022.
[20] Andreas Kurz, Thomas Neff, Zhaoyang Lv, Michael Zollhöfer, and Markus Steinberger. AdaNeRF: Adaptive sampling for real-time rendering of neural radiance fields. In European Conference on Computer Vision, pages 254-270. Springer, 2022.
[21] Liangchen Li and Juyong Zhang. L0-Sampler: An L0 model guided volume sampling for NeRF. arXiv preprint arXiv:2311.07044, 2023.
[22] Zhengqi Li, Simon Niklaus, Noah Snavely, and Oliver Wang. Neural scene flow fields for space-time view synthesis of dynamic scenes. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 6498-6508, 2021.
[23] David B. Lindell, Julien N. P. Martel, and Gordon Wetzstein. AutoInt: Automatic integration for fast neural volume rendering. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 14556-14565, 2021.
[24] Lingjie Liu, Jiatao Gu, Kyaw Zaw Lin, Tat-Seng Chua, and Christian Theobalt. Neural sparse voxel fields. Advances in Neural Information Processing Systems, 33:15651-15663, 2020.
[25] Nelson Max. Optical models for direct volume rendering. IEEE Transactions on Visualization and Computer Graphics, 1(2):99-108, 1995.
[26] Ben Mildenhall, Pratul P. Srinivasan, Rodrigo Ortiz-Cayon, Nima Khademi Kalantari, Ravi Ramamoorthi, Ren Ng, and Abhishek Kar. Local light field fusion: Practical view synthesis with prescriptive sampling guidelines. ACM Transactions on Graphics (TOG), 38(4):1-14, 2019.
[27] Ben Mildenhall, Pratul P. Srinivasan, Matthew Tancik, Jonathan T. Barron, Ravi Ramamoorthi, and Ren Ng. NeRF: Representing scenes as neural radiance fields for view synthesis. Communications of the ACM, 65(1):99-106, 2021.
[28] Thomas Müller, Alex Evans, Christoph Schied, and Alexander Keller. Instant neural graphics primitives with a multiresolution hash encoding. ACM Transactions on Graphics (TOG), 41(4):1-15, 2022.
[29] Thomas Neff, Pascal Stadlbauer, Mathias Parger, Andreas Kurz, Joerg H. Mueller, Chakravarty R. Alla Chaitanya, Anton Kaplanyan, and Markus Steinberger. DONeRF: Towards real-time rendering of compact neural radiance fields using depth oracle networks. In Computer Graphics Forum, volume 40, pages 45-59. Wiley Online Library, 2021.
[30] Julian Ost, Fahim Mannan, Nils Thuerey, Julian Knodt, and Felix Heide. Neural scene graphs for dynamic scenes. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 2856-2865, 2021.
[31] Martin Piala and Ronald Clark. TermiNeRF: Ray termination prediction for efficient neural rendering. In 2021 International Conference on 3D Vision (3DV), pages 1106-1114. IEEE, 2021.
[32] Albert Pumarola, Enric Corona, Gerard Pons-Moll, and Francesc Moreno-Noguer. D-NeRF: Neural radiance fields for dynamic scenes. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 10318-10327, 2021.
[33] Christian Reiser, Songyou Peng, Yiyi Liao, and Andreas Geiger. KiloNeRF: Speeding up neural radiance fields with thousands of tiny MLPs. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 14335-14345, 2021.
[34] Christian Reiser, Rick Szeliski, Dor Verbin, Pratul Srinivasan, Ben Mildenhall, Andreas Geiger, Jon Barron, and Peter Hedman. MERF: Memory-efficient radiance fields for real-time view synthesis in unbounded scenes. ACM Transactions on Graphics (TOG), 42(4):1-12, 2023.
[35] Olinde Rodrigues. De l'attraction des sphéroïdes. Correspondance sur l'École Impériale Polytechnique. PhD thesis, Faculty of Science of the University of Paris, 1816.
[36] Gopal Sharma, Daniel Rebain, Kwang Moo Yi, and Andrea Tagliasacchi. Volumetric rendering with baked quadrature fields. arXiv preprint arXiv:2312.02202, 2023.
[37] Yawar Siddiqui, Lorenzo Porzi, Samuel Rota Bulò, Norman Müller, Matthias Nießner, Angela Dai, and Peter Kontschieder. Panoptic lifting for 3D scene understanding with neural fields. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 9043-9052, 2023.
[38] Vincent Sitzmann, Semon Rezchikov, Bill Freeman, Josh Tenenbaum, and Fredo Durand. Light field networks: Neural scene representations with single-evaluation rendering. Advances in Neural Information Processing Systems, 34:19313-19325, 2021.
[39] Matthew Tancik, Vincent Casser, Xinchen Yan, Sabeek Pradhan, Ben Mildenhall, Pratul P. Srinivasan, Jonathan T. Barron, and Henrik Kretzschmar. Block-NeRF: Scalable large scene neural view synthesis. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 8248-8258, 2022.
[40] Mikaela Angelina Uy, Kiyohiro Nakayama, Guandao Yang, Rahul Krishna Thomas, Leonidas Guibas, and Ke Li. NeRF revisited: Fixing quadrature instability in volume rendering. arXiv preprint arXiv:2310.20685, 2023.
[41] Suhani Vora, Noha Radwan, Klaus Greff, Henning Meyer, Kyle Genova, Mehdi S. M. Sajjadi, Etienne Pot, Andrea Tagliasacchi, and Daniel Duckworth. NeSF: Neural semantic fields for generalizable semantic segmentation of 3D scenes. arXiv preprint arXiv:2111.13260, 2021.
[42] Zhou Wang, Alan C. Bovik, Hamid R. Sheikh, and Eero P. Simoncelli. Image quality assessment: From error visibility to structural similarity. IEEE Transactions on Image Processing, 13(4):600-612, 2004.
[43] Lee Westover. Interactive volume rendering. In Proceedings of the 1989 Chapel Hill Workshop on Volume Visualization, pages 9-16, 1989.
[44] Liwen Wu, Jae Yong Lee, Anand Bhattad, Yu-Xiong Wang, and David Forsyth. DIVeR: Real-time and accurate neural radiance fields with deterministic integration for volume rendering. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 16200-16209, 2022.
[45] Lin Yen-Chen. NeRF-PyTorch. https://github.com/yenchenlin/nerf-pytorch/, 2020.
[46] Alex Yu, Ruilong Li, Matthew Tancik, Hao Li, Ren Ng, and Angjoo Kanazawa. PlenOctrees for real-time rendering of neural radiance fields. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 5752-5761, 2021.
[47] Kai Zhang, Gernot Riegler, Noah Snavely, and Vladlen Koltun. NeRF++: Analyzing and improving neural radiance fields. arXiv preprint arXiv:2010.07492, 2020.
[48] Richard Zhang, Phillip Isola, Alexei A. Efros, Eli Shechtman, and Oliver Wang. The unreasonable effectiveness of deep features as a perceptual metric. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 586-595, 2018.
[49] Shuaifeng Zhi, Tristan Laidlow, Stefan Leutenegger, and Andrew J. Davison. In-place scene labelling and understanding with implicit scene representation. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 15838-15847, 2021.

Appendix

Here we introduce the implementation details, give a brief introduction to the Gauss-Laguerre quadrature, and present our quantitative results on the NeRF-Synthetic and LLFF datasets.

A.1 Implementation details

For vanilla NeRF, our experiments build on NeRF-PyTorch [45], a reproducible PyTorch implementation of the original NeRF [27]. We implement our method by replacing the hierarchical sampling strategy with our point selection method based on the Gauss-Laguerre quadrature. We follow the standard setting of [27] and train a coarse and a fine network for evaluation. We use a learning rate of 5 × 10⁻⁴ that decays exponentially to 5 × 10⁻⁵ over the course of optimization. Each scene is trained for 200k iterations on a single NVIDIA RTX 6000 GPU. We use 128 coarse samples and 32 fine samples to test our method. Since the density estimates of the coarse and fine networks are not aligned, we test our method using only the fine network: we first query it at the coarse samples for a density estimate, and then use GL-NeRF to select 32 points for final rendering, as discussed in Sec. 4.1.2. For the baseline, LLFF scenes are trained and tested with 64 coarse and 64 fine samples, while NeRF-Synthetic scenes use 128 coarse and 64 fine samples. For TensoRF, we directly use the pretrained checkpoints in the VM48 folder provided by the authors; the qualitative results are produced with 4 neural network calls unless otherwise mentioned. For Instant-NGP, we build on the public PyTorch implementation [12] and train it with the default settings; the final GL-NeRF results are likewise produced with 4 neural network calls.
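As a concrete illustration of the two-stage procedure described above (a dense coarse density query followed by GL point selection), the following vectorized sketch selects the sample positions for a batch of rays. It is our own illustration under the stated assumptions, not the released code; shapes and helper names are hypothetical.

```python
import numpy as np

def gl_select_points(t_coarse, sigma_coarse, n_points=32):
    """Select GL-NeRF color-sample locations from coarse density estimates.

    t_coarse:     (R, S) sorted sample distances along each of R rays
    sigma_coarse: (R, S) densities queried at those distances
    Returns (R, n_points) distances t_i with x(t_i) equal to the
    Gauss-Laguerre nodes, plus the shared quadrature weights.
    """
    nodes, weights = np.polynomial.laguerre.laggauss(n_points)
    delta = np.diff(t_coarse, axis=1)                    # (R, S-1)
    x = np.cumsum(sigma_coarse[:, :-1] * delta, axis=1)  # optical depth, Eq. 17
    x = np.concatenate([np.zeros((x.shape[0], 1)), x], axis=1)
    # Invert x(t) at each Laguerre node; nodes beyond x_max clamp to the far
    # end of the ray (in Algorithm 1 their weights go to the background color).
    t_selected = np.stack([np.interp(nodes, xi, ti)
                           for xi, ti in zip(x, t_coarse)], axis=0)
    return t_selected, weights
```

The selected positions are then fed to the color network once each, and the pixel color is the weighted sum of the returned colors with the fixed quadrature weights, following Eq. 12.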
A.2 Gauss-Laguerre quadrature

The Gauss-Laguerre quadrature is an approximation formula for computing integrals over the semi-infinite interval [0, +∞) with the weight function e^{−x}, and reads

$$\int_0^{+\infty} e^{-x} f(x)\, dx \approx \sum_{k=0}^{n} w_k\, f(x_k). \tag{21}$$

Here x_0, x_1, ..., x_n ∈ [0, +∞) are the zeros of the Laguerre polynomial L_{n+1} = L_{n+1}(x) of degree (n + 1),

$$L_{n+1}(x) = \frac{1}{(n+1)!}\, e^x\, \frac{d^{n+1}}{dx^{n+1}} \big( x^{n+1} e^{-x} \big), \quad n = -1, 0, 1, \ldots,$$

and the coefficients are

$$w_k = \frac{1}{x_k\, [L_{n+1}'(x_k)]^2}, \qquad k = 0, 1, 2, \ldots, n. \tag{22}$$

From the Leibniz formula, it is easy to see that L_n(x) is a polynomial of degree n and the coefficient of x^n is (−1)^n / n!. In particular, L_0 = 1, L_1 = 1 − x, L_2 = x²/2 − 2x + 1, and so on. The fundamental property of the Laguerre polynomials is

Theorem A.1. The Laguerre polynomials L_n = L_n(x) are orthogonal with respect to the weight function e^{−x}, that is,

$$\int_0^{\infty} e^{-x} L_n(x) L_m(x)\, dx = \begin{cases} 0, & n \neq m, \\ 1, & n = m. \end{cases} \tag{23}$$

Proof. Assume m ≤ n and set g_k(x) = x^k e^{−x}, so that L_k(x) = \frac{1}{k!} e^x g_k^{(k)}(x). From the Leibniz formula it follows that, for j < k, g_k^{(j)}(x) is the product of x e^{−x} and a polynomial of degree (k − 1), and thereby g_k^{(j)}(0) = 0 = g_k^{(j)}(+∞) for j < k, so all boundary terms in the integrations by parts below vanish. Integrating by parts n times, we deduce that

$$\int_0^{\infty} e^{-x} L_n(x) L_m(x)\, dx = \frac{1}{n!\,m!} \int_0^{\infty} g_n^{(n)}(x)\, e^x g_m^{(m)}(x)\, dx = \frac{(-1)^n}{n!\,m!} \int_0^{\infty} g_n(x)\, \big[ e^x g_m^{(m)}(x) \big]^{(n)}\, dx.$$

By the Leibniz formula,

$$\big[ e^x g_m^{(m)}(x) \big]^{(n)} = e^x \sum_{j=0}^{n} \frac{n!}{(n-j)!\,j!}\, g_m^{(m+j)}(x),$$

and since g_n(x)\, e^x = x^n,

$$\int_0^{\infty} e^{-x} L_n(x) L_m(x)\, dx = \frac{(-1)^n}{n!\,m!} \sum_{j=0}^{n} \frac{n!}{(n-j)!\,j!} \int_0^{\infty} x^n\, g_m^{(m+j)}(x)\, dx.$$

Integrating by parts another (m + j) times and using \int_0^{\infty} x^{n-j} e^{-x}\, dx = (n-j)!, the terms with j > n − m vanish, while for j ≤ n − m

$$\int_0^{\infty} x^n\, g_m^{(m+j)}(x)\, dx = (-1)^{m+j}\, \frac{n!}{(n-m-j)!}\, (n-j)!.$$

Substituting back,

$$\int_0^{\infty} e^{-x} L_n(x) L_m(x)\, dx = \frac{(-1)^{n+m}\, n!}{m!} \sum_{j=0}^{n-m} \frac{(-1)^j}{j!\,(n-m-j)!} = \frac{(-1)^{n+m}\, n!}{m!\,(n-m)!}\, (1-1)^{n-m},$$

which equals 0 for n > m and 1 for n = m. This completes the proof.

The orthogonality of the Laguerre polynomials ensures that the L_n(x) are linearly independent and that L_{n+1}(x) has (n + 1) distinct zeros x_0, x_1, ..., x_n in [0, +∞) [11]. With the zeros fixed, the coefficients w_k are chosen so that the (n + 1) equalities

$$\int_0^{\infty} e^{-x} x^j\, dx = \sum_{k=0}^{n} w_k\, x_k^j, \qquad j = 0, 1, \ldots, n, \tag{24}$$

hold. This is a system of (n + 1) linear algebraic equations for the unknowns w_k, whose coefficient matrix is the Vandermonde matrix [x_k^j] of size (n+1) × (n+1). The latter is invertible since the zeros are distinct, so the w_k are uniquely determined; their specific expressions are given in Eq. 22 [11]. It is remarkable that all the coefficients w_k are non-negative, an important property that ensures the stability and convergence of the Gauss-Laguerre quadrature [11]. Moreover, we have

Theorem A.2. The algebraic precision of the Gauss-Laguerre quadrature (21) is exactly (2n + 1). Namely, "≈" in (21) is "=" whenever f(x) is a polynomial of degree at most (2n + 1), and is not "=" for some polynomial of higher degree.

Proof. Notice that (n + 1)!\, L_{n+1}(x) = (−1)^{n+1} \prod_{k=0}^{n} (x − x_k) is a polynomial of degree (n + 1). Since

$$\int_0^{\infty} e^{-x} L_{n+1}^2(x)\, dx = 1 \neq 0 = \sum_{k=0}^{n} w_k\, L_{n+1}^2(x_k),$$

the precision is less than 2(n + 1). On the other hand, any polynomial p = p(x) of degree at most (2n + 1) can be written as p(x) = q(x) L_{n+1}(x) + r(x) with polynomials q and r of degree at most n, and q(x) can be written as a linear combination of L_0(x), L_1(x), ..., L_n(x). Compute

$$\sum_{k=0}^{n} w_k\, p(x_k) = \sum_{k=0}^{n} w_k \big[ q(x_k) L_{n+1}(x_k) + r(x_k) \big] = \sum_{k=0}^{n} w_k\, r(x_k) = \int_0^{+\infty} e^{-x} r(x)\, dx = \int_0^{+\infty} e^{-x} \big[ q(x) L_{n+1}(x) + r(x) \big]\, dx = \int_0^{+\infty} e^{-x} p(x)\, dx.$$

Here the third equality is due to Eq. 24 (the choice of the w_k) and the fourth is due to the orthogonality of L_{n+1}(x) and q(x) with respect to the weight function. Hence the proof is complete.
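Theorem A.2 is easy to verify numerically. In the sketch below (ours), NumPy's `laggauss(4)` returns the 4-node rule, i.e. n = 3 in the notation of Eq. 21, whose algebraic precision is 2n + 1 = 7:

```python
import numpy as np
from math import factorial

# 4-node Gauss-Laguerre rule: exact for monomials up to degree 7, since
# int_0^inf e^{-x} x^j dx = j!.
x, w = np.polynomial.laguerre.laggauss(4)
print(np.sum(w * x**7), factorial(7))  # 5040.0 vs 5040  -> exact
print(np.sum(w * x**8), factorial(8))  # ~39744 vs 40320 -> degree 8 fails
```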
For further details on the Gauss-Laguerre quadrature and on other Gauss quadratures, the interested reader is referred to the book [11].

A.3 Proof of Theorem 4.1

Here we present a proof of the well-known Stone-Weierstrass theorem, essentially following [5]. We rephrase the theorem here for readability.

Theorem A.3. Suppose f = f(x) : [0, 1] → (−∞, ∞) is continuous. Then for any ϵ > 0 there is a polynomial p = p(x) satisfying

$$\sup_{x \in [0,1]} |f(x) - p(x)| < \epsilon.$$

Namely, polynomials are dense in the Banach space C[0, 1].

| NeRF-Synthetic | Method | Avg. | Chair | Drums | Ficus | Hotdog | Lego | Mat. | Mic | Ship |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| PSNR | TensoRF | 32.39 | 34.68 | 25.58 | 33.37 | 36.81 | 35.51 | 29.54 | 33.59 | 30.12 |
| PSNR | Ours | 30.99 | 33.98 | 25.15 | 30.41 | 35.75 | 33.80 | 27.32 | 32.52 | 30.12 |
| SSIM | TensoRF | 0.96 | 0.98 | 0.93 | 0.98 | 0.98 | 0.98 | 0.94 | 0.98 | 0.88 |
| SSIM | Ours | 0.94 | 0.98 | 0.92 | 0.96 | 0.97 | 0.97 | 0.91 | 0.98 | 0.87 |
| LPIPS | TensoRF | 0.032 | 0.014 | 0.059 | 0.015 | 0.017 | 0.009 | 0.036 | 0.012 | 0.098 |
| LPIPS | Ours | 0.048 | 0.019 | 0.068 | 0.043 | 0.031 | 0.015 | 0.088 | 0.029 | 0.095 |

| LLFF | Method | Avg. | Fern | Flower | Fortress | Horns | Leaves | Orchid | Room | Trex |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| PSNR | TensoRF | 26.51 | 25.31 | 28.22 | 31.14 | 27.64 | 21.34 | 20.02 | 31.80 | 26.61 |
| PSNR | Ours | 25.63 | 24.11 | 27.24 | 30.41 | 26.86 | 20.76 | 18.91 | 30.82 | 25.94 |
| SSIM | TensoRF | 0.83 | 0.82 | 0.86 | 0.89 | 0.86 | 0.75 | 0.66 | 0.95 | 0.89 |
| SSIM | Ours | 0.80 | 0.76 | 0.82 | 0.87 | 0.83 | 0.72 | 0.59 | 0.92 | 0.87 |
| LPIPS | TensoRF | 0.135 | 0.161 | 0.121 | 0.084 | 0.146 | 0.167 | 0.204 | 0.093 | 0.108 |
| LPIPS | Ours | 0.146 | 0.181 | 0.115 | 0.089 | 0.146 | 0.146 | 0.255 | 0.122 | 0.118 |

Table 7: Per-scene quantitative comparison between TensoRF and ours.

Proof. Fix f = f(x) ∈ C[0, 1] and ϵ > 0. Since f is continuous on the bounded closed interval [0, 1], it is bounded and uniformly continuous, meaning that there are positive numbers M > 0 and δ = δ(ϵ) > 0 such that |f(x)| ≤ M for all x, and |f(x) − f(y)| < ϵ for any x, y ∈ [0, 1] satisfying |x − y| < δ. With δ fixed, let n ≥ M/(δ²ϵ) be a positive integer. Consider the n-th order Bernstein polynomial [5]

$$p(x) = \sum_{k=0}^{n} f\Big(\frac{k}{n}\Big)\, C_n^k\, x^k (1-x)^{n-k}, \qquad C_n^k = \frac{n!}{k!\,(n-k)!}. \tag{25}$$

Notice that, for x ∈ [0, 1],

$$C_n^k\, x^k (1-x)^{n-k} \ge 0, \qquad \sum_{k=0}^{n} C_n^k\, x^k (1-x)^{n-k} = (x + 1 - x)^n = 1, \tag{26}$$

$$\sum_{k=0}^{n} k\, C_n^k\, x^k (1-x)^{n-k} = nx \sum_{k=1}^{n} C_{n-1}^{k-1}\, x^{k-1} (1-x)^{(n-1)-(k-1)} = nx\,(x + 1 - x)^{n-1} = nx, \tag{27}$$

$$\sum_{k=0}^{n} k^2\, C_n^k\, x^k (1-x)^{n-k} = \sum_{k=0}^{n} k\, C_n^k\, x^k (1-x)^{n-k} + \sum_{k=2}^{n} k(k-1)\, C_n^k\, x^k (1-x)^{n-k} = nx + n(n-1)\,x^2. \tag{28}$$

| NeRF-Synthetic | Method | Avg. | Chair | Drums | Ficus | Hotdog | Lego | Mat. | Mic | Ship |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| PSNR | Vanilla | 30.63 | 34.32 | 25.80 | 29.54 | 35.49 | 29.53 | 29.04 | 31.78 | 29.52 |
| PSNR | Ours | 28.56 | 30.82 | 24.08 | 26.62 | 32.70 | 28.78 | 27.19 | 31.34 | 27.03 |
| SSIM | Vanilla | 0.95 | 0.98 | 0.93 | 0.97 | 0.97 | 0.95 | 0.95 | 0.97 | 0.87 |
| SSIM | Ours | 0.93 | 0.96 | 0.90 | 0.94 | 0.96 | 0.94 | 0.92 | 0.97 | 0.84 |
| LPIPS | Vanilla | 0.042 | 0.014 | 0.052 | 0.021 | 0.034 | 0.042 | 0.035 | 0.044 | 0.092 |
| LPIPS | Ours | 0.070 | 0.050 | 0.098 | 0.055 | 0.059 | 0.047 | 0.070 | 0.044 | 0.135 |

| LLFF | Method | Avg. | Fern | Flower | Fortress | Horns | Leaves | Orchid | Room | Trex |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| PSNR | Vanilla | 27.62 | 26.82 | 28.37 | 32.59 | 28.83 | 22.38 | 21.20 | 32.87 | 27.93 |
| PSNR | Ours | 26.53 | 26.27 | 28.19 | 31.12 | 26.81 | 22.27 | 20.99 | 30.38 | 26.24 |
| SSIM | Vanilla | 0.88 | 0.86 | 0.89 | 0.93 | 0.90 | 0.82 | 0.74 | 0.96 | 0.92 |
| SSIM | Ours | 0.85 | 0.84 | 0.88 | 0.89 | 0.86 | 0.81 | 0.73 | 0.93 | 0.89 |
| LPIPS | Vanilla | 0.074 | 0.097 | 0.064 | 0.030 | 0.070 | 0.113 | 0.122 | 0.041 | 0.052 |
| LPIPS | Ours | 0.090 | 0.106 | 0.066 | 0.064 | 0.096 | 0.115 | 0.125 | 0.075 | 0.075 |

Table 8: Per-scene quantitative comparison between vanilla NeRF and ours.
Then, for any x ∈ [0, 1], we deduce from (25)-(28) that

$$|f(x) - p(x)| = \Big| \sum_{k=0}^{n} \big[ f(x) - f(k/n) \big]\, C_n^k\, x^k (1-x)^{n-k} \Big| \le \sum_{k=0}^{n} |f(x) - f(k/n)|\, C_n^k\, x^k (1-x)^{n-k}$$

$$= \Big( \sum_{k:\, |x-k/n| < \delta} + \sum_{k:\, |x-k/n| \ge \delta} \Big)\, |f(x) - f(k/n)|\, C_n^k\, x^k (1-x)^{n-k} \le \epsilon + 2M \sum_{k:\, |x-k/n| \ge \delta} C_n^k\, x^k (1-x)^{n-k}.$$

For the second sum, |x − k/n| ≥ δ implies (nx − k)²/(n²δ²) ≥ 1, so

$$\sum_{k:\, |x-k/n| \ge \delta} C_n^k\, x^k (1-x)^{n-k} \le \frac{1}{n^2 \delta^2} \sum_{k=0}^{n} (nx - k)^2\, C_n^k\, x^k (1-x)^{n-k} = \frac{n^2 x^2 - 2nx \cdot nx + nx + n(n-1)x^2}{n^2 \delta^2} = \frac{x(1-x)}{n \delta^2} \le \frac{1}{4 n \delta^2}.$$

Since n ≥ M/(δ²ϵ), the second term is bounded by 2M/(4nδ²) ≤ ϵ/2, and therefore |f(x) − p(x)| ≤ 3ϵ/2 < 2ϵ. As ϵ > 0 is arbitrary, this completes the proof.

A.4 Quantitative results on the NeRF-Synthetic and LLFF datasets

In this part, we present our per-scene quantitative results on the NeRF-Synthetic and LLFF datasets, in Tab. 8 for vanilla NeRF and in Tab. 7 for TensoRF. The full results for training with GL-NeRF are presented in Tab. 9.

| Blender | Method | Avg. | Chair | Drums | Ficus | Hotdog | Lego | Mat. | Mic | Ship |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| PSNR | Vanilla | 30.63 | 34.32 | 25.80 | 29.54 | 35.49 | 29.53 | 29.04 | 31.78 | 29.52 |
| PSNR | Ours | 29.18 | 32.43 | 24.38 | 26.92 | 33.91 | 29.49 | 27.27 | 31.55 | 27.47 |
| SSIM | Vanilla | 0.95 | 0.98 | 0.93 | 0.97 | 0.97 | 0.95 | 0.95 | 0.97 | 0.87 |
| SSIM | Ours | 0.93 | 0.97 | 0.91 | 0.94 | 0.96 | 0.95 | 0.92 | 0.97 | 0.84 |
| LPIPS | Vanilla | 0.037 | 0.014 | 0.052 | 0.021 | 0.034 | 0.042 | 0.035 | 0.044 | 0.092 |
| LPIPS | Ours | 0.056 | 0.029 | 0.087 | 0.050 | 0.052 | 0.038 | 0.065 | 0.046 | 0.122 |

| LLFF | Method | Avg. | Fern | Flower | Fortress | Horns | Leaves | Orchid | Room | Trex |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| PSNR | Vanilla | 27.62 | 26.82 | 28.37 | 32.59 | 28.83 | 22.38 | 21.20 | 32.87 | 27.93 |
| PSNR | Ours | 27.21 | 26.63 | 28.05 | 31.93 | 28.05 | 22.35 | 21.12 | 32.51 | 27.01 |
| SSIM | Vanilla | 0.88 | 0.86 | 0.89 | 0.93 | 0.90 | 0.82 | 0.74 | 0.96 | 0.92 |
| SSIM | Ours | 0.87 | 0.85 | 0.88 | 0.91 | 0.88 | 0.81 | 0.74 | 0.95 | 0.90 |
| LPIPS | Vanilla | 0.073 | 0.097 | 0.064 | 0.030 | 0.070 | 0.113 | 0.122 | 0.041 | 0.052 |
| LPIPS | Ours | 0.087 | 0.121 | 0.075 | 0.043 | 0.089 | 0.117 | 0.131 | 0.053 | 0.069 |

Table 9: Per-scene quantitative results when training with GL-NeRF on the Blender and LLFF datasets.

NeurIPS Paper Checklist

1. Claims
Question: Do the main claims made in the abstract and introduction accurately reflect the paper's contributions and scope?
Answer: [Yes]
Justification: We prove the claim of reduced computation in Sec. 5; the analysis and justification of using the Gauss-Laguerre quadrature are in Sec. 4.1.1 and Sec. 4.1.2.

2. Limitations
Question: Does the paper discuss the limitations of the work performed by the authors?
Answer: [Yes]
Justification: The limitations are discussed in Sec. 6.
3. Theory Assumptions and Proofs
Question: For each theoretical result, does the paper provide the full set of assumptions and a complete (and correct) proof?
Answer: [Yes]
Justification: All theorems in the paper are proven in the Appendix, specifically in Sec. A.2 and Sec. A.3.

4. Experimental Result Reproducibility
Question: Does the paper fully disclose all the information needed to reproduce the main experimental results of the paper to the extent that it affects the main claims and/or conclusions of the paper (regardless of whether the code and data are provided or not)?
Answer: [Yes]
Justification: Details are provided in Sec. 5 and Sec. A.1.
5. Open access to data and code
Question: Does the paper provide open access to the data and code, with sufficient instructions to faithfully reproduce the main experimental results, as described in supplemental material?
Answer: [Yes]
Justification: The code can be found at https://silongyong.github.io/GL-NeRF_project_page/.

6. Experimental Setting/Details
Question: Does the paper specify all the training and test details (e.g., data splits, hyperparameters, how they were chosen, type of optimizer, etc.) necessary to understand the results?
Answer: [Yes]
Justification: Experimental settings are discussed in Sec. 5 and Sec. A.1.
Guidelines:
- The answer NA means that the paper does not include experiments.
- The experimental setting should be presented in the core of the paper to a level of detail that is necessary to appreciate the results and make sense of them.
- The full details can be provided either with the code, in appendix, or as supplemental material.

7. Experiment Statistical Significance

Question: Does the paper report error bars suitably and correctly defined or other appropriate information about the statistical significance of the experiments?

Answer: [No]

Justification: Our method is training-free and deterministic at test time, so there is no randomness across runs.

Guidelines:
- The answer NA means that the paper does not include experiments.
- The authors should answer "Yes" if the results are accompanied by error bars, confidence intervals, or statistical significance tests, at least for the experiments that support the main claims of the paper.
- The factors of variability that the error bars are capturing should be clearly stated (for example, train/test split, initialization, random drawing of some parameter, or overall run with given experimental conditions).
- The method for calculating the error bars should be explained (closed form formula, call to a library function, bootstrap, etc.).
- The assumptions made should be given (e.g., Normally distributed errors).
- It should be clear whether the error bar is the standard deviation or the standard error of the mean.
- It is OK to report 1-sigma error bars, but one should state it. The authors should preferably report a 2-sigma error bar rather than state that they have a 96% CI, if the hypothesis of Normality of errors is not verified.
- For asymmetric distributions, the authors should be careful not to show in tables or figures symmetric error bars that would yield results that are out of range (e.g., negative error rates).
- If error bars are reported in tables or plots, the authors should explain in the text how they were calculated and reference the corresponding figures or tables in the text.
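As a concrete illustration of the quantities these guidelines distinguish, the minimal sketch below (ours, not part of the paper; the per-run scores are hypothetical placeholders) computes a 1-sigma error bar, the standard error of the mean, and a percentile bootstrap confidence interval using NumPy.

```python
import numpy as np

# Hypothetical per-run metric values (e.g., PSNR over several random seeds).
scores = np.array([31.2, 31.5, 30.9, 31.4, 31.1])

mean = scores.mean()
std = scores.std(ddof=1)           # 1-sigma error bar: sample standard deviation
sem = std / np.sqrt(len(scores))   # standard error of the mean

# Percentile bootstrap 95% CI: resample runs with replacement,
# no Normality assumption required.
rng = np.random.default_rng(0)
boot_means = np.array([
    rng.choice(scores, size=len(scores), replace=True).mean()
    for _ in range(10_000)
])
ci_low, ci_high = np.percentile(boot_means, [2.5, 97.5])

print(f"mean = {mean:.2f}, std (1-sigma) = {std:.2f}, SEM = {sem:.3f}")
print(f"bootstrap 95% CI = [{ci_low:.2f}, {ci_high:.2f}]")
```

Because the percentile bootstrap makes no Normality assumption and yields naturally asymmetric intervals, it avoids the out-of-range symmetric error bars the guidelines caution against.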
8. Experiments Compute Resources

Question: For each experiment, does the paper provide sufficient information on the computer resources (type of compute workers, memory, time of execution) needed to reproduce the experiments?

Answer: [Yes]

Justification: Compute resources are introduced in Sec. 5 and Sec. A.1.

Guidelines:
- The answer NA means that the paper does not include experiments.
- The paper should indicate the type of compute workers CPU or GPU, internal cluster, or cloud provider, including relevant memory and storage.
- The paper should provide the amount of compute required for each of the individual experimental runs as well as estimate the total compute.
- The paper should disclose whether the full research project required more compute than the experiments reported in the paper (e.g., preliminary or failed experiments that didn't make it into the paper).

9. Code Of Ethics

Question: Does the research conducted in the paper conform, in every respect, with the NeurIPS Code of Ethics https://neurips.cc/public/EthicsGuidelines?

Answer: [Yes]

Justification: We have read and comply with the NeurIPS Code of Ethics.

Guidelines:
- The answer NA means that the authors have not reviewed the NeurIPS Code of Ethics.
- If the authors answer No, they should explain the special circumstances that require a deviation from the Code of Ethics.
- The authors should make sure to preserve anonymity (e.g., if there is a special consideration due to laws or regulations in their jurisdiction).

10. Broader Impacts

Question: Does the paper discuss both potential positive societal impacts and negative societal impacts of the work performed?

Answer: [NA]

Justification: Our work is foundational research. There is no societal impact of the work performed.

Guidelines:
- The answer NA means that there is no societal impact of the work performed.
- If the authors answer NA or No, they should explain why their work has no societal impact or why the paper does not address societal impact.
- Examples of negative societal impacts include potential malicious or unintended uses (e.g., disinformation, generating fake profiles, surveillance), fairness considerations (e.g., deployment of technologies that could make decisions that unfairly impact specific groups), privacy considerations, and security considerations.
- The conference expects that many papers will be foundational research and not tied to particular applications, let alone deployments. However, if there is a direct path to any negative applications, the authors should point it out. For example, it is legitimate to point out that an improvement in the quality of generative models could be used to generate deepfakes for disinformation. On the other hand, it is not needed to point out that a generic algorithm for optimizing neural networks could enable people to train models that generate Deepfakes faster.
- The authors should consider possible harms that could arise when the technology is being used as intended and functioning correctly, harms that could arise when the technology is being used as intended but gives incorrect results, and harms following from (intentional or unintentional) misuse of the technology.
- If there are negative societal impacts, the authors could also discuss possible mitigation strategies (e.g., gated release of models, providing defenses in addition to attacks, mechanisms for monitoring misuse, mechanisms to monitor how a system learns from feedback over time, improving the efficiency and accessibility of ML).

11. Safeguards

Question: Does the paper describe safeguards that have been put in place for responsible release of data or models that have a high risk for misuse (e.g., pretrained language models, image generators, or scraped datasets)?

Answer: [NA]

Justification: The paper poses no such risks.

Guidelines:
- The answer NA means that the paper poses no such risks.
- Released models that have a high risk for misuse or dual-use should be released with necessary safeguards to allow for controlled use of the model, for example by requiring that users adhere to usage guidelines or restrictions to access the model or implementing safety filters.
- Datasets that have been scraped from the Internet could pose safety risks. The authors should describe how they avoided releasing unsafe images.
- We recognize that providing effective safeguards is challenging, and many papers do not require this, but we encourage authors to take this into account and make a best faith effort.

12. Licenses for existing assets

Question: Are the creators or original owners of assets (e.g., code, data, models), used in the paper, properly credited and are the license and terms of use explicitly mentioned and properly respected?

Answer: [Yes]

Justification: The datasets used in the paper, i.e., LLFF and NeRF-Synthetic, are properly cited, and the codebases we use are properly cited in Sec. 5 and Sec. A.1. The licenses for these assets are CC-BY 3.0 for the two datasets and the MIT License for the two codebases.
Guidelines:
- The answer NA means that the paper does not use existing assets.
- The authors should cite the original paper that produced the code package or dataset.
- The authors should state which version of the asset is used and, if possible, include a URL.
- The name of the license (e.g., CC-BY 4.0) should be included for each asset.
- For scraped data from a particular source (e.g., website), the copyright and terms of service of that source should be provided.
- If assets are released, the license, copyright information, and terms of use in the package should be provided. For popular datasets, paperswithcode.com/datasets has curated licenses for some datasets. Their licensing guide can help determine the license of a dataset.
- For existing datasets that are re-packaged, both the original license and the license of the derived asset (if it has changed) should be provided.
- If this information is not available online, the authors are encouraged to reach out to the asset's creators.

13. New Assets

Question: Are new assets introduced in the paper well documented and is the documentation provided alongside the assets?

Answer: [NA]

Justification: No new assets have been created in the paper.

Guidelines:
- The answer NA means that the paper does not release new assets.
- Researchers should communicate the details of the dataset/code/model as part of their submissions via structured templates. This includes details about training, license, limitations, etc.
- The paper should discuss whether and how consent was obtained from people whose asset is used.
- At submission time, remember to anonymize your assets (if applicable). You can either create an anonymized URL or include an anonymized zip file.

14. Crowdsourcing and Research with Human Subjects

Question: For crowdsourcing experiments and research with human subjects, does the paper include the full text of instructions given to participants and screenshots, if applicable, as well as details about compensation (if any)?

Answer: [NA]

Justification: The paper does not involve crowdsourcing nor research with human subjects.

Guidelines:
- The answer NA means that the paper does not involve crowdsourcing nor research with human subjects.
- Including this information in the supplemental material is fine, but if the main contribution of the paper involves human subjects, then as much detail as possible should be included in the main paper.
- According to the NeurIPS Code of Ethics, workers involved in data collection, curation, or other labor should be paid at least the minimum wage in the country of the data collector.

15. Institutional Review Board (IRB) Approvals or Equivalent for Research with Human Subjects

Question: Does the paper describe potential risks incurred by study participants, whether such risks were disclosed to the subjects, and whether Institutional Review Board (IRB) approvals (or an equivalent approval/review based on the requirements of your country or institution) were obtained?

Answer: [NA]

Justification: The paper does not involve crowdsourcing nor research with human subjects.

Guidelines:
- The answer NA means that the paper does not involve crowdsourcing nor research with human subjects.
- Depending on the country in which research is conducted, IRB approval (or equivalent) may be required for any human subjects research. If you obtained IRB approval, you should clearly state this in the paper.
- We recognize that the procedures for this may vary significantly between institutions and locations, and we expect authors to adhere to the NeurIPS Code of Ethics and the guidelines for their institution.
- For initial submissions, do not include any information that would break anonymity (if applicable), such as the institution conducting the review.