Published as a conference paper at ICLR 2023

DREAMFUSION: TEXT-TO-3D USING 2D DIFFUSION

Ben Poole1, Ajay Jain2, Jonathan T. Barron1, Ben Mildenhall1
1Google Research, 2UC Berkeley
{pooleb, barron, bmild}@google.com, ajayj@berkeley.edu

ABSTRACT

Recent breakthroughs in text-to-image synthesis have been driven by diffusion models trained on billions of image-text pairs. Adapting this approach to 3D synthesis would require large-scale datasets of labeled 3D data and efficient architectures for denoising 3D data, neither of which currently exist. In this work, we circumvent these limitations by using a pretrained 2D text-to-image diffusion model to perform text-to-3D synthesis. We introduce a loss based on probability density distillation that enables the use of a 2D diffusion model as a prior for optimization of a parametric image generator. Using this loss in a DeepDream-like procedure, we optimize a randomly-initialized 3D model (a Neural Radiance Field, or NeRF) via gradient descent such that its 2D renderings from random angles achieve a low loss. The resulting 3D model of the given text can be viewed from any angle, relit by arbitrary illumination, or composited into any 3D environment. Our approach requires no 3D training data and no modifications to the image diffusion model, demonstrating the effectiveness of pretrained image diffusion models as priors. See dreamfusionpaper.github.io for a more immersive view into our 3D results.

1 INTRODUCTION

Generative image models conditioned on text now support high-fidelity, diverse and controllable image synthesis (Nichol et al., 2022; Ramesh et al., 2021; 2022; Saharia et al., 2022; 2021a; Yu et al., 2022; Saharia et al., 2021b). These quality improvements have come from large aligned image-text datasets (Schuhmann et al., 2022) and scalable generative model architectures. Diffusion models are particularly effective at learning high-quality image generators with a stable and scalable denoising objective (Ho et al., 2020; Sohl-Dickstein et al., 2015; Song et al., 2021). Applying diffusion models to other modalities has been successful, but requires large amounts of modality-specific training data (Chen et al., 2020; Ho et al., 2022; Kong et al., 2021). In this work, we develop techniques to transfer pretrained 2D image-text diffusion models to 3D object synthesis, without any 3D data (see Figure 1).

Though 2D image generation is widely applicable, simulators and digital media like video games and movies demand thousands of detailed 3D assets to populate rich interactive environments. 3D assets are currently designed by hand in modeling software like Blender and Maya, a process requiring a great deal of time and expertise. Text-to-3D generative models could lower the barrier to entry for novices and improve the workflow of experienced artists. 3D generative models can be trained on explicit representations of structure like voxels (Wu et al., 2016; Chen et al., 2018) and point clouds (Yang et al., 2019; Cai et al., 2020; Zhou et al., 2021), but the 3D data needed is relatively scarce compared to plentiful 2D images. Our approach learns 3D structure using only a 2D diffusion model trained on images, and sidesteps this issue. GANs can learn controllable 3D generators from photographs of a single object category, by placing an adversarial loss on 2D image renderings of the output 3D object or scene (Henzler et al., 2019; Nguyen-Phuoc et al., 2019; Or-El et al., 2022).
Though these approaches have yielded promising results on specific object categories such as faces, they have not yet been demonstrated to support arbitrary text.

Neural Radiance Fields, or NeRF (Mildenhall et al., 2020), are an approach to inverse rendering in which a volumetric raytracer is combined with a neural mapping from spatial coordinates to color and volumetric density. NeRF has become a critical tool for neural inverse rendering (Tewari et al., 2022). Originally, NeRF was found to work well for classic 3D reconstruction tasks: many images of a scene are provided as input to a model, and a NeRF is optimized to recover the geometry of that specific scene, which allows novel views of that scene to be synthesized from unobserved angles.

Figure 1: DreamFusion uses a pretrained text-to-image diffusion model to generate realistic 3D models from text prompts. Rendered 3D models are presented from two views, with textureless renders and normals to the right. See dreamfusionpaper.github.io for videos of these results. Prompts shown include: an orangutan making a clay bowl on a throwing wheel*, a raccoon astronaut holding his helmet, a blue jay standing on a large basket of rainbow macarons*, a corgi taking a selfie*, a table with dim sum on it, a lion reading the newspaper*, Michelangelo style statue of dog reading news on a cellphone, a tiger dressed as a doctor*, a steam engine train, high resolution*, a frog wearing a sweater*, a humanoid robot playing the cello*, Sydney opera house, aerial view, an all-utility vehicle driving across a stream, a chimpanzee dressed like Henry VIII king of England*, a baby bunny sitting on top of a stack of pancakes, a sliced loaf of fresh bread, a bulldozer clearing away a pile of snow*, a classic Packard car*, zoomed out view of Tower Bridge made out of gingerbread and candy, a robot and dinosaur playing chess, high resolution*, and a squirrel gesturing in front of an easel showing colorful pie charts. Symbols indicate the following prompt prefixes, which we found helped to improve quality and realism: * a DSLR photo of..., a zoomed out DSLR photo of..., a wide angle zoomed out DSLR photo of...

Many 3D generative approaches have found success in incorporating NeRF-like models in the generative process (Schwarz et al., 2020; Chan et al., 2021b;a; Gu et al., 2021; Liu et al., 2022). One such approach is Dream Fields (Jain et al., 2022), which uses the frozen image-text joint embedding models from CLIP (Radford et al., 2021) and an optimization-based approach to train NeRFs. This work showed that pretrained 2D image-text models may be used for 3D synthesis, though 3D objects produced by this approach tend to lack realism and accuracy. CLIP has been used to guide other approaches based on voxel grids and meshes (Sanghi et al., 2022; Jetchev, 2021; Wang et al., 2022).

We adopt a similar approach to Dream Fields, but replace CLIP with a loss derived from distillation of a 2D diffusion model. Our loss is based on probability density distillation, minimizing the KL divergence between a family of Gaussian distributions with shared means based on the forward process of diffusion and the score functions learned by the pretrained diffusion model. The resulting Score Distillation Sampling (SDS) method enables sampling via optimization in differentiable image parameterizations.
By combining SDS with a NeRF variant tailored to this 3D generation task, DreamFusion generates high-fidelity coherent 3D objects and scenes for a diverse set of user-provided text prompts.

2 DIFFUSION MODELS AND SCORE DISTILLATION SAMPLING

Diffusion models are latent-variable generative models that learn to gradually transform a sample from a tractable noise distribution towards a data distribution (Sohl-Dickstein et al., 2015; Ho et al., 2020). Diffusion models consist of a forward process q that slowly removes structure from data x by adding noise, and a reverse process or generative model p that slowly adds structure starting from noise zt. The forward process is typically a Gaussian distribution that transitions from the previous, less noisy latent at timestep t to a noisier latent at timestep t + 1. We can compute the marginal distribution of the latent variables at timestep t given an initial datapoint x by integrating out intermediate timesteps: q(zt | x) = N(αt x, σt² I). The marginals integrating out the data density q(x) are q(zt) = ∫ q(zt | x) q(x) dx, and correspond to smoothed versions of the data distribution. The coefficients αt and σt are chosen such that q(zt) is close to the data density at the start of the process (σ0 ≈ 0) and close to Gaussian at the end of the forward process (σT ≈ 1), with αt² = 1 − σt² chosen to preserve variance (Kingma et al., 2021; Song et al., 2021).

The generative model p is trained to slowly add structure starting from random noise p(zT) = N(0, I) with transitions pϕ(zt−1 | zt). Theoretically, with enough timesteps, the optimal reverse process step is also Gaussian and related to an optimal MSE denoiser (Sohl-Dickstein et al., 2015). Transitions are typically parameterized as pϕ(zt−1 | zt) = q(zt−1 | zt, x = x̂ϕ(zt; t)), where q(zt−1 | zt, x) is a posterior distribution derived from the forward process and x̂ϕ(zt; t) is a learned approximation of the optimal denoiser. Instead of directly predicting x̂ϕ, Ho et al. (2020) train an image-to-image U-Net ϵϕ(zt; t) that predicts the noise content of the latent zt: E[x | zt] ≈ x̂ϕ(zt; t) = (zt − σt ϵϕ(zt; t)) / αt. The predicted noise can be related to a predicted score function for the smoothed density ∇zt log p(zt) through Tweedie's formula (Robbins, 1992): ϵϕ(zt; t) = −σt sϕ(zt; t).

Training the generative model with a (weighted) evidence lower bound (ELBO) simplifies to a weighted denoising score matching objective for parameters ϕ (Ho et al., 2020; Kingma et al., 2021):

    LDiff(ϕ, x) = E_{t∼U(0,1), ϵ∼N(0,I)} [ w(t) ‖ϵϕ(αt x + σt ϵ; t) − ϵ‖₂² ],    (1)

where w(t) is a weighting function that depends on the timestep t. Diffusion model training can thereby be viewed as either learning a latent-variable model (Sohl-Dickstein et al., 2015; Ho et al., 2020), or learning a sequence of score functions corresponding to noisier versions of the data (Vincent, 2011; Song & Ermon, 2019; Song et al., 2021). We will use pϕ(zt; t) to denote the approximate marginal distribution whose score function is given by sϕ(zt; t) = −ϵϕ(zt; t)/σt.

Our work builds on text-to-image diffusion models that learn ϵϕ(zt; t, y) conditioned on text embeddings y (Saharia et al., 2022; Ramesh et al., 2022; Nichol et al., 2022). These models use classifier-free guidance (CFG; Ho & Salimans, 2022), which jointly learns an unconditional model to enable higher quality generation via a guidance scale parameter ω:

    ϵ̂ϕ(zt; y, t) = (1 + ω) ϵϕ(zt; y, t) − ω ϵϕ(zt; t).
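As a concrete reference, the sketch below implements this guided noise prediction in a few lines of NumPy; `eps_model` is a stand-in for a pretrained text-conditional denoiser (not part of this paper's code), and using `y=None` to select the unconditional branch is an assumption of the sketch.

```python
import numpy as np

def classifier_free_guidance(eps_model, z_t, t, y, omega):
    """Guided prediction: eps_hat = (1 + omega) * eps(z_t; y, t) - omega * eps(z_t; t)."""
    eps_cond = eps_model(z_t, t, y)       # text-conditional noise prediction
    eps_uncond = eps_model(z_t, t, None)  # unconditional prediction (text dropped)
    return (1.0 + omega) * eps_cond - omega * eps_uncond

# Toy usage with a dummy denoiser standing in for the pretrained text-to-image U-Net.
rng = np.random.default_rng(0)
dummy_eps_model = lambda z, t, y: 0.2 * z if y is not None else 0.1 * z
z_t = rng.normal(size=(64, 64, 3))
print(classifier_free_guidance(dummy_eps_model, z_t, t=0.5, y="a tree frog", omega=100.0).shape)
```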
CFG alters the score function to prefer regions where the ratio of the conditional density to the unconditional density is large. In practice, setting ω > 0 improves sample fidelity at the cost of diversity. We use ϵ̂ and p̂ throughout to denote the guided version of the noise prediction and marginal distribution.

Figure 2: Comparison of 2D sampling methods from a text-to-image diffusion model with the text "a photo of a tree frog wearing a sweater." Ancestral sampling updates the sample in pixel space, while Score Distillation Sampling updates parameters with SGD. For score distillation sampling, as an example we use an image generator that restricts images to be symmetric by having x = (flip(θ), θ).

2.1 HOW CAN WE SAMPLE IN PARAMETER SPACE, NOT PIXEL SPACE?

Existing approaches for sampling from diffusion models generate a sample that is the same type and dimensionality as the observed data the model was trained on (Song et al., 2021; 2020). Though conditional diffusion sampling enables quite a bit of flexibility (e.g. inpainting), diffusion models trained on pixels have traditionally been used to sample only pixels. We are not interested in sampling pixels; we instead want to create 3D models that look like good images when rendered from random angles. Such models can be specified as a differentiable image parameterization (DIP; Mordvintsev et al., 2018), where a differentiable generator g transforms parameters θ to create an image x = g(θ). DIPs allow us to express constraints, optimize in more compact spaces (e.g. arbitrary-resolution coordinate-based MLPs), or leverage more powerful optimization algorithms for traversing pixel space. For 3D, we let θ be the parameters of a 3D volume and g a volumetric renderer. To learn these parameters, we require a loss function that can be applied to diffusion models.

Our approach leverages the structure of diffusion models to enable tractable sampling via optimization: a loss function that, when minimized, yields a sample. We optimize over parameters θ such that x = g(θ) looks like a sample from the frozen diffusion model. To perform this optimization, we need a differentiable loss function where plausible images have low loss and implausible images have high loss, in a similar style to DeepDream (Mordvintsev et al., 2015).

We first investigated reusing the diffusion training loss (Eqn. 1) to find modes of the learned conditional density p(x|y). While modes of generative models in high dimensions are often far from typical samples (Nalisnick et al., 2018), the multiscale nature of diffusion model training may help to avoid these pathologies. Minimizing the diffusion training loss with respect to a generated datapoint x = g(θ) gives θ* = argmin_θ LDiff(ϕ, x = g(θ)). In practice, we found that this loss function did not produce realistic samples even when using an identity DIP where x = θ. Concurrent work from Graikos et al. (2022) shows that this method can be made to work with carefully chosen timestep schedules, but we found this objective brittle and its timestep schedules challenging to tune. To understand the difficulties of this approach, consider the gradient of LDiff:

    ∇θ LDiff(ϕ, x = g(θ)) = E_{t,ϵ} [ w(t) (ϵ̂ϕ(zt; y, t) − ϵ) ∂ϵ̂ϕ(zt; y, t)/∂zt ∂x/∂θ ],    (2)

where the three factors are the noise residual, the U-Net Jacobian, and the generator Jacobian, and where we absorb the constant αt I = ∂zt/∂x into w(t) and use the classifier-free-guided ϵ̂ϕ.
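For concreteness, the sketch below spells out this chain rule for a toy linear generator and a toy linear "denoiser" (both illustrative stand-ins, not the paper's models), so that every Jacobian in Eqn. 2 appears explicitly; the W.T factor plays the role of the U-Net Jacobian that a real implementation would obtain by backpropagating through the diffusion U-Net.

```python
import numpy as np

rng = np.random.default_rng(0)
d_theta, d_x = 4, 6
A = rng.normal(size=(d_x, d_theta))      # generator Jacobian dx/dtheta (x = A @ theta)
W = 0.1 * rng.normal(size=(d_x, d_x))    # "U-Net" Jacobian d(eps_hat)/dz (eps_hat = W @ z)

def grad_L_diff(theta, eps, w=1.0, alpha=0.7, sigma=0.7):
    """Full gradient of the diffusion training loss w.r.t. theta, as in Eqn. 2.

    The W.T factor is the U-Net Jacobian; the constant dz/dx = alpha * I is
    absorbed into the weight w, following the text.
    """
    x = A @ theta
    z_t = alpha * x + sigma * eps
    eps_hat = W @ z_t
    residual = eps_hat - eps
    return w * A.T @ (W.T @ residual)

theta = rng.normal(size=d_theta)
eps = rng.normal(size=d_x)
print(grad_L_diff(theta, eps))
```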
In practice, the U-Net Jacobian term is expensive to compute (it requires backpropagating through the diffusion model U-Net), and is poorly conditioned for small noise levels as it is trained to approximate the scaled Hessian of the marginal density. We found that omitting the U-Net Jacobian term leads to an effective gradient for optimizing DIPs with diffusion models:

    ∇θ LSDS(ϕ, x = g(θ)) ≜ E_{t,ϵ} [ w(t) (ϵ̂ϕ(zt; y, t) − ϵ) ∂x/∂θ ].    (3)

Intuitively, this loss perturbs x with a random amount of noise corresponding to the timestep t, and estimates an update direction that follows the score function of the diffusion model to move to a higher density region. While this gradient for learning DIPs with diffusion models may appear ad hoc, in Appendix A.4 we show that it is the gradient of a weighted probability density distillation loss (van den Oord et al., 2018) using the learned score functions from the diffusion model:

    ∇θ LSDS(ϕ, x = g(θ)) = ∇θ E_t [ (σt/αt) w(t) KL( q(zt | g(θ); y, t) ‖ pϕ(zt; y, t) ) ].    (4)

Figure 3: DreamFusion generates 3D objects from a natural language caption such as "a DSLR photo of a peacock on a surfboard." The scene is represented by a Neural Radiance Field that is randomly initialized and trained from scratch for each caption. Our NeRF parameterizes volumetric density and albedo (color) with an MLP. We render the NeRF from a random camera, using normals computed from gradients of the density to shade the scene with a random lighting direction. Shading reveals geometric details that are ambiguous from a single viewpoint. To compute parameter updates, DreamFusion diffuses the rendering and reconstructs it with a (frozen) conditional Imagen model to predict the injected noise ϵ̂ϕ(zt | y; t). This contains structure that should improve fidelity, but is high variance. Subtracting the injected noise produces a low-variance update direction stopgrad[ϵ̂ϕ − ϵ] that is backpropagated through the rendering process to update the NeRF MLP parameters.

We name our sampling approach Score Distillation Sampling (SDS) as it is related to distillation, but uses score functions instead of densities. We refer to it as a sampler because the noise in the variational family q(zt | ...) disappears as t → 0 and the mean parameter of the variational distribution g(θ) becomes the sample of interest. Our loss is easy to implement (see Fig. 8), and relatively robust to the choice of weighting w(t). Since the diffusion model directly predicts the update direction, we do not need to backpropagate through the diffusion model; the model simply acts like an efficient, frozen critic that predicts image-space edits.

Given the mode-seeking nature of LSDS, it may be unclear if minimizing this loss will produce good samples. In Fig. 2, we demonstrate that SDS can generate constrained images with reasonable quality. Empirically, we found that setting the guidance weight ω to a large value for classifier-free guidance improves quality (Appendix Table 10). SDS produces detail comparable to ancestral sampling, but enables new transfer learning applications because it operates in parameter space.
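To make parameter-space sampling concrete, below is a minimal end-to-end sketch of the SDS update from Eqn. 3 using the symmetric-image DIP from Fig. 2. The closed-form `toy_eps_hat` (exact when all data mass sits at a single `target` image), the variance-preserving schedule, and the hyperparameters are assumptions of this toy, standing in for a real text-conditional diffusion model.

```python
import numpy as np

rng = np.random.default_rng(0)
H, W = 8, 4                          # half-image width; the full image is (H, 2 * W)
target = np.full((H, 2 * W), 0.5)    # toy "dataset": a single target image

def generator(theta):
    """Symmetric DIP from Fig. 2: x = (flip(theta), theta)."""
    return np.concatenate([np.flip(theta, axis=1), theta], axis=1)

def generator_vjp(grad_x):
    """Vector-Jacobian product of the generator: gradients from both halves land on theta."""
    left, right = grad_x[:, :W], grad_x[:, W:]
    return np.flip(left, axis=1) + right

def toy_eps_hat(z_t, alpha_t, sigma_t):
    """Optimal noise prediction when all data mass sits at `target` (a stand-in denoiser)."""
    return (z_t - alpha_t * target) / sigma_t

theta = rng.normal(size=(H, W))
lr = 0.1
for step in range(500):
    t = rng.uniform(0.02, 0.98)
    alpha_t, sigma_t = np.sqrt(1.0 - t), np.sqrt(t)     # simple variance-preserving schedule
    eps = rng.normal(size=(H, 2 * W))
    x = generator(theta)
    z_t = alpha_t * x + sigma_t * eps                   # diffuse the generated image
    grad_x = sigma_t**2 * (toy_eps_hat(z_t, alpha_t, sigma_t) - eps)  # Eqn. 3, w(t) = sigma_t^2
    theta -= lr * generator_vjp(grad_x)                 # no backprop through the denoiser

print(np.round(generator(theta)[0], 2))  # rows approach the toy target (0.5) and stay symmetric
```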
3 THE DREAMFUSION ALGORITHM

Now that we have demonstrated how a diffusion model can be used as a loss within a generic continuous optimization problem to generate samples, we will construct our specific algorithm that allows us to generate 3D assets from text. For the diffusion model, we use the Imagen model from Saharia et al. (2022), which has been trained to synthesize images from text. We only use the 64 × 64 base model (not the super-resolution cascade for generating higher-resolution images), and use this pretrained model as-is with no modifications. To synthesize a scene from text, we initialize a NeRF-like model with random weights, then repeatedly render views of that NeRF from random camera positions and angles, using these renderings as the input to our score distillation loss function that wraps around Imagen. As we will demonstrate, simple gradient descent with this approach eventually results in a 3D model (parameterized as a NeRF) that resembles the text. See Fig. 3 for an overview of our approach.

3.1 NEURAL RENDERING OF A 3D MODEL

NeRF is a technique for neural inverse rendering that consists of a volumetric raytracer and a multilayer perceptron (MLP). Rendering an image from a NeRF is done by casting a ray for each pixel from a camera's center of projection through the pixel's location in the image plane and out into the world. Sampled 3D points µ along each ray are then passed through an MLP, which produces 4 scalar values as output: a volumetric density τ (how opaque the scene geometry at that 3D coordinate is) and an RGB color c. These densities and colors are then alpha-composited from the back of the ray towards the camera, producing the final rendered RGB value for the pixel:

    C = Σi wi ci,    wi = αi ∏_{j<i} (1 − αj).

For elevation angles ϕcam > 60°, we append "overhead view" to the text prompt. For ϕcam ≤ 60°, we use a weighted combination of the text embeddings for appending "front view," "side view," or "back view" depending on the value of the azimuth angle θcam (see App. A.2 for details). We use the pretrained 64 × 64 base text-to-image model from Saharia et al. (2022). This model was trained on large-scale web image-text data, and is conditioned on T5-XXL text embeddings (Raffel et al., 2020). We use a weighting function of w(t) = σt², but found that a uniform weighting performed similarly. We sample t ∼ U(0.02, 0.98), avoiding very high and low noise levels due to numerical instabilities. For classifier-free guidance, we set ω = 100, finding that higher guidance weights give improved sample quality. This is much larger than the guidance weights used by image sampling methods, and is likely required due to the mode-seeking nature of our objective, which results in oversmoothing at small guidance weights (see Appendix Table 10). Given the rendered image and sampled timestep t, we sample noise ϵ and compute the gradient of the NeRF parameters according to Eqn. 3.

4. Optimization. Our 3D scenes are optimized on a TPUv4 machine with 4 chips. Each chip renders a separate view and evaluates the diffusion U-Net with a per-device batch size of 1. We optimize for 15,000 iterations, which takes around 1.5 hours. Compute time is split evenly between rendering the NeRF and evaluating the diffusion model. Parameters are optimized using the Distributed Shampoo optimizer (Anil et al., 2020). See Appendix A.2 for optimization settings.
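To make the view-dependent prompt augmentation above concrete, here is a minimal sketch; the 60° elevation threshold follows the text, while the simple nearest-direction choice and the exact azimuth quadrant boundaries are illustrative assumptions (the paper interpolates text embeddings, see App. A.2).

```python
def view_dependent_prompt(prompt, elevation_deg, azimuth_deg):
    """Append a view phrase to the prompt based on the sampled camera pose.

    Elevations above 60 degrees get "overhead view"; otherwise the closest of
    front/side/back view is chosen from the azimuth (a simplification of the
    embedding interpolation described in App. A.2).
    """
    if elevation_deg > 60.0:
        return f"{prompt}, overhead view"
    azimuth = azimuth_deg % 360.0
    if azimuth < 45.0 or azimuth >= 315.0:
        view = "front view"
    elif 135.0 <= azimuth < 225.0:
        view = "back view"
    else:
        view = "side view"
    return f"{prompt}, {view}"

print(view_dependent_prompt("a DSLR photo of a peacock on a surfboard", 30.0, 200.0))
```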
Table 1: Evaluating the coherence of DreamFusion generations with their caption using different CLIP retrieval models. We compare to the ground-truth MS-COCO images in the object-centric subset of Jain et al. (2022) as well as to Khalid et al. (2022). Evaluated with only 1 seed per prompt. Metrics shown in parentheses may be overfit, as the same CLIP model is used during training and eval. Cells left blank were not reported.

| Method | CLIP B/32 Color | CLIP B/32 Geo | CLIP B/16 Color | CLIP B/16 Geo | CLIP L/14 Color | CLIP L/14 Geo |
|---|---|---|---|---|---|---|
| GT Images | 77.1 | - | 79.1 | - | - | - |
| Dream Fields | 68.3 | - | 74.2 | - | - | - |
| Dream Fields (reimpl.) | 78.6 ± 1.3 | - | (99.9) | (0.8) | 82.9 ± 1.4 | - |
| CLIP-Mesh | 67.8 | - | 75.8 | - | 74.5 | - |
| DreamFusion (base) | 81.5 | 0.02 | 84.7 | 0.03 | 88.4 | 0.05 |
| DreamFusion | 75.1 | 42.5 | 77.5 | 46.6 | 79.7 | 58.5 |

Figure 5: Qualitative comparison with baselines (Dream Fields reimplementation vs. DreamFusion). Prompts shown: "matte painting of a castle made of cheesecake surrounded by a moat made of ice cream," "a hamburger," and "a vase with pink flowers."

4 EXPERIMENTS

We evaluate the ability of DreamFusion to generate coherent 3D scenes from a variety of text prompts. We compare to existing zero-shot text-to-3D generative models, identify the key components of our model that enable accurate 3D geometry, and explore the qualitative capabilities of DreamFusion such as the compositional generation shown in Figure 4.

3D reconstruction tasks are typically evaluated using reference-based metrics which compare recovered geometry to some ground truth. The view-synthesis literature often uses PSNR to compare rendered views with a held-out photograph. These reference-based metrics are difficult to apply to zero-shot text-to-3D generation, as there is no true 3D scene corresponding to our text prompts. Following Jain et al. (2022), we evaluate the CLIP R-Precision (Park et al., 2021a), an automated metric for the consistency of rendered images with respect to the input caption. The R-Precision is the accuracy with which CLIP (Radford et al., 2021) retrieves the correct caption among a set of distractors given a rendering of the scene. We use the 153 prompts from the object-centric COCO validation subset of Dream Fields. We also measure CLIP R-Precision on textureless renders to evaluate geometry, since we found existing metrics do not capture the quality of the geometry, often yielding high values when texture is painted on flat geometry.

Table 1 reports CLIP R-Precision for DreamFusion and several baselines. These include Dream Fields, CLIP-Mesh (which optimizes a mesh with CLIP), DreamFusion without any view augmentations, view-dependent prompts, or shading (base), and an oracle that evaluates the original captioned image pairs in MS-COCO. We also compare against an enhanced reimplementation of Dream Fields where we use our own 3D representation (Sec. 3.1). Since this evaluation is based on CLIP, Dream Fields and CLIP-Mesh have an unfair advantage as they use CLIP during training. Despite this, DreamFusion outperforms both baselines on color images, and approaches the performance of ground-truth images. While our implementation of Dream Fields performs nearly at chance when evaluating geometry (Geo) with textureless renders, DreamFusion is consistent with captions 58.5% of the time. See Appendix A.3 for more details of the experimental setup.

Ablations. Fig. 6 shows CLIP R-Precision for a simplified DreamFusion ablation, progressively adding in optimization choices: a large range of viewpoints (View Aug), view-dependent prompts (View Dep), optimizing illuminated renders in addition to unlit albedo color renders (Lighting), and optimizing textureless shaded geometry images (Textureless).
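For reference, the CLIP R-Precision retrieval metric used above can be sketched as follows; the random embeddings in the toy usage stand in for a real pretrained CLIP image/text encoder, which this sketch does not load.

```python
import numpy as np

def clip_r_precision(render_embeds, caption_embeds, true_idx):
    """Fraction of renders for which CLIP retrieves the correct caption.

    render_embeds: (N, D) unit-normalized image embeddings of rendered scenes.
    caption_embeds: (M, D) unit-normalized text embeddings (true captions act as distractors for each other).
    true_idx: (N,) index of the correct caption for each render.
    """
    sims = render_embeds @ caption_embeds.T      # cosine similarities
    retrieved = np.argmax(sims, axis=1)          # top-1 retrieved caption per render
    return float(np.mean(retrieved == true_idx))

# Toy usage with random "embeddings" standing in for a real CLIP model.
rng = np.random.default_rng(0)
captions = rng.normal(size=(153, 512))
captions /= np.linalg.norm(captions, axis=1, keepdims=True)
renders = captions + 0.1 * rng.normal(size=captions.shape)   # renders roughly match their captions
renders /= np.linalg.norm(renders, axis=1, keepdims=True)
print(clip_r_precision(renders, captions, np.arange(153)))
```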
We measure R-Precision on the albedo render as in baselines (left), the full shaded render (middle), and the textureless render (right) to check geometric quality. Geometry significantly improves with each of these choices, and full renderings improve by +12.5%. Fig. 6 shows qualitative results for the ablation. This ablation also highlights how the albedo renders can be deceiving: our base model achieves the highest score, but exhibits poor geometry (the dog has multiple heads). Recovering accurate geometry requires view-dependent prompts, illumination, and textureless renders.

(Figure 6 plot: CLIP R-Precision vs. render type (Albedo, Shaded, Textureless) for ablations (i) +View Aug, (ii) +View Dep, (iii) +Lighting, (iv) +Textureless.)

Figure 6: An ablation study of DreamFusion. Left: we evaluate ablations of our method on albedo renders, full shaded and illuminated renders, and textureless illuminated geometry, using CLIP L/14 on object-centric COCO. Right: visualizations of the impact of each ablation for "A bulldog is wearing a black pirate hat." on albedo (top), shaded (middle), and textureless renderings (bottom). The base method (i) without view-dependent prompts results in a multi-faced dog with flat geometry. Adding in view-dependent prompts (ii) improves geometry, but the surfaces are highly non-smooth and result in poor shaded renders. Introducing lighting (iii) improves geometry, but darker areas (e.g. the hat) remain non-smooth. Rendering without color (iv) helps to smooth the geometry, but also causes some color details like the skull and crossbones to be carved into the geometry.

5 DISCUSSION

We have presented DreamFusion, an effective technique for text-to-3D synthesis for a wide range of text prompts. DreamFusion works by transferring scalable, high-quality 2D image diffusion models to the 3D domain through our use of a novel Score Distillation Sampling approach and a novel NeRF-like rendering engine. DreamFusion does not require 3D or multi-view training data, and uses only a pre-trained 2D diffusion model (trained on only 2D images) to perform 3D synthesis.

Though DreamFusion produces compelling results and outperforms prior work on this task, it still has several limitations. SDS is not a perfect loss function when applied to image sampling, and often produces oversaturated and oversmoothed results relative to ancestral sampling. While dynamic thresholding (Saharia et al., 2022) partially ameliorates this issue when applying SDS to images, it did not resolve this issue in a NeRF context. Additionally, 2D image samples produced using SDS tend to lack diversity compared to ancestral sampling, and our 3D results exhibit few differences across random seeds. This may be fundamental to our use of the reverse KL divergence, which has previously been noted to have mode-seeking properties in the context of variational inference and probability density distillation. DreamFusion uses the 64 × 64 Imagen model, and as such our 3D synthesized models tend to lack fine details. Using a higher-resolution diffusion model and a bigger NeRF would presumably address this, but synthesis would become impractically slow. Hopefully improvements in the efficiency of diffusion models and neural rendering will enable tractable 3D synthesis at high resolution in the future.

The problem of 3D reconstruction from 2D observations is widely understood to be highly ill-posed, and this ambiguity has consequences in the context of 3D synthesis.
Fundamentally, our task is hard for the same reason that inverse rendering is hard: there exist many possible 3D worlds that result in identical 2D images. The optimization landscape of our task is therefore highly non-convex, and many of the details of this work are designed specifically to sidestep these local minima. But despite our best efforts we still sometimes observe local minima, such as 3D reconstructions where all scene content is painted onto a single flat surface. Though the techniques presented in this work are effective, this task of lifting 2D observations into a 3D world is inherently ambiguous, and may benefit from more robust 3D priors.

ETHICS STATEMENT

Generative models for synthesizing images carry with them several ethical concerns, and these concerns are shared by (or perhaps exacerbated in) 3D generative models such as ours. Because DreamFusion uses the Imagen diffusion model as a prior, it inherits any problematic biases and limitations that Imagen may have. While Imagen's dataset was partially filtered, the LAION-400M (Schuhmann et al., 2022) subset of its data was found to contain undesirable images (Birhane et al., 2021). Imagen is also conditioned on features from a pretrained large language model, which itself may have unwanted biases. It is important to be careful about the contents of datasets that are used in text-to-image and image-to-3D models so as to not propagate hateful media. Generative models, in the hands of bad actors, could be used to generate disinformation. Disinformation in the form of 3D objects may be more convincing than 2D images (though renderings of our synthesized 3D models are less realistic than the state of the art in 2D image synthesis). Generative models such as ours may have the potential to displace creative workers via automation. That said, these tools may also enable growth and improve accessibility for the creative industry.

REPRODUCIBILITY STATEMENT

The mip-NeRF 360 model that we build upon is publicly available through the MultiNeRF code repository (Mildenhall et al., 2022). While the Imagen diffusion model is not publicly available, other conditional diffusion models may produce similar results with the DreamFusion algorithm. To aid reproducibility, we have included a schematic overview of the algorithm in Figure 3, pseudocode for Score Distillation Sampling in Figure 8, hyperparameters in Appendix A.2, and additional evaluation setup details in Appendix A.3. Derivations for our loss are also included in Appendix A.4.

REFERENCES

Rohan Anil, Vineet Gupta, Tomer Koren, Kevin Regan, and Yoram Singer. Scalable second order optimization for deep learning, 2020. URL https://arxiv.org/abs/2002.09018.
Jimmy Lei Ba, Jamie Ryan Kiros, and Geoffrey E Hinton. Layer normalization. arXiv:1607.06450, 2016.
Jonathan T Barron, Ben Mildenhall, Matthew Tancik, Peter Hedman, Ricardo Martin-Brualla, and Pratul P Srinivasan. Mip-NeRF: A multiscale representation for anti-aliasing neural radiance fields. ICCV, 2021.
Jonathan T. Barron, Ben Mildenhall, Dor Verbin, Pratul P. Srinivasan, and Peter Hedman. Mip-NeRF 360: Unbounded anti-aliased neural radiance fields. CVPR, 2022.
Sai Bi, Zexiang Xu, Pratul Srinivasan, Ben Mildenhall, Kalyan Sunkavalli, Miloš Hašan, Yannick Hold-Geoffroy, David Kriegman, and Ravi Ramamoorthi. Neural reflectance fields for appearance acquisition. arXiv:2008.03824, 2020.
Abeba Birhane, Vinay Uday Prabhu, and Emmanuel Kahembwe.
Multimodal datasets: misogyny, pornography, and malignant stereotypes. ar Xiv:2110.01963, 2021. Mark Boss, Raphael Braun, Varun Jampani, Jonathan T. Barron, Ce Liu, and Hendrik P.A. Lensch. Ne RD: Neural reflectance decomposition from image collections. ICCV, 2021. Ruojin Cai, Guandao Yang, Hadar Averbuch-Elor, Zekun Hao, Serge Belongie, Noah Snavely, and Bharath Hariharan. Learning gradient fields for shape generation. ECCV, 2020. Eric R. Chan, Connor Z. Lin, Matthew A. Chan, Koki Nagano, Boxiao Pan, Shalini De Mello, Orazio Gallo, Leonidas Guibas, Jonathan Tremblay, Sameh Khamis, Tero Karras, and Gordon Wetzstein. Efficient geometry-aware 3D generative adversarial networks. ar Xiv, 2021a. Eric R. Chan, Marco Monteiro, Petr Kellnhofer, Jiajun Wu, and Gordon Wetzstein. pi-GAN: Periodic implicit generative adversarial networks for 3d-aware image synthesis. CVPR, 2021b. Published as a conference paper at ICLR 2023 Kevin Chen, Christopher B Choy, Manolis Savva, Angel X Chang, Thomas Funkhouser, and Silvio Savarese. Text2shape: Generating shapes from natural language by learning joint embeddings. ar Xiv:1803.08495, 2018. Nanxin Chen, Yu Zhang, Heiga Zen, Ron J Weiss, Mohammad Norouzi, and William Chan. Wavegrad: Estimating gradients for waveform generation. ar Xiv:2009.00713, 2020. Alexandros Graikos, Nikolay Malkin, Nebojsa Jojic, and Dimitris Samaras. Diffusion models as plug-and-play priors. ar Xiv:2206.09012, 2022. Will Grathwohl, Kuan-Chieh Wang, Joern-Henrik Jacobsen, David Duvenaud, and Richard Zemel. Learning the stein discrepancy for training and evaluating energy-based models without sampling. ICML, 2020. Jiatao Gu, Lingjie Liu, Peng Wang, and Christian Theobalt. Style Ne RF: A style-based 3d-aware generator for high-resolution image synthesis, 2021. Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. CVPR, 2016. Dan Hendrycks and Kevin Gimpel. Gaussian error linear units (GELUs). ar Xiv:1606.08415, 2016. Philipp Henzler, Niloy J Mitra, , and Tobias Ritschel. Escaping Plato s Cave: 3D shape from adversarial rendering. ICCV, 2019. Jonathan Ho and Tim Salimans. Classifier-free diffusion guidance. ar Xiv:2207.12598, 2022. Jonathan Ho, Ajay Jain, and Pieter Abbeel. Denoising diffusion probabilistic models. Neur IPS, 2020. Jonathan Ho, Tim Salimans, Alexey Gritsenko, William Chan, Mohammad Norouzi, and David J Fleet. Video diffusion models. ar Xiv:2204.03458, 2022. Fangzhou Hong, Mingyuan Zhang, Liang Pan, Zhongang Cai, Lei Yang, and Ziwei Liu. Avatar CLIP: Zero-shot text-driven generation and animation of 3D avatars. ACM Transactions on Graphics (TOG), 2022. Tianyang Hu, Zixiang Chen, Hanxi Sun, Jincheng Bai, Mao Ye, and Guang Cheng. Stein neural sampler. ar Xiv:1810.03545, 2018. Chin-Wei Huang, Faruk Ahmed, Kundan Kumar, Alexandre Lacoste, and Aaron C. Courville. Probability distillation: A caveat and alternatives. UAI, 2019. Ajay Jain, Ben Mildenhall, Jonathan T. Barron, Pieter Abbeel, and Ben Poole. Zero-shot text-guided object generation with dream fields. CVPR, 2022. Nikolay Jetchev. Clip Matrix: Text-controlled creation of 3d textured meshes. ar Xiv:2109.12922, 2021. Nasir Mohammad Khalid, Tianhao Xie, Eugene Belilovsky, and Popa Tiberiu. CLIP-Mesh: Generating textured meshes from text using pretrained image-text models. SIGGRAPH Asia 2022 Conference Papers, 2022. Diederik P Kingma, Tim Salimans, Ben Poole, and Jonathan Ho. Variational diffusion models. Neur IPS, 2021. 
Zhifeng Kong, Wei Ping, Jiaji Huang, Kexin Zhao, and Bryan Catanzaro. Diffwave: A versatile diffusion model for audio synthesis. ICLR, 2021. Johann Heinrich Lambert. Photometria sive de mensura et gradibus luminis, colorum et umbrae. sumptibus vidvae E. Klett, typis CP Detleffsen, 1760. Jae Hyun Lim, Aaron C. Courville, Christopher J. Pal, and Chin-Wei Huang. AR-DAE: towards unbiased neural entropy gradient estimation. ICML, 2020. Published as a conference paper at ICLR 2023 Zhengzhe Liu, Yi Wang, Xiaojuan Qi, and Chi-Wing Fu. Towards implicit text-guided 3d shape generation. CVPR, 2022. Oscar Michel, Roi Bar-On, Richard Liu, Sagie Benaim, and Rana Hanocka. Text2Mesh: Text-driven neural stylization for meshes. ar Xiv:2112.03221, 2021. Ben Mildenhall, Pratul P. Srinivasan, Matthew Tancik, Jonathan T. Barron, Ravi Ramamoorthi, and Ren Ng. Ne RF: Representing scenes as neural radiance fields for view synthesis. ECCV, 2020. Ben Mildenhall, Dor Verbin, Pratul P. Srinivasan, Peter Hedman, Ricardo Martin-Brualla, and Jonathan T. Barron. Multi Ne RF: A Code Release for Mip-Ne RF 360, Ref-Ne RF, and Raw Ne RF, 2022. URL https://github.com/google-research/multinerf. Alexander Mordvintsev, Christopher Olah, and Mike Tyka. Inceptionism: Going deeper into neural networks, 2015. URL https://research.googleblog.com/2015/06/ inceptionism-going-deeper-into-neural.html. Alexander Mordvintsev, Nicola Pezzotti, Ludwig Schubert, and Chris Olah. Differentiable image parameterizations. Distill, 2018. doi: 10.23915/distill.00012. Eric Nalisnick, Akihiro Matsukawa, Yee Whye Teh, Dilan Gorur, and Balaji Lakshminarayanan. Do deep generative models know what they don t know?, 2018. URL https://arxiv.org/abs/ 1810.09136. Thu Nguyen-Phuoc, Chuan Li, Lucas Theis, Christian Richardt, and Yong-Liang Yang. Holo GAN: Unsupervised learning of 3d representations from natural images. ICCV, 2019. Alex Nichol, Prafulla Dhariwal, Aditya Ramesh, Pranav Shyam, Pamela Mishkin, Bob Mc Grew, Ilya Sutskever, and Mark Chen. Glide: Towards photorealistic image generation and editing with text-guided diffusion models. In ICML, 2022. Roy Or-El, Xuan Luo, Mengyi Shan, Eli Shechtman, Jeong Joon Park, and Ira Kemelmacher Shlizerman. Style SDF: High-Resolution 3D-Consistent Image and Geometry Generation. CVPR, 2022. Xingang Pan, Ayush Tewari, Lingjie Liu, and Christian Theobalt. Gan2x: Non-lambertian inverse rendering of image gans. 3DV, 2022. Dong Huk Park, Samaneh Azadi, Xihui Liu, Trevor Darrell, and Anna Rohrbach. Benchmark for compositional text-to-image synthesis. In Neur IPS Datasets and Benchmarks, 2021a. Keunhong Park, Utkarsh Sinha, Jonathan T Barron, Sofien Bouaziz, Dan B Goldman, Steven M Seitz, and Ricardo Martin-Brualla. Nerfies: Deformable neural radiance fields. ICCV, 2021b. Wei Ping, Kainan Peng, and Jitong Chen. Clari Net: Parallel wave generation in end-to-end text-tospeech. ICLR, 2019. Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, et al. Learning transferable visual models from natural language supervision. ICML, 2021. Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J. Liu. Exploring the limits of transfer learning with a unified text-to-text transformer. Journal of Machine Learning Research, 2020. Ravi Ramamoorthi and Pat Hanrahan. A signal-processing framework for inverse rendering. SIGGRAPH, 2001. 
Aditya Ramesh, Mikhail Pavlov, Gabriel Goh, Scott Gray, Chelsea Voss, Alec Radford, Mark Chen, and Ilya Sutskever. Zero-shot text-to-image generation. ICML, 2021. Aditya Ramesh, Prafulla Dhariwal, Alex Nichol, Casey Chu, and Mark Chen. Hierarchical textconditional image generation with clip latents, 2022. URL https://arxiv.org/abs/2204. 06125. Published as a conference paper at ICLR 2023 Herbert E. Robbins. An Empirical Bayes Approach to Statistics. Springer New York, 1992. Geoffrey Roeder, Yuhuai Wu, and David Duvenaud. Sticking the landing: Simple, lower-variance gradient estimators for variational inference, 2017. URL https://arxiv.org/abs/1703.09194. Chitwan Saharia, William Chan, Huiwen Chang, Chris A. Lee, Jonathan Ho, Tim Salimans, David J. Fleet, and Mohammad Norouzi. Palette: Image-to-image diffusion models, 2021a. URL https: //arxiv.org/abs/2111.05826. Chitwan Saharia, Jonathan Ho, William Chan, Tim Salimans, David J. Fleet, and Mohammad Norouzi. Image super-resolution via iterative refinement, 2021b. URL https://arxiv.org/abs/2104. 07636. Chitwan Saharia, William Chan, Saurabh Saxena, Lala Li, Jay Whang, Emily Denton, Seyed Kamyar Seyed Ghasemipour, Burcu Karagol Ayan, S. Sara Mahdavi, Rapha Gontijo Lopes, Tim Salimans, Jonathan Ho, David J Fleet, and Mohammad Norouzi. Photorealistic text-to-image diffusion models with deep language understanding. ar Xiv:2205.11487, 2022. Aditya Sanghi, Hang Chu, Joseph G Lambourne, Ye Wang, Chin-Yi Cheng, and Marco Fumero. CLIP-Forge: Towards zero-shot text-to-shape generation. CVPR, 2022. Christoph Schuhmann, Romain Beaumont, Cade W Gordon, Ross Wightman, mehdi cherti, Theo Coombes, Aarush Katta, Clayton Mullis, Patrick Schramowski, Srivatsa R Kundurthy, Katherine Crowson, Richard Vencu, Ludwig Schmidt, Robert Kaczmarczyk, and Jenia Jitsev. LAION-5b: An open large-scale dataset for training next generation image-text models. Neur IPS Datasets and Benchmarks Track, 2022. Katja Schwarz, Yiyi Liao, Michael Niemeyer, and Andreas Geiger. GRAF: Generative radiance fields for 3d-aware image synthesis. Neur IPS, 2020. Jascha Sohl-Dickstein, Eric Weiss, Niru Maheswaranathan, and Surya Ganguli. Deep unsupervised learning using nonequilibrium thermodynamics. ICML, 2015. Jiaming Song, Chenlin Meng, and Stefano Ermon. Denoising diffusion implicit models. Co RR, abs/2010.02502, 2020. URL https://arxiv.org/abs/2010.02502. Yang Song and Stefano Ermon. Generative modeling by estimating gradients of the data distribution. Neur IPS, 2019. Yang Song, Jascha Sohl-Dickstein, Diederik P Kingma, Abhishek Kumar, Stefano Ermon, and Ben Poole. Score-based generative modeling through stochastic differential equations. ICLR, 2021. Pratul P Srinivasan, Boyang Deng, Xiuming Zhang, Matthew Tancik, Ben Mildenhall, and Jonathan T Barron. Ne RV: Neural reflectance and visibility fields for relighting and view synthesis. CVPR, 2021. Ayush Tewari, Justus Thies, Ben Mildenhall, Pratul Srinivasan, Edgar Tretschk, W Yifan, Christoph Lassner, Vincent Sitzmann, Ricardo Martin-Brualla, Stephen Lombardi, et al. Advances in neural rendering. Computer Graphics Forum, 2022. A aron van den Oord, Yazhe Li, Igor Babuschkin, Karen Simonyan, Oriol Vinyals, Koray Kavukcuoglu, George van den Driessche, Edward Lockhart, Luis C. Cobo, Florian Stimberg, Norman Casagrande, Dominik Grewe, Seb Noury, Sander Dieleman, Erich Elsen, Nal Kalchbrenner, Heiga Zen, Alex Graves, Helen King, Tom Walters, Dan Belov, and Demis Hassabis. Parallel Wave Net: Fast high-fidelity speech synthesis. ICML, 2018. 
Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. Attention is all you need. NeurIPS, 2017.
Dor Verbin, Peter Hedman, Ben Mildenhall, Todd Zickler, Jonathan T. Barron, and Pratul P. Srinivasan. Ref-NeRF: Structured view-dependent appearance for neural radiance fields. CVPR, 2022.
Pascal Vincent. A connection between score matching and denoising autoencoders. Neural Computation, 2011.
Can Wang, Menglei Chai, Mingming He, Dongdong Chen, and Jing Liao. CLIP-NeRF: Text-and-image driven manipulation of neural radiance fields. CVPR, 2022.
Jiajun Wu, Chengkai Zhang, Tianfan Xue, William T Freeman, and Joshua B Tenenbaum. Learning a probabilistic latent space of object shapes via 3D generative-adversarial modeling. NeurIPS, 2016.
Guandao Yang, Xun Huang, Zekun Hao, Ming-Yu Liu, Serge Belongie, and Bharath Hariharan. PointFlow: 3D point cloud generation with continuous normalizing flows. ICCV, 2019.
Lior Yariv, Yoni Kasten, Dror Moran, Meirav Galun, Matan Atzmon, Basri Ronen, and Yaron Lipman. Multiview neural surface reconstruction by disentangling geometry and appearance. NeurIPS, 2020.
Jiahui Yu, Yuanzhong Xu, Jing Yu Koh, Thang Luong, Gunjan Baid, Zirui Wang, Vijay Vasudevan, Alexander Ku, Yinfei Yang, Burcu Karagol Ayan, Ben Hutchinson, Wei Han, Zarana Parekh, Xin Li, Han Zhang, Jason Baldridge, and Yonghui Wu. Scaling autoregressive models for content-rich text-to-image generation. arXiv:2206.10789, 2022.
Linqi Zhou, Yilun Du, and Jiajun Wu. 3D shape generation and completion through point-voxel diffusion. ICCV, 2021.

A.1 PSEUDOCODE FOR ANCESTRAL SAMPLING AND OUR SCORE DISTILLATION SAMPLING

    z_t = random.normal(img_shape)
    for t in linspace(tmax, tmin, nstep):
        epshat_t = diffusion_model.epshat(z_t, y, t)   # Score function evaluation.
        if t > tmin:
            eps = random.normal(img_shape)
            z_t = ddpm_update(z_t, epshat_t, eps)      # 1 iteration, decreases noise level.
    x = diffusion_model.xhat(z_t, epshat_t, tmin)      # Tweedie's formula: denoise the last step.
    return x

Figure 7: Pseudocode for ancestral sampling from DDPM, where y is the optional conditioning signal, e.g. a caption. Typically, tmax = 1 and tmin = 1/nstep. Timestep t monotonically decreases.

    params = generator.init()
    opt_state = optimizer.init(params)
    diffusion_model = diffusion.load_model()
    for nstep in iterations:
        t = random.uniform(0., 1.)
        alpha_t, sigma_t = diffusion_model.get_coeffs(t)
        eps = random.normal(img_shape)
        x = generator(params, ...)                     # Get an image observation.
        z_t = alpha_t * x + sigma_t * eps              # Diffuse observation.
        epshat_t = diffusion_model.epshat(z_t, y, t)   # Score function evaluation.
        g = grad(weight(t) * dot(stopgradient[epshat_t - eps], x), params)
        params, opt_state = optimizer.update(g, opt_state)  # Update params with optimizer.
    return params

Figure 8: Pseudocode for Score Distillation Sampling with an application-specific generator that defines a differentiable mapping from parameters to images. The gradient g is computed without backpropagating through the diffusion model's U-Net. We used the stopgradient operator to express the loss, but the parameter update can also be easily computed with an explicit VJP: g = matmul(weight(t) * (epshat_t - eps), grad(x, params)).
A.2 NERF DETAILS AND TRAINING HYPERPARAMETERS

Our model builds upon mip-NeRF 360 (Barron et al., 2022), starting from the publicly available implementation (Mildenhall et al., 2022), which is an improved version of NeRF (Mildenhall et al., 2020). The main modification this model makes to NeRF is in how 3D point information is passed to the NeRF MLP. In NeRF, each 3D input point is mapped to a higher-dimensional space using a sinusoidal positional encoding function (Vaswani et al., 2017). In mip-NeRF, this is replaced by an integrated positional encoding that accounts for the width of the ray being rendered (based on its pixel footprint in the image plane) and the length of each interval [di, di+1] sampled along the ray (Barron et al., 2021). This allows each interval along a ray to be represented as a Gaussian distribution with mean µ and covariance matrix Σ that approximates the interval's 3D volume.

Mip-NeRF covariance annealing. As in mip-NeRF, each mean µ is the 3D coordinate of the center of the ray interval, but unlike mip-NeRF we do not use a covariance derived from camera geometry, and instead define each Σ as:

    Σ = λΣ² I3,    (8)

where λΣ is a scale parameter that is linearly annealed from a large value to a small value during training. Representative settings are 5 × 10⁻² and 2 × 10⁻³ for the initial and final values of λΣ, linearly annealed over the first 5k steps of optimization (out of 15k total). This coarse-to-fine annealing of a scale parameter has a similar effect as the annealing used by Park et al. (2021b), but uses integrated positional encoding instead of traditional positional encoding. The underlying sinusoidal positional encoding function uses frequencies 2⁰, 2¹, ..., 2^(L−1), where we set L = 8.

MLP architecture changes. Our NeRF MLP consists of 5 ResNet blocks (He et al., 2016) with 128 hidden units, Swish/SiLU activations (Hendrycks & Gimpel, 2016), and layer normalization (Ba et al., 2016) between blocks. We use an exp activation to produce density τ and a sigmoid activation to produce RGB albedo ρ.

Environment Map MLP. We use an additional MLP to create an environment map, allowing the bounded NeRF MLP to be rendered on a contextually-relevant and optimized background. This MLP takes a positionally-encoded ray direction as input, and outputs an RGB color. We use positional encodings with frequencies 2⁰, ..., 2⁴, and an MLP with 3 hidden layers of 64 units. The output of this MLP is passed through a sigmoid activation to produce RGB values between 0 and 1.

Shading hyperparameters. For the first 1k steps of optimization we set the ambient light color ℓa to 1 and the diffuse light color ℓρ to 0, which effectively disables diffuse shading. For the remaining steps we set ℓa = [0.1, 0.1, 0.1] and ℓρ = [0.9, 0.9, 0.9] with probability 0.75, otherwise ℓa = 1, ℓρ = 0, i.e. we use diffuse shading 75% of the time. When shading is on (ℓρ > 0), we choose textureless shading (ρ = 1) with probability 0.5.

Spatial density bias. To aid in the early stages of optimization, we add a small blob of density around the origin to the output of the MLP. This helps focus scene content at the center of the 3D coordinate space, rather than directly next to the sampled cameras. We use a Gaussian PDF to parameterize the added density:

    τinit(µ) = λτ exp(−‖µ‖² / (2στ²)).

Representative settings are λτ = 5 for the scale parameter and στ = 0.2 for the width parameter. This density is added to the τ output of the NeRF MLP.
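A minimal sketch of two of the schedules above, the linearly annealed positional-encoding scale λΣ and the Gaussian density blob added to the MLP's τ output; the values follow the representative settings in the text, while the function names are ours.

```python
import numpy as np

def annealed_cov_scale(step, anneal_steps=5000, lam_init=5e-2, lam_final=2e-3):
    """Linearly anneal the integrated-positional-encoding scale lambda_Sigma from Eqn. 8."""
    frac = np.clip(step / anneal_steps, 0.0, 1.0)
    return (1.0 - frac) * lam_init + frac * lam_final

def density_bias(mu, lam_tau=5.0, sigma_tau=0.2):
    """Gaussian blob of density added around the origin to the MLP's tau output.

    mu: (..., 3) array of sample positions along rays.
    """
    sq_norm = np.sum(mu ** 2, axis=-1)
    return lam_tau * np.exp(-sq_norm / (2.0 * sigma_tau ** 2))

# Example: annealed scale partway through training, and the bias at two points.
print(annealed_cov_scale(step=1000))                               # between 5e-2 and 2e-3
print(density_bias(np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0]])))  # large at the origin, tiny away from it
```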
Additional camera and light sampling details. Uniformly sampling camera elevation ϕcam in angular space does not produce uniform samples over the surface of the sphere; the area around the pole is oversampled. We found this bias to be helpful in practice, so we sample ϕcam from this biased distribution with probability 0.5, and otherwise sample from a true uniform-in-area distribution on a half-sphere. The sampled camera position is perturbed by a small uniform offset drawn from U(−0.1, 0.1)³. The look-at point is sampled from N(0, 0.2I) and the default up vector is perturbed by noise sampled from N(0, 0.02I). This noise acts as an additional augmentation, and ensures a wider diversity of viewpoints is seen during training. We separately sample the direction and norm of the light position vector ℓ. To sample the direction, we sample from N(pcam, I), where pcam is the camera position (this ensures that the point light usually ends up on the same side of the object as the camera). The norm ‖ℓ‖ is sampled from U(0.8, 1.5), while ‖pcam‖ ∼ U(1.0, 1.5).

Regularizer hyperparameters. We use the orientation loss proposed by Ref-NeRF (Verbin et al., 2022) to encourage normal vectors of the density field to face toward the camera when they are visible (so that the camera does not observe geometry that appears to face backwards when shaded). We place a stop-gradient on the rendering weights wi, which helps prevent unintended local minima where the generated object shrinks or disappears:

    Lorient = Σi stop_grad(wi) max(0, ni · v)²,    (10)

where v is the direction of the ray (the viewing direction). We also apply a small regularization to the accumulated alpha value (opacity) along each ray: Lopacity = √((Σi wi)² + 0.01). This discourages optimization from unnecessarily filling in empty space, and improves foreground/background separation.

For the orientation loss Lorient, we find reasonable weights to lie in [10⁻³, 10⁻¹]. If the orientation loss weight is too high, surfaces become oversmoothed. In most experiments, we set the weight to 10⁻². This weight is annealed in, starting from 10⁻⁴, over the first 5k (out of 15k) steps. For the accumulated alpha loss Lopacity, we find reasonable weights to lie in [10⁻³, 5 × 10⁻³].

View-dependent prompting. We interpolate between front/side/back view prompt augmentations based on which quadrant contains the sampled azimuth θcam. We experimented with different ways of interpolating the text embeddings, but found that simply taking the text embedding closest to the sampled azimuth worked well.

Optimizer. We use Distributed Shampoo (Anil et al., 2020) with β1 = 0.9, β2 = 0.9, exponent_override = 2, block_size = 128, graft_type = SQRT_N, ϵ = 10⁻⁶, and a linear warmup of the learning rate over 3000 steps from 10⁻⁹ to 10⁻⁴, followed by cosine decay down to 10⁻⁶. We found this long warmup period to be helpful for improving the coherence of generated geometry.
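A minimal per-ray sketch of the two regularizers above (the orientation loss in Eqn. 10 and the opacity penalty); treating the compositing weights as constants inside `orientation_loss` mirrors the stop-gradient described in the text, and all toy values are illustrative.

```python
import numpy as np

def orientation_loss(render_weights, normals, view_dir):
    """Eqn. 10: penalize visible normals that face away from the camera.

    render_weights: (K,) compositing weights w_i along a ray, treated as constants
        (stop-gradient) so the object is not shrunk just to reduce this loss.
    normals: (K, 3) unit normals from density gradients.
    view_dir: (3,) unit ray direction v.
    """
    backfacing = np.maximum(0.0, normals @ view_dir)     # n_i . v > 0 means facing away
    return float(np.sum(render_weights * backfacing ** 2))

def opacity_loss(render_weights):
    """Penalty on accumulated alpha along a ray: sqrt((sum_i w_i)^2 + 0.01)."""
    acc = np.sum(render_weights)
    return float(np.sqrt(acc ** 2 + 0.01))

# Toy usage for a single ray with 4 samples.
w = np.array([0.1, 0.4, 0.3, 0.1])
n = np.array([[0, 0, 1], [0, 0, 1], [0.6, 0, 0.8], [0, 1, 0]], dtype=float)
v = np.array([0.0, 0.0, 1.0])
print(orientation_loss(w, n, v), opacity_loss(w))
```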
A.3 EXPERIMENTAL SETUP

Our computation of R-Precision differs slightly from baselines. As mentioned, CLIP-based text-to-3D systems are prone to overfitting the evaluation CLIP R-Precision metric, since the model used for training is similar to the evaluation model. To minimize this problem, Dream Fields (Jain et al., 2022) and CLIP-Mesh (Khalid et al., 2022) evaluate renderings at a single held-out view at a 45° elevation, higher than is seen during training (maximum 30°). DreamFusion evaluates at 30° since it is not prone to this issue, but averages the metric over multiple azimuths to reduce variance. In our main results in Table 1, we evaluate all captions with 2 generation seeds unless otherwise noted.

A.4 DERIVING THE SCORE DISTILLATION SAMPLING LOSS AND GRADIENTS

The score distillation sampling loss LSDS presented in Eqn. 4 was inspired by work on probability density distillation (Huang et al., 2019; Ping et al., 2019; van den Oord et al., 2018). We use this loss to find modes of the score functions that are present across all noise levels in the diffusion process. Here we show how the gradient of this loss leads to the same update as optimizing the training loss LDiff, but without the term corresponding to the Jacobian of the diffusion U-Net. First, we consider gradients with respect to a single KL term:

    KL(q(zt | x = g(θ)) ‖ pϕ(zt | y)) = E_ϵ[ log q(zt | x = g(θ)) − log pϕ(zt | y) ],    (11)
    ∇θ KL(q(zt | x = g(θ)) ‖ pϕ(zt | y)) = E_ϵ[ ∇θ log q(zt | x = g(θ)) − ∇θ log pϕ(zt | y) ],    (12)

where we refer to the first gradient term as (A) and the second as (B). The second term (B) can be related to ϵ̂ by the chain rule, relying on sϕ(zt | y) ≜ ∇zt log pϕ(zt | y):

    ∇θ log pϕ(zt | y) = sϕ(zt | y) ∂zt/∂θ = αt sϕ(zt | y) ∂x/∂θ = −(αt/σt) ϵ̂ϕ(zt | y) ∂x/∂θ.    (13)

The first term (A) is the gradient of the entropy of the forward process with respect to the mean parameter, holding the variance fixed. Because of the fixed variance, the entropy is constant for a given t and the total gradient of (A) is 0. However, we can still write out its gradient in terms of the score function (the gradient of the log probability with respect to the parameters) and the path derivative (the gradient of the log probability with respect to the sample):

    ∇θ log q(zt | x) = ( ∂log q(zt | x)/∂x [parameter score] + ∂log q(zt | x)/∂x [path derivative] ) ∂x/∂θ = 0,    (14)

where the first partial derivative is taken through the parameters of q and the second through the reparameterized sample zt. Sticking-the-landing (Roeder et al., 2017) shows that keeping the path derivative gradient while discarding the score function gradient can lead to reduced variance, as the path derivative term can be correlated with other terms in the loss. Here, the other term (B) corresponds to a prediction of ϵ, which is certainly correlated with the RHS of Equation 14. Putting these together, we can use a sticking-the-landing-style gradient of our loss by thinking of ϵ as a control variate for ϵ̂:

    ∇θ LSDS = E_{t, zt|x}[ w(t) (σt/αt) ∇θ KL(q(zt | x = g(θ)) ‖ pϕ(zt | y)) ]    (15)
            = E_{t, ϵ}[ w(t) (ϵ̂ϕ(zt | y) − ϵ) ∂x/∂θ ].

In practice, we find that including ϵ in the gradient leads to lower-variance gradients that speed up optimization and can produce better final results.

In related work, Graikos et al. (2022) also sample diffusion models by optimization, thereby allowing parameterized samples. Their divergence KL(h(x) ‖ pϕ(x | y)) reduces to the loss E_{ϵ,t}[ ‖ϵ − ϵ̂θ(zt | y; t)‖₂² ] − log c(x, y). The squared error requires costly backpropagation through the diffusion model ϵ̂θ, unlike SDS. DDPM-PnP also uses an auxiliary classifier c, while we use CFG. A few other works have updates resembling Score Distillation Sampling for different applications. Gradients of the entropy of an implicit model have been estimated with an amortized score model at a single noise level (Lim et al., 2020), though that work does not use our control variate based on subtracting the noise from ϵ̂. SDS could also ease optimization by using multiple noise levels. GAN-like amortized samplers can be learned by minimizing the Stein discrepancy (Hu et al., 2018; Grathwohl et al., 2020), where the optimal critic resembles the difference of scores in our loss (Eqn. 3).
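To illustrate the control-variate argument above, the sketch below compares single-sample SDS update directions with and without subtracting the injected noise ϵ; the closed-form denoiser here (exact for a point-mass data distribution at `target`) is an assumption of this toy, not the paper's model, but it shows the same mean update with much lower variance when ϵ is subtracted.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 16
x = rng.normal(size=d)       # current "render"
target = np.zeros(d)         # toy data distribution: a point mass at the origin

def eps_hat(z_t, alpha_t, sigma_t):
    """Exact optimal noise prediction when all data mass sits at `target`."""
    return (z_t - alpha_t * target) / sigma_t

def sds_direction(subtract_eps, n_samples=10000, t=0.5):
    alpha_t, sigma_t = np.sqrt(1 - t), np.sqrt(t)
    dirs = []
    for _ in range(n_samples):
        eps = rng.normal(size=d)
        z_t = alpha_t * x + sigma_t * eps
        g = eps_hat(z_t, alpha_t, sigma_t)
        if subtract_eps:
            g = g - eps      # the control variate used by SDS
        dirs.append(g)
    dirs = np.stack(dirs)
    return dirs.mean(axis=0), dirs.var(axis=0).mean()

mean_with, var_with = sds_direction(subtract_eps=True)
mean_without, var_without = sds_direction(subtract_eps=False)
print("same mean direction:", np.allclose(mean_with, mean_without, atol=0.05))
print("variance with / without control variate:", var_with, var_without)
```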
A.5 IMPACT OF SEED AND GUIDANCE WEIGHT

We find that large guidance weights are important for learning high-quality 3D models. Unlike image synthesis models that use guidance weights ω ∈ [5, 30], DreamFusion uses weights up to ω = 100, and works at even larger guidance scales without severe artifacts. This may be due to the constrained nature of the optimization problem: colors output by our MLP are bounded to [0, 1] by a sigmoid nonlinearity, whereas image samplers need clipping. We also find that our method does not yield large amounts of diversity across random seeds. This is likely due to the mode-seeking properties of LSDS, combined with the fact that at high noise levels the smoothed densities may not have many distinct modes. Understanding the interplay between guidance strength, diversity, and loss functions remains an important open direction for future research.

A.6 EXTENDED QUALITATIVE COMPARISON WITH PRIOR WORK

In Figure 11, we compare DreamFusion to Dream Fields on several prompts from the object-centric MS-COCO validation set.

Figure 9: A 2D sweep over guidance weights (25, 50, 75, 100, 250) and random seeds for two different prompts ("a zoomed out DSLR photo of a robot couple fine dining" and "a DSLR photo of a chimpanzee dressed like Henry VIII king of England").

Figure 10: Comparison of the CLIP loss from Dream Fields (top, CLIP ViT-B/16) and our SDS loss (bottom, Imagen). Prompts shown: "A blue jug in a garden filled with mud," "A pile of crab is seasoned and well cooked," "there is a very colorful kite that is in the air," "A couple of snowmen have been built in suburban backyards after a recently fallen snow," and "The rotted out bed of a truck left in the woods." For a fair comparison of the loss functions in isolation, we use all of our proposed methods including view-dependent prompts, shading, optimizing untextured renderings, and regularizers, with the same 3D NeRF representation. Qualitatively, 3D scenes generated with SDS (ours) are much more coherent than scenes generated with a CLIP loss.

Figure 11: Comparison between DreamFusion and Dream Fields (Jain et al., 2022) on object-centric MS-COCO prompts. Odd rows are results from DreamFusion, and even rows are results from Dream Fields. DreamFusion exhibits increased global coherence and quality.