# Causal Discovery in Physical Systems from Videos

Yunzhu Li (MIT CSAIL, liyunzhu@mit.edu), Antonio Torralba (MIT CSAIL, torralba@csail.mit.edu), Animashree Anandkumar (Caltech, NVIDIA, anima@caltech.edu), Dieter Fox (University of Washington, NVIDIA, fox@cs.washington.edu), Animesh Garg (University of Toronto, Vector Institute, NVIDIA, garg@cs.toronto.edu)

Abstract

Causal discovery is at the core of human cognition. It enables us to reason about the environment and make counterfactual predictions about unseen scenarios that can vastly differ from our previous experiences. We consider the task of causal discovery from videos in an end-to-end fashion without supervision on the ground-truth graph structure. In particular, our goal is to discover the structural dependencies among environmental and object variables: inferring the type and strength of interactions that have a causal effect on the behavior of the dynamical system. Our model consists of (a) a perception module that extracts a semantically meaningful and temporally consistent keypoint representation from images, (b) an inference module for determining the graph distribution induced by the detected keypoints, and (c) a dynamics module that can predict the future by conditioning on the inferred graph. We assume access to different configurations and environmental conditions, i.e., data from unknown interventions on the underlying system; thus, we can hope to discover the correct underlying causal graph without explicit interventions. We evaluate our method in a planar multi-body interaction environment and in scenarios involving fabrics of different shapes, such as shirts and pants. Experiments demonstrate that our model can correctly identify the interactions from a short sequence of images and make long-term future predictions. The causal structure assumed by the model also allows it to make counterfactual predictions and to extrapolate to systems with unseen interaction graphs or graphs of different sizes. Please refer to our project page for additional results: https://yunzhuli.github.io/V-CDN/.

1 Introduction

Causal understanding of the world around us is part of the bedrock of intelligence. This ability enables counterfactual reasoning, which often distinguishes algorithmic models from intelligent behavior in humans. The ability to discover latent causal mechanisms from data poses an important technical question towards building intelligent and interactive systems [1-3]. For instance, Figure 1 shows an example of a multi-body system. While the images may convey the identity and position of the balls, the structural causal mechanism is latent. Each pair of balls is connected through an edge that may be a spring, a rigid rod, or absent. Further, each edge may have a set of hidden confounders, such as the rest length of a spring or the length of a rigid rod, that causally affect the physical interaction behavior. The underlying causal structure and the governing functional mechanism may not be apparent if observations, such as images, are only implicit measurements of the ground-truth variables [4]. Furthermore, they can vary across different configurations and scenarios within a domain. Hence, we need few-shot causal discovery algorithms that operate purely on image data.
Figure 1: Causal discovery in physical systems from videos. The left figure shows balls, connected by invisible physical relations (shown in grey), moving around. Hidden confounding variables, such as the edge type and edge parameters, have a causal effect on the behavior of the underlying system. We humans can observe the balls, infer the existence of edges between them and the variables on those edges, and predict the future. Similarly, in the cloth environment shown on the right, we can find a reduced-order representation by placing temporally consistent keypoints on the images and determining the causal relationships between them to reflect the cloth's topology.

In a special case, where the entities are all disconnected and the only interactions are of collision type, a number of models in the recent literature employ an object-centric formulation to directly predict the future from images [5-7]. In such cases, model discovery may not even be necessary given these solutions. However, these associative models crumble in the face of more complex stationary underlying generative structures, such as different types of latent edges and edge mechanisms [8]. Moreover, they are insufficient to capture novel generative structures and make counterfactual predictions at test time.

In this work, we aim to discover the structural causal model (SCM) to predict the future and reason over counterfactuals. To recover an SCM from images alone, we need to first learn a compact state representation, then infer a causal graph among these variables as well as identify hidden confounders, and finally learn the functional mechanism of the dynamics. This is a particularly challenging task in that we only have images and do not have explicit knowledge of the node variables. Furthermore, we assume access neither to the ground-truth causal graph, nor to the hidden confounders, nor to the dynamics that characterize the effect of the physical interactions. To tackle this end-to-end causal discovery problem in an unsupervised manner, we learn from datasets that contain episodes generated from different causal graphs but with a shared dynamics model.

Summary of results. The main contributions of this work lie in the one-shot discovery of unseen causal mechanisms in new environments from partially observed visual data in a continuous state space. This entails jointly performing model class estimation, parameter inference, and thereby building a predictive model for new latent structures at test time in a meta-learning framework. The proposed Visual Causal Discovery Network (V-CDN) consists of three modules for visual perception, structure inference, and dynamics prediction (Figure 2). Specifically, we train the perception module, which builds upon [9], to extract unsupervised keypoints from the images and thereby enable node discovery. The inference module then takes the predicted keypoints and, using graph neural networks, infers the exogenous variables that govern the interactions between each pair of keypoints. Conditioned on the inferred graph, the dynamics module learns to predict the future movements of the keypoints. We consider a variety of configurations and scenarios, which gives us different combinations of variables, i.e., data from unknown interventions on the underlying system. Thus, we can hope to discover the correct underlying causal graph without explicit interventions.
Experiments show that our proposed model is robust to input noise and works well on multi-body interactions with varying degrees of complexity. Notably, our method enables counterfactual predictions and extrapolates to cases with a variable number of objects and to scenarios where the underlying interaction graphs have never been seen before. Experiments in a fabric environment also demonstrate the generalization ability of our method: the same model can handle fabrics of different types and shapes, accurately identifying the dependency structure and modeling the underlying dynamics even when the state variables are a reduced-order, keypoint-based representation of the original system.

2 Visual Causal Discovery in Physical Systems: V-CDN

Figure 2: Model overview. Visual Causal Discovery Network (V-CDN) consists of three components: (a) a perception module that processes the images and extracts unsupervised keypoints as the state representation, (b) an inference module that observes the movements of the keypoints and determines the existence of the causal relations as well as the associated hidden confounders, and (c) a dynamics module that predicts the future by conditioning on the current state and the inferred causal summary graph.

In this section, we present the details of our model, which extracts structured representations from videos, discovers the causal relationships, infers the hidden confounding variables on the directed edges, and then predicts the future. Our model learns directly from raw videos and recovers the underlying causal graph without any ground-truth supervision.

Problem formulation. We consider a dataset of M trajectories observed from a latent generative dynamical system, where each datapoint is generated with unknown interventions on both the underlying causal graph structure and the parameters affecting the mechanism. The generative process of each episode follows a causal summary graph [2], $\mathcal{G}_m = (\mathcal{V}^{1:T}_m, \mathcal{E}_m)$, $m = 1, \ldots, M$, where $\mathcal{V}^{1:T}_m$ contains the subcomponents underlying the system at different time steps and $\mathcal{E}_m$, which we assume is invariant over time, denotes the causal relationships between the constituting components. Specifically, for each directed edge $(v_{m,i}, v_{m,j}) \in \mathcal{E}_m$, there are both discrete and continuous hidden confounders denoting the type and parameters of the relationship; these determine the computation of the underlying structural causal model (SCM) [10] and affect the behavior of the dynamical system. We further assume that the dynamical system contains no instantaneous edges and no edges that go back in time. Note that the causal summary graph may contain cycles, but when unrolled over time, the derived causal full time graph is a directed acyclic graph (DAG), as shown in Figure 2.

In this work, we consider the case where we only have access to the data in the form of image sequences, $\mathcal{I}_m = \{I^{1:T}_m\}$, without any knowledge of the ground-truth causal structure or the interventions being applied, where $I^t_m$ is an image of dimension $H \times W$ denoting the data we receive at time $t$ of episode $m$. The goal is to perform a one-shot recovery of the causal summary graph from a short sequence of images and simultaneously learn a shared dynamics model that operates on the identified graph to make counterfactual predictions into the future. This is a particularly challenging task, and our method serves as a first step towards tackling this problem in an end-to-end fashion using unsupervised intermediate keypoint representations.
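To make the notation above concrete, here is a minimal sketch in Python of one way the causal summary graph of a single episode could be represented. The class names (`Edge`, `CausalSummaryGraph`) and fields are hypothetical illustrations of the formulation, not part of the paper's implementation.

```python
from dataclasses import dataclass, field
from typing import List
import numpy as np

@dataclass
class Edge:
    """Directed edge j -> i of the causal summary graph (hypothetical container)."""
    i: int             # receiver keypoint index
    j: int             # sender keypoint index
    edge_type: int     # discrete confounder g^d_ij (0 = "null edge")
    params: np.ndarray # continuous confounder g^c_ij (e.g., rest length)

@dataclass
class CausalSummaryGraph:
    """One episode m: keypoints over T frames plus a time-invariant edge set."""
    keypoints: np.ndarray                 # shape (T, N, 2): 2-D coordinates V^{1:T}
    edges: List[Edge] = field(default_factory=list)

    def adjacency(self, num_keypoints: int) -> np.ndarray:
        """Return an N x N matrix of edge types (0 where no causal relation)."""
        A = np.zeros((num_keypoints, num_keypoints), dtype=int)
        for e in self.edges:
            A[e.i, e.j] = e.edge_type
        return A
```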
Overview of Visual Causal Discovery Network (V-CDN). We aim to find a temporally consistent (and possibly reduced-order) keypoint-based representation of the images using a perception module trained in an unsupervised way,

$\mathcal{V}^t_m = f^{\mathcal{V}}_\theta(I^t_m), \quad t = 1, \ldots, T,$  (1)

where the function $f^{\mathcal{V}}_\theta$, parameterized by $\theta$, takes raw images as input and outputs a set of keypoints in 2-D coordinates, $\mathcal{V}^t_m = \{o^t_{m,i} \mid o^t_{m,i} \in \mathbb{R}^2\}_{i=1}^{N}$, that reflect the constituting components in the system. Then, we use an inference module, $f^{\mathcal{E}}_\phi$, parameterized by $\phi$, that takes the sequence of detected keypoints as input and predicts the edge set $\mathcal{E}_m$,

$\mathcal{E}_m = f^{\mathcal{E}}_\phi(\mathcal{V}^{1:T}_m),$  (2)

where $\mathcal{E}_m = \{(o_{m,i}, o_{m,j}, g_{m,ij})\}$. Here $g_{m,ij}$ includes $g^d_{m,ij}$ and $g^c_{m,ij}$, denoting the latent discrete and continuous confounders associated with the directed edge from $j$ to $i$ in episode $m$. Together, $\mathcal{V}^{1:T}_m$ and $\mathcal{E}_m$ constitute our discovered causal summary graph, conditioned on which a dynamics module, $f^{\mathcal{D}}_\psi$, parameterized by $\psi$, aims to predict the state of the keypoints at time $T + 1$,

$\hat{\mathcal{V}}^{T+1}_m = f^{\mathcal{D}}_\psi(\mathcal{V}^{1:T}_m, \mathcal{E}_m).$  (3)

By iteratively applying $f^{\mathcal{D}}_\psi$, we are able to make long-term future predictions. The perception module $f^{\mathcal{V}}_\theta$, the inference module $f^{\mathcal{E}}_\phi$, and the dynamics module $f^{\mathcal{D}}_\psi$ are shared among all episodes in the dataset, which spans various causal graphs with different discrete and continuous hidden confounders. This enables one-shot adaptation to an unseen graph at test time and allows counterfactual predictions by intervening on the identified graph and rolling into the future using the dynamics module.

To train the system, we take an unsupervised keypoint detection algorithm [9] as our perception module and train it on the image set $\mathcal{I}$ to extract temporally consistent keypoints. The inference module and the dynamics module are trained together by minimizing the following objective:

$\sum_t \mathcal{L}\big(\mathcal{V}^{t+1}_m, f^{\mathcal{D}}_\psi(\mathcal{V}^{1:t}_m, \mathcal{E}_m)\big) + \lambda R(\mathcal{E}_m),$  (4)

where $R(\cdot)$ is a regularizer imposed on the identified graph, e.g., to encourage sparsity.

2.1 Unsupervised keypoint detection from videos

The perception module's task is to transform the images into a keypoint representation in an unsupervised way. In this work, we leverage the technique developed by Kulkarni et al. [9]. In particular, we use a reconstruction loss over the pixels to encourage the keypoints to disperse over the foreground of the image. During training, the module takes in a source image $I_{\text{src}}$ and a target image $I_{\text{tgt}}$ sampled from the dataset and passes them through a feature extractor $f^{\mathcal{V}}_\omega$ and a keypoint detector $f^{\mathcal{V}}_\theta$. The method then uses an operation called transport to construct a new feature map, $\Phi(I_{\text{src}}, I_{\text{tgt}})$, using a set of local features indicated by the detected keypoints. A refiner network takes in the feature map and generates the reconstruction $\hat{I}_{\text{tgt}}$. The module optimizes the parameters of the feature extractor, keypoint detector, and refiner by minimizing a pixel-wise L2 loss, $\mathcal{L}_{\text{rec}} = \|I_{\text{tgt}} - \hat{I}_{\text{tgt}}\|$, using stochastic gradient descent. By combining the keypoint-based bottleneck layer with the downstream reconstruction task, the model extracts temporally consistent keypoints spread over the foreground of the images. We denote the detected keypoints at time $t$ as $\mathcal{V}^t_m = f^{\mathcal{V}}_\theta(I^t_m)$, where $\mathcal{V}^t_m = \{o^t_{m,i} \mid o^t_{m,i} \in \mathbb{R}^2\}_{i=1}^{N}$.

2.2 Graph neural networks as the spatial encoder

We use graph neural networks as a building block to model the interactions between different keypoints and to generate object- and relation-centric embeddings. Both the inference and the dynamics modules use graph neural networks as a submodule to capture the underlying inductive bias. Specifically, for a set of $N$ keypoints, we construct a directed graph $\mathcal{G} = (\mathcal{V}, \mathcal{E})$, where the vertices $\mathcal{V} = \{o_i\}$ represent the information on the keypoints and the edges $\mathcal{E} = \{(o_i, o_j, g_{ij})\}$ represent the directed relation pointing to $i$ from $j$, with $g_{ij}$ denoting the associated edge attributes. We employ a graph neural network with a structure similar to Interaction Networks (IN) [11] as our spatial encoder, denoted as $\phi$, to generate the embeddings for the objects and the relations: $(\{h_i\}, \{h_{ij}\}) = \phi(\mathcal{V}, \mathcal{E})$.
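As a rough illustration of such an Interaction-Network-style spatial encoder, the sketch below performs one round of relation- and object-centric message passing over a fully connected keypoint graph in PyTorch. It is a minimal re-implementation of the general idea under assumed hidden sizes, not the released V-CDN model; the `SpatialEncoder` name and dimensions are our own choices.

```python
import torch
import torch.nn as nn

def mlp(in_dim, out_dim, hidden=128):
    return nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU(), nn.Linear(hidden, out_dim))

class SpatialEncoder(nn.Module):
    """One step of Interaction-Network-style message passing (hypothetical sizes)."""
    def __init__(self, node_dim=2, edge_attr_dim=0, hidden=128):
        super().__init__()
        self.edge_fn = mlp(2 * node_dim + edge_attr_dim, hidden)  # relation-centric embedding h_ij
        self.node_fn = mlp(node_dim + hidden, hidden)              # object-centric embedding h_i

    def forward(self, nodes, edge_index, edge_attr=None):
        # nodes: (N, node_dim) keypoint states; edge_index: (E, 2) rows of [receiver, sender]
        recv, send = edge_index[:, 0], edge_index[:, 1]
        edge_in = [nodes[recv], nodes[send]]
        if edge_attr is not None:
            edge_in.append(edge_attr)
        h_edge = self.edge_fn(torch.cat(edge_in, dim=-1))
        # aggregate incoming messages for each receiver node
        agg = torch.zeros(nodes.size(0), h_edge.size(-1), device=nodes.device)
        agg.index_add_(0, recv, h_edge)
        h_node = self.node_fn(torch.cat([nodes, agg], dim=-1))
        return h_node, h_edge

# usage on a fully connected graph of N = 5 keypoints at one frame
N = 5
nodes = torch.randn(N, 2)
edge_index = torch.tensor([[i, j] for i in range(N) for j in range(N) if i != j])
h_node, h_edge = SpatialEncoder()(nodes, edge_index)
```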
2.3 Inferring the directed edge set of the Causal Summary Graph

After we obtain the keypoints from the images, we use an inference module to discover the edge set of the causal summary graph and to infer the parameters associated with the directed edges. The inference module takes the detected keypoints over a small time window within the same episode as input and outputs a posterior distribution over the structure of the graph. More specifically, we denote the keypoint sequence as $\mathcal{V}^{1:T}_m = \{o^{1:T}_{m,i}\}_{i=1}^N$. Our goal is to predict the distribution of the edge set conditioned on the keypoint sequence using the parameterized inference function, $p_\phi(\mathcal{E}_m \mid \mathcal{V}^{1:T}_m) = f^{\mathcal{E}}_\phi(\mathcal{V}^{1:T}_m)$.

To achieve this, we first use a graph neural network, as discussed in Section 2.2, to propagate information spatially within each frame, which gives us node and edge embeddings for each keypoint at each frame. We then aggregate the embeddings over the temporal dimension for each node and edge using a 1-D convolutional neural network. Another graph neural network takes in the temporal aggregations and predicts a discrete distribution over the edge types, where the first edge type denotes the "null edge". Conditioned on a sample from this discrete distribution, the model then predicts the continuous edge parameters. The edge type and edge parameters together constitute the causal summary graph, which determines the existence and the actual mechanism of the interactions between the constituent components.

In particular, we first propagate the information spatially by feeding the keypoints through a graph neural network $\phi_{\text{enc}}$, which gives us node and edge embeddings at each time step,

$(\{h^t_{m,i}\}, \{h^t_{m,ij}\}) = \phi_{\text{enc}}(\mathcal{V}^t_m, \mathcal{E}_{\text{fc}}),$  (5)

where the edge set $\mathcal{E}_{\text{fc}}$ denotes a fully connected graph that contains an edge between each pair of keypoints, with the edge attributes set to zero. We then aggregate the information over the temporal dimension for each node and edge using 1-D convolutional neural networks (CNNs):

$\bar{h}_{m,i} = \text{CNN}_{\text{obj}}(h^{1:T}_{m,i}), \quad \bar{h}_{m,ij} = \text{CNN}_{\text{rel}}(h^{1:T}_{m,ij}),$  (6)

which allows our model to handle input sequences of variable lengths. Taking in the aggregated node and edge embeddings, we use another graph neural network, $\phi^d$, that only makes predictions over the edges, to predict the categorical distribution over the edge types:

$\{g^d_{m,ij}\} = \phi^d(\bar{\mathcal{V}}_m, \bar{\mathcal{E}}^d_m),$  (7)

where $\bar{\mathcal{V}}_m = \{\bar{h}_{m,i}\}_{i=1}^N$ and $\bar{\mathcal{E}}^d_m = \{(\bar{h}_{m,i}, \bar{h}_{m,j}, \bar{h}_{m,ij}) \mid 1 \le i, j \le N,\ i \ne j\}$. The output $\{g^d_{m,ij}\}$ represents the probability distribution over the type of each edge. When an edge is classified as the first type, i.e., when $g^d_{m,ij} = 1$, which we denote as the "null edge", it is removed from subsequent computation and no information passes through it. Sampling from this discrete distribution is straightforward, but we cannot backpropagate gradients through the sampling operation. To address this, we employ the Gumbel-Softmax technique [12, 13], a continuous approximation of the discrete distribution that yields biased gradients and makes end-to-end training possible.
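To illustrate this relaxation, the following sketch uses PyTorch's built-in `torch.nn.functional.gumbel_softmax` to draw hard one-hot edge-type samples in the forward pass while keeping the computation graph differentiable. The surrounding toy computation is an assumption for demonstration only, not the authors' code.

```python
import torch
import torch.nn.functional as F

# logits over K edge types for each of E directed edges (toy values)
E_edges, K = 20, 3                      # e.g., {null edge, spring, rigid rod}
logits = torch.randn(E_edges, K, requires_grad=True)

# hard=True returns one-hot samples in the forward pass, while the backward pass
# uses the continuous relaxation (straight-through estimator with biased gradients)
edge_type = F.gumbel_softmax(logits, tau=0.5, hard=True)    # (E_edges, K), one-hot rows

# toy downstream use: per-type messages gated by the sampled type; selecting the
# first type ("null edge") gates the message with a weight of zero
type_messages = torch.randn(E_edges, K, 128)
type_messages[:, 0] = 0.0                                    # a null edge carries nothing
messages = (edge_type.unsqueeze(-1) * type_messages).sum(dim=1)

# gradients flow back to the logits despite the discrete-looking sample
messages.sum().backward()
print(logits.grad.shape)                                     # torch.Size([20, 3])
```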
Conditioned on the inferred edge type $\{g^d_{m,ij}\}$, we would like to predict the continuous parameter on each of the edges. For this purpose, we construct another edge set $\bar{\mathcal{E}}^c_m = \{(\bar{h}_{m,i}, \bar{h}_{m,j}, \bar{h}_{m,ij}) \mid 1 \le i, j \le N,\ i \ne j,\ g^d_{m,ij} \ne 1\}$ and use a new graph neural network, $\phi^c$, to predict the continuous parameters:

$\{g^c_{m,ij}\} = \phi^c(\bar{\mathcal{V}}_m, \bar{\mathcal{E}}^c_m).$  (8)

We denote the resulting edge set as $\mathcal{E}_m = \{(o_{m,i}, o_{m,j}, g_{m,ij}) \mid 1 \le i, j \le N,\ i \ne j,\ g^d_{m,ij} \ne 1\}$, where $g_{m,ij} = (g^d_{m,ij}, g^c_{m,ij})$, indicating the topology of the causal summary graph together with the type and the continuous parameter of each edge effect. The inferred causal summary graph is then represented as $\mathcal{G}_m = (\mathcal{V}^{1:T}_m, \mathcal{E}_m)$.

2.4 Future prediction using the forward dynamics module

The dynamics module, $f^{\mathcal{D}}_\psi$, predicts the future movements of the keypoints by conditioning on the current state and the inferred causal graph: $p_\psi(\hat{\mathcal{V}}^{T+1}_m \mid \mathcal{V}^{1:T}_m, \mathcal{E}_m) = f^{\mathcal{D}}_\psi(\mathcal{V}^{1:T}_m, \mathcal{E}_m)$, where we instantiate $f^{\mathcal{D}}_\psi$ as a graph recurrent network, $\phi^{\text{dy}}_\psi$. Since we operate directly on the keypoints predicted by the perception module, the detected keypoints contain noise and introduce uncertainty about the actual locations. Hence, in practice, we represent the positions at future steps using a multivariate Gaussian distribution, predicting both the mean and the covariance matrix of the next state for each keypoint.

2.5 Optimizing the model

The perception module is trained independently using the reconstruction loss $\mathcal{L}_{\text{rec}}$. To train the inference module and the dynamics module jointly, we instantiate the objective function shown in Equation 4 by analogy to the ELBO objective [14]:

$\mathcal{L} = \mathbb{E}_{p_\phi(\mathcal{E}_m \mid \mathcal{V}^{1:T}_m)}\big[\log p_\psi(\hat{\mathcal{V}}^{T+1}_m \mid \mathcal{V}^{1:T}_m, \mathcal{E}_m)\big] - D_{\mathrm{KL}}\big(p_\phi(\mathcal{E}_m \mid \mathcal{V}^{1:T}_m) \,\|\, p_\psi(\mathcal{E}_m)\big).$  (9)

For the prior $p_\psi(\mathcal{E}_m)$, we assume that each edge is independent and use a factorized distribution over the edge types, $p_\psi(\mathcal{E}_m) = \prod_{ij} p_\psi(\mathcal{E}_{m,ij})$. The inference module and the dynamics module are then trained end-to-end using stochastic gradient descent to maximize the objective $\mathcal{L}$.
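For concreteness, a minimal sketch of how such an objective could be assembled is shown below; this is our illustration, not the released training code. The first term is a Gaussian negative log-likelihood on the predicted keypoints (simplified here to a diagonal covariance and omitting additive constants), and the second is the KL divergence between the inferred per-edge categorical distribution and a factorized prior. The tensor shapes and the uniform prior are assumptions.

```python
import torch
import torch.nn.functional as F

def vcdn_style_loss(pred_mean, pred_logvar, target, edge_logits, prior_probs):
    """
    pred_mean, pred_logvar: (N, 2) predicted keypoint Gaussians at time T+1
    target:                 (N, 2) detected keypoints at time T+1
    edge_logits:            (E, K) inferred edge-type logits (posterior)
    prior_probs:            (K,)   factorized prior over edge types
    """
    # Gaussian negative log-likelihood, i.e., -log p_psi(V^{T+1} | V^{1:T}, E),
    # with a diagonal covariance and constants dropped
    nll = 0.5 * (pred_logvar + (target - pred_mean) ** 2 / pred_logvar.exp()).sum()

    # KL(q(E | V^{1:T}) || p(E)), summed over independent edges
    q = F.softmax(edge_logits, dim=-1)
    log_q = F.log_softmax(edge_logits, dim=-1)
    kl = (q * (log_q - prior_probs.log())).sum()

    # minimizing nll + kl corresponds to maximizing the ELBO-style objective
    return nll + kl

# toy usage: 5 keypoints, 20 directed edges, 3 edge types, uniform prior
loss = vcdn_style_loss(
    pred_mean=torch.randn(5, 2), pred_logvar=torch.zeros(5, 2),
    target=torch.randn(5, 2), edge_logits=torch.randn(20, 3),
    prior_probs=torch.full((3,), 1.0 / 3),
)
print(loss.item())
```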
3 Experiments

The goal of our experimental evaluation is to answer the following questions: (1) Can the model perform one-shot discovery of the causal summary graph and identify the hidden confounders, including both discrete and continuous variables? (2) How well can the model extrapolate to graphs of sizes not seen during training? (3) How well can the learned model facilitate counterfactual prediction by intervening on the identified summary graph?

Environment. We study our model in two environments: one includes masses, connected by invisible physical constraints, moving around in a 2-D plane, and the other contains fabrics of various shapes to which we apply forces that deform and move them over time (Figure 3).

Multi-Body Interaction. There are 5 balls of different colors moving around. At the beginning of each episode, we sample the invisible physical relation between each pair of balls independently, giving us the ground-truth $\mathcal{E}_m$ that is fixed throughout the episode. For each pair of balls, there is a one-third probability each that they are unconnected, linked by a rigid rod, or linked by a spring. We also sample the continuous parameters for each existing edge and fix them within the episode, e.g., the length of the rigid relation or the rest length of the spring.

Fabric Manipulation. We set up fabrics of three different types: a shirt, pants, and a towel, and we also vary the shape of the fabrics, such as the length of the pant legs or the height and width of the towel (Figure 5). We apply forces on the contour of the fabric to deform it and move it around. Our goal is to produce a single model that can handle fabrics of different types and shapes, instead of training separate models for each of them.

3.1 Results on unsupervised keypoint detection

We employ the same architecture and training procedure described in [9] to train our perception module, $f^{\mathcal{V}}_\theta$. Figure 3 shows some qualitative results. Our perception module spreads the keypoints over the foreground of the image and consistently tracks the objects. Please refer to our project page for video illustrations.

Figure 3: Unsupervised keypoint detection. The first row shows the input images, and the second row shows an overlay between the predicted keypoints and the image. The perception module assigns keypoints to the foreground of the images and consistently tracks the objects over time across different frames.

3.2 Discovery of the Causal Summary Graph and the hidden confounders

The inference module, $f^{\mathcal{E}}_\phi$, takes in a short sequence of the detected keypoints, aims to discover whether there is a causal relation, i.e., a physical connection, between each pair of keypoints, and identifies the hidden confounders such as the edge type and the edge parameters. The dynamics module, $f^{\mathcal{D}}_\psi$, then conditions on the predicted graph for future prediction. The optimization procedure does not require any supervision on the attributes associated with the edges, which allows us to infer the hidden confounders in an unsupervised way.

In the Multi-Body environment, the perception module accurately tracks the locations of the balls, which allows us to perform a systematic evaluation of the model's performance by comparing its prediction with the ground-truth causal summary graph used to generate the episodes. Because we are working in an unsupervised regime, where the predicted edge type lives in a discrete latent space distinguishing between null edge, spring, and rigid relation, we need to find a global one-to-one mapping between the prediction, $\{g^d_m\}$, and the ground truth. We pick the mapping that gives the highest accuracy, under the constraint that the first type, through which no information passes in the subsequent dynamics prediction, always corresponds to the null edge. After the mapping, we evaluate the model's ability to predict the continuous confounder, $\{g^c_m\}$, by computing its correlation with the ground-truth physical parameters, such as the rest length of the spring connection.
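A small sketch of this matching step is given below (our illustration, not the authors' evaluation code): it enumerates the permutations of the non-null labels, keeps the first type pinned to the null edge, and returns the relabeling that maximizes accuracy.

```python
from itertools import permutations
import numpy as np

def best_label_mapping(pred_types, true_types, num_types=3):
    """Find the accuracy-maximizing one-to-one relabeling of predicted edge types.

    Type 0 ("null edge") is pinned, since it is the only type with a fixed meaning:
    no information passes through it during dynamics prediction.
    """
    pred_types, true_types = np.asarray(pred_types), np.asarray(true_types)
    best_acc, best_map = -1.0, None
    for perm in permutations(range(1, num_types)):      # permute non-null labels only
        mapping = np.array([0, *perm])                   # predicted label -> true label
        acc = np.mean(mapping[pred_types] == true_types)
        if acc > best_acc:
            best_acc, best_map = acc, mapping
    return best_map, best_acc

# toy usage: label 2 is predicted wherever the ground truth says 1, and vice versa
mapping, acc = best_label_mapping([0, 2, 2, 1, 0], [0, 1, 1, 2, 0])
print(mapping, acc)                                      # [0 2 1] 1.0
```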
The results are shown in Figure 4. As the model observes more frames, the classification accuracy increases and the uncertainty decreases, which matches the intuition that more observations from the environment yield a better estimate of the exogenous variables that govern the behavior of the system. We also compare with a baseline that is identical to our method except that it lacks the inference module. Our model significantly outperforms the baseline, indicating the importance of correctly modeling the causal mechanism (Figure 6 (d)).

Figure 4: Results on discovering the Causal Summary Graph. Panels: (a) accuracy on edge type {null edge, spring, rigid}; (b) entropy of the probability distribution over edge types; (c) scatter plot of the rest length of the spring relation; (d) scatter plot of the length of the rigid relation (predicted edge parameter vs. ground-truth hidden confounder). As shown in (a) and (b), the accuracy of edge-type classification increases as the inference module observes more frames, which also effectively decreases the uncertainty, calculated as the entropy of the predicted distribution. As exhibited in (c) and (d), there is a strong correlation between the inferred continuous variable and the ground-truth hidden confounder.

Figure 5 shows some qualitative results, where we include side-by-side comparisons between the identified causal summary graph and the ground truth. For the cloth environment, the keypoints on the fabrics act as a reduced-order representation of the original system, for which we do not know the ground-truth causal summary graph. We encode the action as a 6-dimensional vector: the first three entries are the coordinates of the dragged point, and the other three indicate the movement; this vector is then concatenated with the embedding of every keypoint. As shown in Figure 5, the same inference module produces different causal graphs for different types of fabrics, reflecting the underlying connectivity patterns and illustrating the model's ability to recognize the underlying dependency structure.

Figure 5: Qualitative results on predicting the Causal Summary Graph and the future. Our inference module observes a short sequence of images and performs one-shot discovery of the causal summary graph, which recovers the ground-truth graph in the Multi-Body environment and captures the underlying connectivity structures in the Cloth environment. The unfilled circles in the right four columns indicate the model's predictions into the future, overlaid on the ground-truth future keypoints for comparison.

3.3 Extrapolation to unseen causal graphs of different sizes

To evaluate our model's performance on extrapolation, we create four additional test sets in the Multi-Body environment, containing 3, 4, 6, and 7 bodies, respectively, for which we need to train separate perception modules to reflect the number of moving components. However, the inference module and the dynamics module do not require retraining; they directly generalize to systems with different numbers of bodies. As shown in Figure 6, the blue bars show the performance on the test set with the same number of balls as the training set, while the other bars illustrate the model's ability to extrapolate.

Figure 6: Results on extrapolating to unseen graphs of different sizes. Panels: (a) accuracy on edge type {null edge, spring, rigid}; (b) correlation on the rest length of the spring relation; (c) correlation on the length of the rigid relation; (d) mean squared error on future prediction. Our inference module and dynamics module are trained only in environments containing 5 bodies. Thanks to the inductive bias captured by the graph neural networks in our model, it automatically generalizes to scenarios with numbers of bodies different from training. The blue bars show the performance on a test set from the same distribution we trained on, and the orange bars illustrate results on extrapolation. Surprisingly, the model performs better in environments with 3 and 4 balls, even though it has never seen them before.
Interestingly, for environments with fewer balls, e.g., 3 or 4 balls, the performance is even better, despite the model never being trained directly on these scenarios.

3.4 Counterfactual prediction and extrapolation on parameter change

In our experiment, we make counterfactual predictions by intervening on the estimated hidden confounders and evaluate how well the model predicts the future by making the same intervention in the ground-truth simulator. The estimated confounders live in a latent space, so a mapping function is needed to obtain the corresponding parameters of the original simulator. We use the same mapping as described in Section 3.2 to find the corresponding discrete variables, and we train a simple linear regressor to transform the continuous variable. Figure 7 shows the performance on counterfactual predictions, which illustrates our model's ability to answer "what if" questions and to extrapolate to parameter ranges outside the training distribution.

Figure 7: Results on counterfactual prediction. Panels: (a) intervention on the rest length of the spring relation; (b) intervention on the length of the rigid relation; (c) intervention on the edge type. We make counterfactual predictions by intervening on the identified causal summary graph and evaluate the performance by comparing the predicted future with the original simulator undergoing the same intervention at T + 30. The modeling of the causal mechanism allows the model to extrapolate to parameter ranges outside the training distribution.

4 Related Work

Causal Discovery. Methods for causal inference from observations can broadly be categorized into three classes. Constraint-based methods (such as PC and FCI) rely on conditional independence tests as constraint satisfaction to recover Markov-equivalent graphs [1, 15, 16]. Score-based methods (such as GES) assign a score to each DAG and search in this score space [17, 18]. The third class of methods exploits asymmetries, or causal footprints, to uniquely identify a DAG [19-22]. Further, causal discovery from a combination of observational and interventional data has been studied in the literature [23-30]. Many of these approaches either assume full knowledge of the intervention, make strong assumptions about the model class, or have scalability limitations.

Relational Neural Models. Several works have attempted to model multi-body dynamics with graphs [7, 11, 31] and attention [32, 33]. However, these methods assume the latent generative causal graph is stationary, resulting in poor generalization to variations in either the graph structure or its functional parameters. A few recent works [34, 35] have tried to infer the relationships between different entities in the system using a variational or meta-learning framework, where [34] also discussed the connection to Granger causality. Still, we differ from them by working directly with image data and by modeling not only the discrete but also the continuous hidden confounding variables.

Dynamics from Videos. Video modeling and prediction have received much attention recently [36-39]. The idea of learned latent space embeddings with unsupervised loss computation has also enjoyed recent success in prediction [40-44]. However, the latent space may not be interpretable, and the overall model may not generalize. In contrast, keypoints (or particles) provide succinct and generalizable representations across a variety of use cases: particle representations [45-51], deformable object modeling [52, 53], and instance-independent class templates [54].
However, providing domain-specific labeled data can be tedious, so unsupervised keypoint learning methods that use reconstruction or view consistency as the loss have broader appeal [9, 55]. This paper builds on ideas from unsupervised visual representation learning and leverages them for visual causal discovery, wherein the underlying model components use relational modeling to output a causal summary graph, which has not been achieved in prior work for complex video datasets.

5 Conclusion

Our method extracts a structured keypoint-based representation from videos, identifies the causal relationships between the different constituting components, and makes predictions into the future. The model assumes access neither to the ground-truth causal graph, nor to the hidden confounders, nor to the dynamics that describe the effect of the physical interactions; instead, we learn to discover the dependency structures and model the causal mechanisms end-to-end from images in an unsupervised way, which we hope can facilitate future studies of more generalizable visual reasoning systems.

Acknowledgments

We thank the entire team at NVIDIA Robotics Research Lab for their valuable feedback. We also thank the anonymous reviewers for their useful comments. The main body of this work took place when Yunzhu Li was a research intern at NVIDIA.

Broader Impact

Causal reasoning is the process of identifying causality: the relationship between a cause and its effect, which is at the core of human intelligence. Learning directly from observations without modeling the underlying causal structure can lead to the emergence of incorrect associations between the input and the output. The learned model can overfit to the biases of the dataset, limiting its ability to generalize outside the training distribution and often leading to catastrophic outcomes when deployed in the real world. Discovering causal relationships typically requires learning from data collected in randomized controlled trials or A/B tests, where the experimenter controls certain variables of interest. However, carrying out interventions or randomized trials may be impossible, impractical, or unethical in many situations.

This work aims at discovering the causal structure and modeling the underlying causal mechanism from visual inputs, where we have access to data from different configurations and scenarios under unknown interventions on both the structure of the causal graph and its parameters. The ability to accurately capture the dependency structures and identify the hidden confounders is of vital importance in helping learned models generalize. As discussed in our experiments, causal modeling improves generalization both outside the training distribution and towards high-likelihood counterfactual data augmentation. While excited about these results, we acknowledge that this is a particularly challenging task, and our method serves as an initial step towards the broader goal of building physically grounded visual intelligence. We mainly focus on modeling the dynamical system, while some aspects of the causal graph, such as sophisticated dependencies and practical issues arising from sampling rates, are not touched upon. Nonetheless, we hope to draw attention to this grand challenge and inspire future research on generalizable, physically grounded reasoning from visual inputs without domain-specific feature engineering.

References

[1] P. Spirtes, C. N. Glymour, R. Scheines, and D. Heckerman, Causation, Prediction, and Search. MIT Press, 2000.
[2] J. Peters, D. Janzing, and B. Schölkopf, Elements of Causal Inference: Foundations and Learning Algorithms. MIT Press, 2017.
[3] C. Glymour, K. Zhang, and P. Spirtes, Review of causal discovery methods based on graphical models, Frontiers in Genetics, vol. 10, 2019.
[4] K. Zhang, M. Gong, J. Ramsey, K. Batmanghelich, P. Spirtes, and C. Glymour, Causal discovery in the presence of measurement error: Identifiability conditions, arXiv preprint arXiv:1706.03768, 2017.
[5] N. Watters, D. Zoran, T. Weber, P. Battaglia, R. Pascanu, and A. Tacchetti, Visual interaction networks: Learning a physics simulator from video, in Advances in Neural Information Processing Systems, pp. 4539-4547, 2017.
[6] M. Janner, S. Levine, W. T. Freeman, J. B. Tenenbaum, C. Finn, and J. Wu, Reasoning about physical interactions with object-centric models, in International Conference on Learning Representations, 2019.
[7] A. Santoro, D. Raposo, D. G. Barrett, M. Malinowski, R. Pascanu, P. Battaglia, and T. Lillicrap, A simple neural network module for relational reasoning, in Advances in Neural Information Processing Systems, pp. 4967-4976, 2017.
[8] M. Gong, K. Zhang, B. Schölkopf, C. Glymour, and D. Tao, Causal discovery from temporally aggregated time series, in Proceedings of the Conference on Uncertainty in Artificial Intelligence (UAI), 2017.
[9] T. D. Kulkarni, A. Gupta, C. Ionescu, S. Borgeaud, M. Reynolds, A. Zisserman, and V. Mnih, Unsupervised learning of object keypoints for perception and control, in Advances in Neural Information Processing Systems, pp. 10723-10733, 2019.
[10] J. Pearl, Causality. Cambridge University Press, 2009.
[11] P. Battaglia, R. Pascanu, M. Lai, D. J. Rezende, et al., Interaction networks for learning about objects, relations and physics, in Advances in Neural Information Processing Systems, 2016.
[12] E. Jang, S. Gu, and B. Poole, Categorical reparameterization with Gumbel-Softmax, arXiv preprint arXiv:1611.01144, 2016.
[13] C. J. Maddison, A. Mnih, and Y. W. Teh, The Concrete distribution: A continuous relaxation of discrete random variables, arXiv preprint arXiv:1611.00712, 2016.
[14] D. P. Kingma and M. Welling, Auto-encoding variational Bayes, arXiv preprint arXiv:1312.6114, 2013.
[15] D. Entner and P. O. Hoyer, On causal discovery from time series data using FCI, Probabilistic Graphical Models, pp. 121-128, 2010.
[16] D. Colombo, M. H. Maathuis, M. Kalisch, and T. S. Richardson, Learning high-dimensional DAGs with latent and selection variables, in UAI, p. 850, 2011.
[17] D. M. Chickering, Optimal structure identification with greedy search, Journal of Machine Learning Research, vol. 3, pp. 507-554, 2002.
[18] X. Zheng, B. Aragam, P. K. Ravikumar, and E. P. Xing, DAGs with NO TEARS: Continuous optimization for structure learning, in Advances in Neural Information Processing Systems, pp. 9472-9483, 2018.
[19] S. Shimizu, LiNGAM: Non-Gaussian methods for estimating causal structures, Behaviormetrika, vol. 41, no. 1, pp. 65-98, 2014.
[20] D. Kalainathan, O. Goudet, I. Guyon, D. Lopez-Paz, and M. Sebag, SAM: Structural agnostic model, causal discovery and penalized adversarial learning, arXiv preprint arXiv:1803.04929, 2018.
[21] O. Goudet, D. Kalainathan, P. Caillou, I. Guyon, D. Lopez-Paz, and M. Sebag, Causal generative neural networks, arXiv preprint arXiv:1711.08936, 2017.
[22] K. Zhang and A. Hyvärinen, On the identifiability of the post-nonlinear causal model, in Proceedings of the 25th Conference on Uncertainty in Artificial Intelligence (UAI 2009), pp. 647-655, AUAI Press, 2009.
[23] A. Hyttinen, F. Eberhardt, and P. O. Hoyer, Experiment selection for causal discovery, Journal of Machine Learning Research, vol. 14, no. 1, pp. 3041-3071, 2013.
[24] A. Ghassami, S. Salehkaleybar, N. Kiyavash, and E. Bareinboim, Budgeted experiment design for causal structure learning, in International Conference on Machine Learning, pp. 1724-1733, 2018.
[25] M. Kocaoglu, K. Shanmugam, and E. Bareinboim, Experimental design for learning causal graphs with latent variables, in Advances in Neural Information Processing Systems, pp. 7018-7028, 2017.
[26] Y. Wang, L. Solus, K. Yang, and C. Uhler, Permutation-based causal inference algorithms with interventions, in Advances in Neural Information Processing Systems, pp. 5822-5831, 2017.
[27] K. Shanmugam, M. Kocaoglu, A. G. Dimakis, and S. Vishwanath, Learning causal graphs with small interventions, in Advances in Neural Information Processing Systems, pp. 3195-3203, 2015.
[28] J. Peters, P. Bühlmann, and N. Meinshausen, Causal inference by using invariant prediction: identification and confidence intervals, Journal of the Royal Statistical Society: Series B (Statistical Methodology), vol. 78, no. 5, pp. 947-1012, 2016.
[29] D. Rothenhäusler, C. Heinze, J. Peters, and N. Meinshausen, BACKSHIFT: Learning causal cyclic graphs from unknown shift interventions, in Advances in Neural Information Processing Systems, pp. 1513-1521, 2015.
[30] N. R. Ke, O. Bilaniuk, A. Goyal, S. Bauer, H. Larochelle, C. Pal, and Y. Bengio, Learning neural causal models from unknown interventions, arXiv preprint arXiv:1910.01075, 2019.
[31] P. W. Battaglia, J. B. Hamrick, V. Bapst, A. Sanchez-Gonzalez, V. Zambaldi, M. Malinowski, A. Tacchetti, D. Raposo, A. Santoro, R. Faulkner, et al., Relational inductive biases, deep learning, and graph networks, arXiv preprint arXiv:1806.01261, 2018.
[32] A. Goyal, A. Lamb, J. Hoffmann, S. Sodhani, S. Levine, Y. Bengio, and B. Schölkopf, Recurrent independent mechanisms, arXiv preprint arXiv:1909.10893, 2019.
[33] A. Vaswani, N. Shazeer, N. Parmar, J. Uszkoreit, L. Jones, A. N. Gomez, Ł. Kaiser, and I. Polosukhin, Attention is all you need, in Advances in Neural Information Processing Systems, pp. 5998-6008, 2017.
[34] T. Kipf, E. Fetaya, K.-C. Wang, M. Welling, and R. Zemel, Neural relational inference for interacting systems, in International Conference on Machine Learning, pp. 2688-2697, 2018.
[35] F. Alet, E. Weng, T. Lozano-Pérez, and L. P. Kaelbling, Neural relational inference with fast modular meta-learning, in Advances in Neural Information Processing Systems, pp. 11804-11815, 2019.
[36] Y. Ye, M. Singh, A. Gupta, and S. Tulsiani, Compositional video prediction, in Proceedings of the IEEE International Conference on Computer Vision, pp. 10353-10362, 2019.
[37] J.-T. Hsieh, B. Liu, D.-A. Huang, L. F. Fei-Fei, and J. C. Niebles, Learning to decompose and disentangle representations for video prediction, in Advances in Neural Information Processing Systems, pp. 517-526, 2018.
[38] M. Kumar, M. Babaeizadeh, D. Erhan, C. Finn, S. Levine, L. Dinh, and D. Kingma, VideoFlow: A conditional flow-based model for stochastic video generation, arXiv preprint arXiv:1903.01434, 2019.
[39] K. Yi, C. Gan, Y. Li, P. Kohli, J. Wu, A. Torralba, and J. B. Tenenbaum, CLEVRER: Collision events for video representation and reasoning, in International Conference on Learning Representations, 2020.
[40] M. Watter, J. Springenberg, J. Boedecker, and M. Riedmiller, Embed to control: A locally linear latent dynamics model for control from raw images, in Advances in Neural Information Processing Systems, pp. 2746-2754, 2015.
[41] D. Hafner, T. Lillicrap, I. Fischer, R. Villegas, D. Ha, H. Lee, and J. Davidson, Learning latent dynamics for planning from pixels, in International Conference on Machine Learning, 2019.
[42] Y. Li, J. Wu, J.-Y. Zhu, J. B. Tenenbaum, A. Torralba, and R. Tedrake, Propagation networks for model-based control under partial observation, in ICRA, 2019.
[43] Y. Li, H. He, J. Wu, D. Katabi, and A. Torralba, Learning compositional Koopman operators for model-based control, in International Conference on Learning Representations, 2020.
[44] D. Hafner, T. Lillicrap, J. Ba, and M. Norouzi, Dream to control: Learning behaviors by latent imagination, in International Conference on Learning Representations, 2020.
[45] M. Macklin, M. Müller, N. Chentanez, and T.-Y. Kim, Unified particle physics for real-time applications, ACM Transactions on Graphics (TOG), vol. 33, no. 4, p. 153, 2014.
[46] D. Mrowca, C. Zhuang, E. Wang, N. Haber, L. F. Fei-Fei, J. Tenenbaum, and D. L. Yamins, Flexible neural representation for physics prediction, in Advances in Neural Information Processing Systems, pp. 8799-8810, 2018.
[47] Y. Li, J. Wu, R. Tedrake, J. B. Tenenbaum, and A. Torralba, Learning particle dynamics for manipulating rigid bodies, deformable objects, and fluids, in ICLR, 2019.
[48] B. Ummenhofer, L. Prantl, N. Thuerey, and V. Koltun, Lagrangian fluid simulation with continuous convolutions, in International Conference on Learning Representations, 2020.
[49] A. Sanchez-Gonzalez, J. Godwin, T. Pfaff, R. Ying, J. Leskovec, and P. W. Battaglia, Learning to simulate complex physics with graph networks, in International Conference on Machine Learning, 2020.
[50] Y. Li, T. Lin, K. Yi, D. Bear, D. L. Yamins, J. Wu, J. B. Tenenbaum, and A. Torralba, Visual grounding of learned physical models, in International Conference on Machine Learning, 2020.
[51] L. Manuelli, Y. Li, P. Florence, and R. Tedrake, Keypoints into the future: Self-supervised correspondence in model-based reinforcement learning, arXiv preprint arXiv:2009.05085, 2020.
[52] T. Jakab, A. Gupta, H. Bilen, and A. Vedaldi, Unsupervised learning of object landmarks through conditional image generation, in Advances in Neural Information Processing Systems, pp. 4016-4027, 2018.
[53] S. Suwajanakorn, N. Snavely, J. J. Tompson, and M. Norouzi, Discovery of latent 3D keypoints via end-to-end geometric reasoning, in Advances in Neural Information Processing Systems, pp. 2059-2070, 2018.
[54] L. Manuelli, W. Gao, P. Florence, and R. Tedrake, kPAM: Keypoint affordances for category-level robotic manipulation, arXiv preprint arXiv:1903.06684, 2019.
[55] A. Dundar, K. J. Shih, A. Garg, R. Pottorf, A. Tao, and B. Catanzaro, Unsupervised disentanglement of pose, appearance and background from images and videos, arXiv preprint arXiv:2001.09518, 2020.