Learning about the causal structure of the world is a fundamental problem for human cognition, and causal knowledge is central to both intuitive and scientific theories. Cognitive scientists have applied advances in our formal understanding of causation in philosophy and computer science, particularly within the Causal Bayes Net formalism, to understand human causal learning. In parallel, in the very different tradition of reinforcement learning, researchers have developed the idea of an intrinsic reward signal called “empowerment”. An agent is rewarded for maximizing the mutual information between its actions and their outcomes, regardless of the external reward value of those outcomes. In other words, the agent is rewarded if variation in an action systematically leads to parallel variation in an outcome, so that variation in the action predicts variation in the outcome. The result is an agent that has maximal control over its environment. Gopnik argues that “empowerment” may be an important bridge between classical Bayesian causal learning and reinforcement learning and may help to characterize causal learning in humans and enable it in machines. More strongly, Gopnik makes the philosophical argument that causal learning and empowerment gain are very closely related conceptually. If an agent learns an accurate causal model of the world, it will necessarily increase its empowerment; conversely, increasing empowerment will lead to a more accurate (if implicit) causal model of the world. This has implications both for accounts of causal learning in cognitive science and AI and for the metaphysics of causation. Empowerment may also explain distinctive empirical features of children’s causal learning, as well as provide a more tractable computational account of how that learning is possible.
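The empowerment signal described above can be made concrete with a minimal sketch. The following toy example (all names and probability tables are illustrative, not from the source) computes the mutual information between an agent's actions and their outcomes for two hypothetical environments: one where actions reliably determine outcomes, and one where outcomes are independent of actions. The agent with reliable control scores higher, which is the sense in which maximizing this quantity rewards control.

```python
import math
from collections import defaultdict

def mutual_information(joint):
    """I(A; O) in bits from a joint distribution p(a, o) given as a dict."""
    p_a = defaultdict(float)  # marginal over actions
    p_o = defaultdict(float)  # marginal over outcomes
    for (a, o), p in joint.items():
        p_a[a] += p
        p_o[o] += p
    mi = 0.0
    for (a, o), p in joint.items():
        if p > 0:
            mi += p * math.log2(p / (p_a[a] * p_o[o]))
    return mi

# Two actions chosen uniformly, two possible outcomes.
# "Reliable" channel: each action deterministically produces its own outcome.
reliable = {("a0", "o0"): 0.5, ("a1", "o1"): 0.5}
# "Noisy" channel: outcomes occur independently of the chosen action.
noisy = {(a, o): 0.25 for a in ("a0", "a1") for o in ("o0", "o1")}

print(mutual_information(reliable))  # 1.0 bit: full control over outcomes
print(mutual_information(noisy))     # 0.0 bits: no control over outcomes
```

An empowerment-maximizing agent would prefer states and policies like the first channel, where variation in its actions predicts variation in outcomes, without any reference to an external reward.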