Moderate Empiricism and Deep Learning Research
In recent years, deep learning systems such as AlphaGo, AlphaFold, DALL-E, and ChatGPT have blown through expected upper limits on artificial neural network research, which attempts to create artificial intelligence in computational systems by replicating aspects of the brain's structure. However, these deep learning systems are of unprecedented scale and complexity in the size of their training sets and parameter counts, making it very difficult to understand how they perform as well as they do. In this talk, I outline a framework for thinking about foundational philosophical questions in deep learning as artificial intelligence. Specifically, my framework links deep learning's research agenda to a strain of thought in classic empiricist philosophy of mind. Both empiricist philosophy of mind and deep learning are committed to a Domain-General Modular Architecture (a "new empiricist DoGMA") for cognition in network-based systems. In this version of moderate empiricism, active, general-purpose faculties (such as perception, memory, imagination, attention, and empathy) play a crucial role in allowing us to extract abstractions from sensory experience. I illustrate the utility of this interdisciplinary connection by showing how it can provide benefits to both philosophy and computer science: computer scientists can continue to mine the history of philosophy for ideas and aspirational targets on the way to more robustly rational artificial agents, and philosophers can see how some of the historical empiricists' most ambitious speculations can be realized in specific computational systems.