Neurorobotics: Connecting the Brain, Body and Environment
- December 2, 2022
- New book by UCI cognitive sciences researchers explores intelligent autonomous systems grounded in biology and neuroscience
Neurorobots, autonomous systems modeled after some aspect of the brain, offer researchers a powerful tool for studying neural function. A relatively new field, neurorobotics combines and builds on ideas from artificial intelligence, cognitive science, computer science, engineering, psychology, neuroscience, and robotics. In their new book, Neurorobotics: Connecting the Brain, Body and Environment (MIT Press), UCI cognitive scientist Jeff Krichmar and research scientist Tiffany Hwu '19 explore approaches and design principles for developing intelligent autonomous systems grounded in biology and neuroscience. Below, the pair talk about the genesis of the book and the future of a field that may hold the keys to developing more lifelike artificial intelligence.
Describe for us the field of neurorobotics: where it originated, what drives its growth, and the areas or industries in which it’s applied.
JK: The idea originated with the realization that the brain requires a body, and that the body is interacting with the world. In neurorobotics, we simulate the brain, use a robot as its body, and test its behavior in the real world. As far back as the 1950s, researchers were embodying artificial neural networks on robots. There are two main goals in neurorobotics. One goal is to use the neurorobot to test brain theories by having a complete system in which we have access to its entire nervous system during the lifetime of the robot’s behavior. This is something that is currently impossible in neuroscience and cognitive science studies with humans and other animals. The other goal is to use neurorobots as a method for developing truly intelligent systems. Artificial intelligence has made great progress, but it still comes up short compared to the behavior of animals with nervous systems. If we follow brain-body interactions as a working model for intelligence, we may achieve artificial systems with a level of intelligence comparable to natural systems.
What drew you to this type of research? How do your backgrounds in cognitive science, engineering and computer science inform your work?
TH: I was drawn to this type of research because of its interdisciplinary nature. In undergrad, I majored in computer science and cognitive science, and I really enjoyed the fact that you could combine ideas from multiple fields to understand intelligence. In graduate school, studying neurorobotics taught me a whole new set of skills on the hardware and robotics side, allowing me to build models of intelligence in the real world. Neurorobotics research requires you to think flexibly and learn new skills on the fly, much like the intelligent animals we are studying. It’s a lot of fun and keeps you on your toes.
JK: My story is that after receiving my undergraduate degree in computer science, I worked in industry on real-time and embedded systems, where I wrote software to read sensors and control things like motors, displays, and agent behavior. When I went back to graduate school, I became very interested in the brain and how it leads to intelligent behavior. After getting my Ph.D., I wanted to combine these two experiences into a research program. Just like my embedded systems work, our brains take in sensory information (sights, sounds, smells, touch) from the world and use that information to control our bodies in purposeful ways. That is what neurorobotics is all about!
Give us some examples of neurorobots used in practice and how cognitive science and neuroscience have contributed to their development.
JK: I will let Tiffany speak about her amazing work on neurorobotics. Over the years, we have made robots that can be trained like a dog to seek rewards and avoid punishment, robots that show risk-taking and risk-averse behavior to study attention deficit disorders and depression, and robots that anticipate a person’s needs. Recently, our lab has been developing memory models to help robots build maps of their environment and use those maps to navigate around.
TH: During my Ph.D., I worked on a model of spatial navigation for robots using neuromorphic computing. Neuromorphic computing is inspired by neuroscience principles of the brain, particularly the electrical impulses and connections between brain cells. When using neuromorphic computing for navigation, the robot was able to navigate more similarly to an intelligent animal, using far less energy than a traditional approach. This is an example of how neuroscience contributes to the development of useful and practical neurorobots.
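To make the neuromorphic idea a bit more concrete, below is a minimal sketch of a leaky integrate-and-fire neuron, the basic spiking unit behind most neuromorphic models. It is an illustrative example only; the parameter values and the input current are assumptions for demonstration and are not taken from the book or from the navigation model described above.

```python
# Minimal sketch of a leaky integrate-and-fire (LIF) neuron, the kind of
# spiking unit used in neuromorphic computing. Parameters and input are
# illustrative assumptions, not values from the book or the navigation model.
import numpy as np

def simulate_lif(input_current, dt=1.0, tau=20.0, v_rest=0.0,
                 v_reset=0.0, v_threshold=1.0):
    """Integrate an input current over time; emit a spike (1) whenever the
    membrane potential crosses threshold, then reset the potential."""
    v = v_rest
    spikes = []
    for i_t in input_current:
        # Leaky integration: the potential decays toward rest and is
        # pushed up by the input current.
        v += (-(v - v_rest) + i_t) * (dt / tau)
        if v >= v_threshold:
            spikes.append(1)   # spike: a discrete, event-driven output
            v = v_reset        # reset after firing
        else:
            spikes.append(0)
    return np.array(spikes)

# A constant input drives the neuron to fire at a regular rate. Information
# is carried by sparse, event-driven spikes rather than continuous values,
# which is one reason neuromorphic hardware can run such models at low power.
spike_train = simulate_lif(np.full(200, 1.5))
print(spike_train.sum(), "spikes in 200 time steps")
```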
What does the future of neurorobotics look like?
TH: I think neurorobotics provides a perspective that the more mainstream approach to artificial intelligence might be missing. For example, self-driving cars are still lacking the common sense and intuition about the real world that humans have. I could see a future in which neurorobotics principles are incorporated more and more into the artificial intelligence systems we use in our daily lives.
JK: The only thing I would add to what Tiffany said is that neurorobotics could be another tool used by neuroscientists and cognitive scientists to investigate the inner workings of the brain and to develop assistive technology to help the impaired.
Anything else you’d like to add?
JK: Just a bit of history. I teach cognitive robotics to the cognitive sciences and psychology students at UC Irvine. Tiffany was my teaching assistant in the class for many years. We realized that there wasn’t a book for the material that we wanted to cover in the class. I was thrilled and relieved that Tiffany agreed to co-author this book after she graduated with her Ph.D. and was working full-time. I am excited to use it in our class and hope many others will use it to teach or just read it to learn more about this interesting topic. We hope it inspires new generations of neuroroboticists!
TH: As a first-time textbook author, I learned a whole lot about the writing and publishing process. It was so rewarding to see the book start from an idea three years ago and end up with a physical copy of the textbook in hand. Jeff and I wrote the book for a wide audience, including students, educators, and hobbyists. There’s something in it for everyone, so feel free to check it out!
Jeffrey Krichmar is professor of cognitive sciences at UCI whose prior industry experience in computational sciences and software engineering has included work on Raytheon Corporation’s PATRIOT Missile System and IBM’s Federal Systems Division Air Traffic Control system. He earned his Ph.D. in computational sciences and informatics from George Mason University. Tiffany J. Hwu earned her Ph.D. in cognitive sciences from UCI and is currently a research scientist working on projects in autonomous agents and human-machine communication.