Building AI simulations of the human brain | Wu Tsai Neurosciences Institute
Last month, Stanford researcher Andreas Tolias and colleagues created a "digital twin" of the mouse visual cortex. The researchers used the same foundation model approach that powers ChatGPT, but instead of training the model on text, the team trained it on brain activity recorded while mice watched action movies. The result? A digital model that can predict how neurons would respond to entirely new visual inputs.
This landmark study is a preview of the unprecedented research possibilities made possible by foundation models of the brain—models which replicate the fundamental algorithms of brain activity, but can be studied with complete control and replicated across hundreds of laboratories.
But it raises a profound question: Are we ready to create digital models of the human brain?
This week we talk with Wu Tsai Neuro Faculty Scholar Dan Yamins, who has been exploring just this question with a broad range of Stanford colleagues and collaborators. We talk about what such human brain simulations might look like, how they would work, and what they might teach us about the fundamental algorithms of perception and cognition.
AI models of the brain could serve as 'digital twins' in research (Stanford Medicine, 2025)
An Advance in Brain Research That Was Once Considered Impossible (New York Times, 2025)
The co-evolution of neuroscience and AI (Wu Tsai Neuro, 2024)
Neuroscientists use AI to simulate how the brain makes sense of the visual world (Wu Tsai Neuro, 2024)
How Artificial Neural Networks Help Us Understand Neural Networks in the Human Brain (Stanford Institute for Human-Centered AI (HAI), 2021)
Related Research
A Task-Optimized Neural Network Replicates Human Auditory Behavior... (PNAS, 2014)
Vector-based navigation using grid-like representations in artificial agents (Nature, 2018)
The neural architecture of language: Integrative modeling converges on predictive processing (PNAS, 2021)
Using deep reinforcement learning to reveal how the brain encodes abstract state-space representations... (Neuron, 2021)
This episode was produced by Michael Osborne at 14th Street Studios, with sound design by Morgan Honaker. Our logo is by Aimee Garza. The show is hosted by Nicholas Weiler at Stanford's Wu Tsai Neurosciences Institute and supported in part by the Knight Initiative for Brain Resilience.
If you're enjoying our show, please take a moment to give us a review on your podcast app of choice and share this episode with your friends. That's how we grow as a show and bring the stories of the frontiers of neuroscience to a wider audience.
We want to hear from YOUR neurons! Email us if you'd be willing to help out with some listener research, and we'll be in touch with some follow-up questions.
Welcome back to From Our Neurons to Yours from the Wu Tsai Neurosciences Institute at Stanford University—where we bring you to the frontiers of neuroscience.
This week on the show, Foundation Models of the Brain.
Last month, a major group of papers came out sponsored by IARPA and the NIH BRAIN Initiative. It's called the MICrONS Project. This is just a landmark set of papers mapping out visual areas of the mouse brain in incredible detail, more detail than anyone's been able to map the brain before. There's one piece of this project that I want to talk with you about to introduce today's show, because Stanford researcher Andreas Tolias, who's one of the leaders of the MICrONS Project, has created what he's calling a digital twin of the mouse visual cortex.
What do we mean by digital twin? That's kind of a science fiction sounding term, but basically it means a simulation. It uses the same kind of algorithm that powers large language models like ChatGPT and systems like DALL-E. But instead of being trained on huge quantities of text or image data, it's trained on brain data, data that Tolias and his lab recorded from the mice as they were watching hours and hours of action movies, which apparently mice enjoy. So, while the mice were watching Mad Max and other movies, the lab was imaging the activity of the neurons in their visual cortex. And then, by feeding all of that data into a foundation model, they were able to create a simulation of that part of the mouse brain. Basically, they could now ask it what would the visual cortex do if we showed it a different movie or if the mouse was looking at some other kind of input altogether?
And so, like LLMs, it has this amazing ability to generalize outside of its training set. And that's a really big deal, because what it's basically done is create a digital copy of the algorithm that the mouse visual cortex appears to be running. This is another way to get access to the brain and do kinds of experiments that you could never do in a real mouse. The model doesn't sleep, doesn't age, and can be replicated; hundreds or thousands of people could do experiments on it. It's a really amazing tool for neuroscience. We'll post links in the show notes.
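For listeners who want a concrete picture, the core workflow just described (record neural responses to stimuli, fit a predictive model, then test it on held-out stimuli) can be sketched in a few lines. This is a deliberately tiny linear stand-in with synthetic data, not the actual MICrONS foundation model, which uses deep video networks and recordings from many thousands of real neurons.

```python
# Toy sketch of a "digital twin": fit a model mapping stimulus features
# to neural responses, then predict responses to stimuli it never saw.
# All data here are synthetic stand-ins for recorded brain activity.
import numpy as np

rng = np.random.default_rng(0)

n_stimuli, n_features, n_neurons = 200, 50, 10

# Synthetic "ground truth": each neuron responds linearly to stimulus
# features, plus noise -- a stand-in for recorded calcium activity.
W_true = rng.normal(size=(n_features, n_neurons))
X = rng.normal(size=(n_stimuli, n_features))          # stimulus features
Y = X @ W_true + 0.1 * rng.normal(size=(n_stimuli, n_neurons))

# Train on most stimuli, hold out the rest -- the "new movie" test.
X_train, X_test = X[:150], X[150:]
Y_train, Y_test = Y[:150], Y[150:]

# Ridge regression, closed form: W = (X'X + lam*I)^-1 X'Y
lam = 1.0
W = np.linalg.solve(X_train.T @ X_train + lam * np.eye(n_features),
                    X_train.T @ Y_train)

# Evaluate generalization: per-neuron correlation between predicted and
# actual responses on held-out stimuli, averaged across neurons.
Y_pred = X_test @ W
r = float(np.mean([np.corrcoef(Y_pred[:, i], Y_test[:, i])[0, 1]
                   for i in range(n_neurons)]))
print(f"mean held-out correlation: {r:.2f}")
```

Generalizing to held-out stimuli is the whole point: a model that only reproduces its training data is a lookup table, not a twin.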
But I bring this up because it raises a big question. It's great to have a digital model of a piece of the mouse visual cortex, but are we ready to create a model like this of the human brain? What would that look like and what would it take?
To answer that question, we asked Wu Tsai Neuro faculty scholar Dan Yamins on the show. Dan is a faculty member in the Departments of Psychology and Computer Science, and he's made his career simulating aspects of the brain to try to understand what are the fundamental algorithms of perception and cognition. And I've been talking with Dan for a while about his ideas about how we would go about creating a foundation model, a digital twin of the human brain. I started by asking him to share with us what that big picture would look like and how it would work.
Yeah, I think that the large language model analogy is pretty good. The idea is to capture brain data, not the data like the memories of any one individual or anything like that, but more like the processes of information processing, decision-making and brain circuits more generally from the brain and capture those in a model, kind of like the models that are underlying large language models, but now trained to make predictions about and to mimic some of the processes of the human brain.
And so, is the idea there that large language models are trained on a whole lot of text and then they can generalize from basically any input to a reasonable output? This would be trained on a whole lot of brain data and it could similarly generalize, like it could give you insight into what the brain would do in any given situation.
I think that's the ideal. Whether or not it will actually be able to do that remains to be seen. Large language models are definitely not perfect in their predictions, and so, I think we're likely to be imperfect here as well. But the goal is to take a first step at trying to capture patterns of brain processing in a neural network model, again, without it being any individual person or something like that, which would be quite beyond our capacities at this point. But just more basically trying to capture some aspects of how the brain processes visual input, how it processes audio input, how it sequences the way that you move your limbs to express your decisions about what to do in response to different things you might see or hear, or to achieve certain goals.
Those algorithms are written in the brain right now. The brain implements processes, algorithms of information processing and motor decision-making and memory and attention and all sorts of things like that. The brain implements that. We want to make an active copy of those algorithms, and it won't be a perfect copy, but it will be as best as we can, a kind of record of and a runnable predictive model for the algorithms that are implemented in the brain effectively. That's the goal.
I love that idea, that this is a way we can capture the algorithm or algorithms that the brain is running. And I'd love to delve into the details a little bit more with you. But first, why now? At this moment in neuroscience, what would people do with a model like this?
Well, so the key reason why we want to do this is because, unlike a purely AI-driven goal, what we are interested in doing is helping meet the needs of humans. Humans have real brains, and those brains are often functioning really well, but sometimes there are things we'd like to fix about them, in the form of some neuropsychiatric conditions, or just understanding how the brain functions so we can help keep it healthy. The core goal is to take a model of this kind and use it to help achieve biomedical outcomes. We want to be able to use the model of the digital brain as a neurobiomedical discovery and diagnostic design platform.
There's this intersection in neuroscience, it seems to me, from all the people that we've been talking with where we're getting a better sense of how some of the circuits in the brain work, or at least which circuits seem to be connected to certain disorders or functions of the brain. We're also developing better and better tools for stimulating or quieting brain circuits essentially sort of shifting those circuits into healthier function. And so now, it seems like this AI component that you're talking about, these foundational models, maybe those give us the link here where we can say, "Well, what would happen if we did this particular type of stimulation? How do we expect the brain will respond to that? How is that going to affect how the brain is processing sensory input or affecting behavior?"
Yeah, exactly. So, if you have a digital brain, it won't have a personality or anything like that, but what it will be able to do is you can show it images or you can make it listen to sounds. You can have it issue motor commands, like do the brain equivalent of move an arm. You can have it analyze how it would remember something or what it would pay attention to. And by virtue of that, you can sort of be like, "Okay, well let's say I would like to ask what are the predicted responses of the brain to a set of diagnostic stimuli for say, ADHD or for epilepsy or for depression?" You could use the model to make those predictions without having to actually do anything with an individual real person so that basically you can get lots of tests quickly at low cost and be able to analyze real data from individuals in a way that is much more effective.
And similarly, if you wanted to model the effect of a psychiatric drug, you could ask, "Okay, look, we can measure how psychiatric drugs operate, and we can ask what that means in terms of the way that the brain processes information, and make predictions, ex post facto, about what would have happened if we had used this drug instead of that drug, or if we had used no drug at all." It could allow us to basically do simulations of brain outcomes without having to take up people's time and effort and energy to do those things in the lab, which can also be quite disruptive. And so, the brain model can help us sort of automate those processes in a low-cost and flexible way.
Right. It reminds me of something Laura Gwilliams was telling us, the idea that with the language models, you can actually go into the computational neural network architecture that's underlying the model and deactivate some neurons, digital neurons, and see the effect, sort of do these digital lesioning experiments. And so, I imagine that you could have something similar here, where you're like, okay, we've now basically downloaded the algorithm, as you put it, or maybe reverse-engineered it, into this digital model, and now we have a system that we can use as a model. Again, as Laura was putting it, a model organism. It's a digital model organism where you can go in and have so much more control, so much more ability to modify whatever you want and see how that affects things, whether that's understanding what the circuits are normally doing or testing out potential therapies.
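The digital-lesioning idea mentioned here is straightforward to sketch: silence one artificial neuron at a time and measure how the model's outputs change, something that can be repeated for every unit at essentially no cost. The network below is random and purely illustrative, standing in for a trained brain model.

```python
# Toy sketch of digital lesioning: zero out each hidden unit in turn
# and measure how much the network's outputs are perturbed.
import numpy as np

rng = np.random.default_rng(2)

def relu(x):
    return np.maximum(x, 0)

# A tiny random network standing in for a trained brain model.
W1 = rng.normal(size=(20, 16))
W2 = rng.normal(size=(16, 4))
X = rng.normal(size=(100, 20))                    # probe stimuli

def forward(X, lesioned=None):
    """Run the network, optionally silencing one hidden unit."""
    h = relu(X @ W1)
    if lesioned is not None:
        h = h.copy()
        h[:, lesioned] = 0.0                      # the digital lesion
    return h @ W2

baseline = forward(X)

# Lesion every hidden unit in turn; effect = mean output perturbation.
effects = np.array([np.abs(forward(X, lesioned=u) - baseline).mean()
                    for u in range(16)])
most_critical = int(np.argmax(effects))
print("most critical unit:", most_critical)
```

Ranking units by lesion effect is exactly the kind of exhaustive causal experiment that is trivial in silico and impossible in a living animal.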
Exactly. Yeah, I mean the brain is like a network. So, when you put information through it, it sets off a cascade of responses through different parts of the system and also at the output, and if you follow that back up the chain, you can see where things come from. Basically, the digital brain model gives you the ability to simulate those pathways and learn a lot from being able to do so. For example, I don't know if people are familiar with TMS technology, transcranial magnetic stimulation. That's a technology that people use right now to treat depression in drug-resistant cases. It's a device that puts a very gentle magnetic pulse into a part of the brain, and that seems to really help reset some of the neural circuitry to help people recover from depression. But while those tools are very novel and very exciting, people don't yet know exactly where to point them.
Right.
So, what are the optimal ways to configure the TMS device so as to achieve the best effect with the least intervention? That's something people would love to know how to do. A simulator model, basically a digital brain model, can be used to support exactly that kind of optimal treatment design strategy. Basically, you want treatments that take the least amount of time and that are the least disruptive to the brain, but that are also really effective at helping reset the circuits so that people can get out of a depressive loop. TMS has been effective at that to some extent, but there's so much more to be done to focus those tools and really help people in the least invasive, most effective way we can.
I love the terminology you're using of a brain simulation. I think that's a really intuitive way of thinking about this. You're simulating how a brain might act. And I want to be clear: this does not yet exist as a whole-brain-wide simulation. It's something that I know you've been talking about, and that folks like Andreas Tolias at Stanford and others have been talking about. And I want to get into the question of how you would build something like this. But first, to make sure that listeners understand the thing we're talking about: what is a foundation model? What is this kind of generative model? How would it work?
I wonder whether it makes sense to go back and talk a little bit about the brain modeling that you've been doing for many years and that neuroscientists have been doing for many years in computational neuroscience. So, you mentioned something to me last time we talked, which is you've observed that when you train computational neural network models to perform a task, the better they perform, the more human-like or the more brain-like their performance is, the more the actual activity of those models starts to look like the equivalent activity in the brain. And I wonder what is the jump that you're making between those kinds of models, the kinds of models that neuroscience has been using for decades and what we're now talking about with the foundation models, these kinds of architectures that have come out in the past few years?
Well, okay, so one of the things that's been really exciting over the past 10 years is that models from artificial intelligence, originally created to do tasks like visual recognition, auditory recognition, even language tasks, have been shown to be pretty good quantitative models of the brain areas that are known to support those tasks. That's been the nexus, the core driving result in this burgeoning neuroAI field: models that come out of artificial intelligence end up being, with their internals, surprisingly good quantitative predictors of brain data. Basically, you have a neural network. It's trained to do some task, like take an image and report out something about the image, say the category of some objects in it.
Is this a cat or a dog? Is this a car or a building?
Yes. A cat or a dog, or where is the cat, or what pose is the dog in, or what pose is the hand in, or other things. There are many different tasks that people try to solve in computer vision. Anyway, it turns out that computer vision models have internal layers and states, and you can ask how good those internal layers are at predicting neural responses in the internal parts of the brain that do that task. And it turns out, surprisingly perhaps, that AI models are quite good models of the brain in a fine-grained, quantitative fashion.
So, the artificial neurons seem to be acting kind of like the biological neurons.
Yeah, that's right.
Or maybe the computations are somehow similar.
There's a lot of debate as to what it means for computations to be similar. So, I shy away from that statement, but what I would say is just that you can measure brain data the way we normally do by measuring the responses of neurons in response to stimuli or in different conditions, and you can measure neural networks as if they were candidate brains. And then you can ask quantitatively how correct are the AI models at describing the brain data? And actually, it turns out that good AI models are by far today's best state-of-the-art quantitative models of the actual brain. Even though they weren't trained on any kind of brain data in a direct way...
Right.
They end up learning things that make their internals look like different parts of the brain in a pretty strong, pretty well mappable way.
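The analysis Dan is describing, often called neural predictivity, can be sketched with toy data: take a network's internal layer activations, fit a linear (ridge) map from them to recorded responses, and score the correlation on held-out stimuli. Everything below is synthetic and illustrative; real studies use trained vision models and actual neural recordings.

```python
# Toy sketch of layer-wise neural predictivity: which layer of a small
# network best predicts a synthetic "brain area" via a linear readout?
import numpy as np

rng = np.random.default_rng(1)

def relu(x):
    return np.maximum(x, 0)

n_stim, d_in = 300, 40
X = rng.normal(size=(n_stim, d_in))              # stimuli (as vectors)

# A tiny two-layer "model": its hidden layers are the candidate brains.
W1 = rng.normal(size=(d_in, 60)) / np.sqrt(d_in)
W2 = rng.normal(size=(60, 30)) / np.sqrt(60)
layer1 = relu(X @ W1)
layer2 = relu(layer1 @ W2)

# Synthetic "brain area" driven by layer-2-like features plus noise,
# so by construction layer 2 should be the better predictor.
V = rng.normal(size=(30, 8))
brain = layer2 @ V + 0.1 * rng.normal(size=(n_stim, 8))

def predictivity(feats, resp, n_train=200, lam=1.0):
    """Median held-out correlation of a ridge map from feats to resp."""
    A, B = feats[:n_train], resp[:n_train]
    W = np.linalg.solve(A.T @ A + lam * np.eye(A.shape[1]), A.T @ B)
    pred = feats[n_train:] @ W
    rs = [np.corrcoef(pred[:, i], resp[n_train:, i])[0, 1]
          for i in range(resp.shape[1])]
    return float(np.median(rs))

r1 = predictivity(layer1, brain)
r2 = predictivity(layer2, brain)
print(f"layer 1: {r1:.2f}  layer 2: {r2:.2f}")
```

In real studies the same scoring is run layer by layer against recordings from different brain areas, which is how one layer gets mapped onto, say, V4 and another onto IT.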
And does that suggest to you maybe there's an optimal way of doing this, that the brain and the AI models are converging on the best way of doing this particular task?
Yeah, it suggests that there are not too many different solutions to the tasks that the brain needs to do. Another way to say that is, in an evolutionary sense, the brain is highly constrained by behavior. We as humans do lots of challenging things, you have to do those jobs well, and there are not that many different possible solution types. So, if you figure out how to solve it with AI, then you end up basically stumbling on the brain-like solution. It's a kind of principle of convergent evolution, in a way. And I think that's turned out to be a pretty powerful driver of the connection between artificial neural networks and real brains. That's the core foundational principle of the neuroAI field; those are the preconditions under which the digital brain work can really thrive.
So, tell me about that transition. So, we're saying that if you train sort of a general AI model, it's got neural networks sort of in the way of the foundational architecture of a lot of these networks. If you train it to do tasks, it starts to look more like how the brain functions. So, tell me about the leap to now let's actually train that model on actual brain data. Let's train it specifically to act like the brain. How are these models that you're talking about different and what's the additional value that gets us?
Well, so earlier you said that when you were talking to Laura, you heard about the ways large language models are kind of like potential model organisms, but that the models weren't exactly like the brain. And I was going to interrupt you, and I decided not to, but I was going to ask you: how do you know they're not like the brain?
Right.
I mean, probably they're not perfectly the brain, that's totally true. But actually, one way to interpret what I just told you about AI is that you get something surprisingly like the brain by doing an AI task. And actually, people have shown that LLMs are by far our best models of language areas in the brain today. So, to some extent, the AI models are like the brain, but they're certainly not perfectly like it. And of course they're not; aside from the fact that they're in silico and all that, they were optimized for probably quite different things. The brain was built by evolution, by social interactions, and all these different real-world things over the course of eons. The AI models have some of those same constraints. They're learned to solve human-like tasks in some way. Predicting text on the web has some social component to it, and it has some descriptions of the world in it, and all these things. And so, it's not totally dissimilar to the way that the brain is trained, probably, but it's not the same by any means.
And so, the leap from training a network to solve a task and then asking whether the internals of the neural network look like the true brain, to directly training the neural network on brain data, is the leap from a theoretical science to an empirical science. When we find that neural networks that solve a task look like the brain, that gives us a theory of the brain. It says the brain is as if it was optimized for the task that we find the neural network is good at. And that's really cool, and it's very exciting, because it tells us an underlying principle for why the brain is as it is, why the neurons are as they are. But what it doesn't do is optimize the connection directly between what we really find in the brain and the neural network. And that means there are some limitations on those models as correct brain models. Naturally, there would be.
Right.
And so, that leap that you're talking about is trying to close that gap: instead of focusing on an AI task and then seeing afterwards how good it is as a model of the brain, directly go for establishing that link and making the models as accurate as we can.
That makes a lot of sense. So, I now want to get back to that question I raised earlier. This is not something that exists as a whole as you're describing it. I think pieces of this are in process. People are working on things like generative models or foundation models for the visual system or other systems in the brain. What would it take to actually make this a reality? What would need to go into a model like this to make it useful?
Well, as you say, pieces of this are things that people have been working on. The AI field as a whole for the last 15 years has been building models of different parts of the brain. What's really needed to move that to this much more biologically direct link is to have a lot more brain data. The reason why we weren't doing the direct match to brain data from the beginning was exactly that we didn't have anywhere near enough data to do it. So, people may be familiar with the idea that training neural networks takes a lot of data. Large language models are actually trained on much more text data than any one person sees in their lifetime, by a large factor, like 10,000 times that or something. These models are quite data hungry. I don't think we're proposing to go to the scale of the internet with our brain data. That would be hard.
But we are now in a situation where the ability to scale up brain data collection tremendously is burgeoning, in a number of really exciting ways. And so, the idea is to collect that data. That larger scaling up of existing models, which have been at medium scale so far, is a really important piece of closing that gap: getting enough data from people that the model has a chance of capturing, as I say, not any individual, but enough about the common features of people that it can be used as a simulator for the kind of biomedical purposes that I mentioned earlier.
And we're talking about having a large cohort of folks with fMRI data and other kinds of data, and it would probably have to include what the people are looking at, what kinds of videos they're watching, what kinds of behaviors they're doing. So, I guess my question is: how comprehensive does this need to be to capture the essence of the algorithm that you're interested in?
That's a good question. We'll try to make it as comprehensive as we can. We want to basically collect data from high-resolution MRI recordings as well as another technique called MEG, magnetoencephalography, which uses a cap of magnetic sensors on the head and is a non-invasive technique for recording high-frequency temporal signals from the brain. So, between the MRI data and the MEG data, you can triangulate what the responses are pretty well. And as you say, the subjects, who would come in and volunteer to have their brains recorded, as many people do now, would come into this experiment and be recorded just looking at stimuli, doing basic visual, auditory, memory, attention, and motor tasks, things that are just everyday types of activities.
And of course, we'd like it to be as comprehensive as we can. So, we're targeting the things that seem easy about being human: understanding what you're seeing when you're looking at it, understanding the words that you hear, being able to move your arms to grasp things and open bottles and stuff like that, or remembering some basic things that you've just heard. Not the stuff that's hard about being a human. I don't think that our models will rival real humans in terms of their complex decision-making capacities for large life events or anything like that. So, we're not targeting that level of complexity, but at a somewhat lower level of complexity, we want to be pretty comprehensive.
But you could probably see something about how watching a sad movie makes you feel sad.
Right, exactly.
A lot of the connections between those things.
Yeah, exactly. So, the things that are a little bit more automatic parts of being human, essentially. I don't think we're going to be able to use this model to be a therapist or something like that.
Right.
But what we would be able to do is potentially use it to detect some of the core things that you would use to think about your everyday life.
I was struck by something you said in this panel discussion that we had at our Neurosciences Institute Symposium back in the fall, which was very much about this neuroscience AI intersection. You said something along the lines of sort of mathematics became this enabling language of physics in the 20th century. We realized that the world works in a mathematical way and you can express the physics of the universe through math. And you said that neural networks could be seen as potentially the enabling language for understanding the brain in the 21st century.
Yeah. So, it's like there's this famous line from somebody, I can't remember who said it but, "There's an unreasonable effectiveness of math for describing physics." The reason for that is somehow that physics is very low level and has to kind of work out over the femtosecond timescale. So, it's really got to be running, so to speak, very simple rules. Physics is like the application of very simple rules at all scales to the universe all at the same time. That type of very simplified setting lends itself to very precise mathematical analysis. And so, the tools of mathematics describe the actual operation of cosmological things in the universe, and that's really powerful.
And that tight link started with Newton, and it sort of worked its way through physics for centuries, and it reached its sort of zenith in the middle of the 20th century and was incredibly effective. You could sort of predict things ahead of time just based on mathematical regularities, and then lo and behold, they would exist.
They captured the algorithms of the universe, in a sense.
The algorithms of the universe, which are really simple because they have to operate like a trillion times a second, essentially. They're very simple algorithms. And so, mathematics is very good at allowing you to analyze those very simple rules. For example, you can predict on purely mathematical grounds that there should exist this special particle called the Higgs boson. And lo and behold, you can do a billion-dollar experiment and you can find it. That was a really incredible fact about the way that the world works. But that's been very elusive in biology, and that's in part because biology is a lot dirtier than physics. It's much more complicated algorithms that unfold over the course of hundreds of milliseconds, which sounds short, but it's a lot longer than the way physics operates. And I say hundreds of milliseconds because that's the timescale over which humans do visual processing.
But of course, over longer timescales as well, like the timescale of learning to do a new task, which could be minutes or days or weeks or whatever. And then over the course of evolution, as the system is built, and that can take eons. And so, the algorithms of biology are much more complicated. It's been a struggle to find a language that describes them well, but we've kind of come to see that in neuroscience at least, if not biology more generally, the language, the techniques, the tools, the mathematical structures of neural networks, and the machine learning idea that you train those networks to do things on dirty, unconstrained, real-world data but can still pull out a tremendous amount of regularity, are turning out to be a very good analogy for how learning and evolution happen to the real brain.
Neural networks have neurons in them, artificial neurons that are connected with artificial synapses, fed by artificial sensors, and acting through artificial motor actuators, effectively. That is a framework for building input-output processing devices with small, robust, highly failure-tolerant components like neurons, connected in complicated ways that are learned by getting the system to improve itself by looking at its behavior in the world. That is a really natural language for describing the constraints that shaped the real brain. And it's turned out that not only is that conceptually reasonable, but actually predictively accurate. The fact that neurons existed in the brain, that there were these little cells connected to each other through synapses, and that a neuron would spike and then cause downstream neurons to spike...
That was already discovered by Ramón y Cajal back in the late 19th century, just by looking at the way the brain looked. And so, when people started to build artificial neural networks in the mid-1940s, it was already clear that that was the inspiration, and it was a conceptual thing that was interesting and motivating for many decades. But in the past 10 or 15 years, it's become clear that if you take the artificial neural network idea and you do it at scale, it actually starts to be a quantitatively correct description of brain data. That is what you want out of a framework: not only should it be conceptually reasonable, but you should also be able to use it to make quantitative predictions. And that's the sense in which the neural network framework has been able to start describing the brain.
And that's, by analogy, a little bit like how physics was impacted by mathematics, although instead of a mathematical theory with theorems and formulae, it's more like a computational framework, where you build your artificial network, you check out what it does computationally, and then you can ask empirically, does it look right? So, it's both like and unlike the physics case. It's like the physics case in the sense that there's both a conceptual and a tight quantitative relationship between the computational or theoretical models and the brain data. But it's unlike it in the sense that instead of writing down equations that we can work through with pencil and paper in a notebook, we have to actually run it on a computer on the brain side.
The brain and the brain's networks are such complex systems, maybe even chaotic systems in the sense that perhaps they're formally unpredictable. Maybe you need a complex system to find out how one is going to work. We're almost out of time. I would love to hear you paint a picture for us, in just a minute or so, of what it would be like to run a neuroscience lab in 10 or 20 years, when we have these foundation models of the brain, these large brain models.
Right now, neuroscience is transitioning from being a single-lab-driven, pre-theoretical science to being a more mature science, where results from one observation strongly constrain others, where the scale of projects is beginning to go beyond what any one individual lab can do, and where there's a tight coupling between your theories, in this case neural network-based theories, and both the actual data that you collect and the experiments you would design to figure out what to do next. So, what I'm hoping will happen is that, let's say 5 to 10 years in the future, more neuroscience labs will be using shared models, and will be contributing data to the creation of those shared models in a more self-organized but collaborative fashion.
So, as opposed to each lab having its own theory and its own data and its own separate, siloed operation, I think what will happen is the emergence of more shared structures. If you look at the particle physics example, from CERN and the LHC and the discovery of the Higgs boson, that's a very coordinated effort at a much larger scale, where many people work together, where they have common theories, and where the data from one effort really strongly and quantitatively constrain the other work. And I think we will be moving toward that. I also think that the kinds of questions that people ask will be formulated more in terms of using the neural networks as hypotheses for the brain.
And so, the theories can be turned around to ask interesting questions about how you think the brain works, and to test them in a theoretical, predictive way first, as opposed to first getting the data and then making the hypothesis. I think it'll start to go the other way a bit. And then, maybe most excitingly for me, I think that neuroscience has been pretty disconnected from its ultimate applications for a long time. The whole purpose of neuroscience in the long run is to help people with their brains, effectively.
Right. We all have brains. We'd like them to work well.
Dan Yamins:
We want to keep them healthy, and we want them to work well. And so, I think that the actual use of neuroscience experiments and models to help do treatments and diagnoses and things like that is going to start to become much more real, and I hope that will be the biggest outcome, because that would justify the whole endeavor.
Nicholas Weiler:
Well, fantastic. Dan, this is such a fascinating conversation. I'd love to keep talking about this. I think we're out of time for today, but I'd love to have you back and keep talking about these issues. Thanks so much for joining us.
Dan Yamins:
Yeah, thank you so much for having me and for asking such great questions.
Nicholas Weiler:
Thanks again so much to our guest, Dan Yamins. He's a Wu Tsai Neurosciences Institute faculty scholar and an associate professor of psychology and of computer science at Stanford. To read more about his work, check out the links in the show notes. We'd love to hear from you. Send us a note at neuronspodcast@stanford.edu. And if you're enjoying the show, please be sure to subscribe and share with your friends so we can bring more listeners to the frontiers of neuroscience. From Our Neurons to Yours is produced by Michael Osborne at 14th Street Studios, with sound design by Morgan Honaker. I'm Nicholas Weiler. Until next time.