Summary: Continues the discussion of genes vs experience on cortical organization, and whether the cortex can change in adulthood.
Speaker: Nancy Kanwisher

Lecture 11: Development, Na...
[LOGO SOUNDS]
NANCY KANWISHER: So I'm doing another one of these big mongo lectures that takes a whole week, so this is really a continuation of last time. This is the outline for the whole week. We got through most of the stuff on face perception. I'll do some more today. We're right there. And we're going to go on and consider this question of, what's innate, and how do you wire up brains?
So first, a brief recap of main points from last time. What, if anything, is innate about face perception? We considered lots of different kinds of evidence, behavioral and neural. And the bottom line is, maybe not that much. So there's a few things that are sort of suggestive, like newborns have this bias to look at faces more than other non-face stimuli that are pretty similar-- schematic faces versus scrambled schematic faces. And that's suggestive. But then there's the possibility that that's just due to some very, very simple property of those stimuli, namely just having more junk on the top than the bottom, like eyes on the top than bottom. So what would have to be innate in that case would be just the simplest possible template, not even a whole face.
Similarly, we showed that there's actually very good discrimination of one face from another, even across viewpoint changes in newborn humans, and also in monkeys that were raised without ever being allowed to see faces. And both of those things suggest innate abilities to process faces, but in both cases, it's possible to argue that that ability isn't due to face mechanisms in particular. It's due to just general vision and shape perception.
Third, I showed you beautiful recent data showing that the face patches in monkeys don't develop if monkeys are reared without ever seeing faces. Which also suggests that maybe not that much is innate. So all that is fine, but then there's a big, wide open question that's left unanswered by all of that, which is, how do the face areas know to land right there in everybody, robustly? That really feels like something has to be innate about the brain, at least, to say where those things should go.
OK, so one possibility that I'm sort of skipping over, because it's a whole little universe, and there isn't an answer yet-- people are working on it right now, people in this building are working on it right now, but the gist of the idea is that maybe what's innate is some other kind of simpler selectivity. Maybe like selectivity for curved things. Remember how I talked about, as you go up the visual system, you start with selectivity for spots of light and then edges? Well, maybe up there, you're born with selectivity for curved things, or something like that, that is face-like enough that somehow that leads face selectivity to land there later. It's kind of vague because nobody really knows, but that's an idea.
Another possibility that we'll talk more about in a moment is that the reason your face patches land right there is that something about the long-range structural connectivity of that region to the rest of the brain makes that the right place. And so all of this is very actively being investigated, and nobody knows the right answer here. Further, I just want to mention that deep net modeling has just, very suddenly in the last year, become a very powerful way to approach these same questions from a different angle. So with deep nets, you can ask, what do you need to build into a network to get it to produce face patches? So that's a way of asking, in principle, in a network where you can actually control everything about its architecture and about the stimuli it sees, what are the necessary conditions for it to produce something like face patches? What do you have to train it on to get it to produce face patches, and to be able to recognize faces? And at the top level, why, computationally, does it make sense to have face patches in the first place? This is kind of the biggest question lurking in the background of this whole field. I'm describing all of these specialized mechanisms in mind and brain, but really, wouldn't it be nice to know why our minds and brains are organized that way, rather than just that they are? And that's a really hard question, and I think there's a real hope now that computational modeling may get us toward an answer sometime in the next decade, maybe even the next few years.
OK, so that's the overview. I now want to go into quite a long discussion about this notion that preexisting connectivity may be a major constraint in wiring up the brain. So first, we need to talk about, how would you look at structural connectivity in human brains? And I haven't really talked about this yet. The main method for getting some sense of this in human brains is to use another kind of MRI imaging. It uses the same MRI machine, but it's going to produce anatomical images that show us not those nice pretty pictures of brains that you're used to, but the direction of water diffusion.
And so the principle is pretty simple. Here is a picture of an optic tract. And what it's showing you is that, as you can see, an optic tract is a whole bunch of axons oriented like this, connecting retinal ganglion cells to what? Where do the retinal ganglion cell axons land going through the optic tract? [INAUDIBLE].
AUDIENCE: LOG?
NANCY KANWISHER: LGN. LGN. Lateral geniculate nucleus of the thalamus. So there's that fiber bundle. But the main point for now is that you can see that each of those fibers has a layer of fat around it, and the upshot of all of that is that water likes to diffuse more in this direction than that direction. That's the key idea of diffusion imaging. It tells you which direction water is diffusing most. Water is constrained by the fat layers around those axons, that myelin. And so you get diffusion more in this direction than orthogonally to it.
And so the details of the physics of this kind of imaging, which I'm totally not explaining, are such that what you get out is a picture at each point in the brain of what is the direction of maximum diffusion at that point. And so here's a little picture of lots of little vectors saying, at this point, water wants to diffuse this way, or this way, or this way, or this way. Everybody with me so far? So you get a whole bunch of little teeny vectors all through the brain showing you the orientation where water wants to diffuse at that point. And the idea is that's telling us which way fibers are going at that point. And we can therefore infer-- we can follow these things using a method called tractography, where we just follow those little vectors through the brain.
And that's what's happened here. At each point in the brain, you start at one point, and you just follow these vectors and see where they go. Does that make sense, sort of intuitively? I'm skipping over lots of details, but I want you to get the gist.
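To make the streamline-following idea concrete, here is a minimal sketch in Python of deterministic tractography, assuming we already have one unit vector of maximum diffusion per voxel. It is purely illustrative; real packages such as FSL or MRtrix handle interpolation, crossing fibers, and stopping criteria far more carefully.

```python
import numpy as np

def track_streamline(directions, seed, step=0.5, max_steps=200):
    """Follow the principal diffusion direction voxel by voxel.

    directions : array of shape (X, Y, Z, 3), one unit vector per voxel giving
                 the direction of maximum water diffusion at that point.
    seed       : starting position in voxel coordinates.
    """
    pos = np.array(seed, dtype=float)
    path = [pos.copy()]
    prev_dir = None
    for _ in range(max_steps):
        idx = tuple(np.round(pos).astype(int))
        # Stop if we walk outside the volume.
        if any(i < 0 or i >= s for i, s in zip(idx, directions.shape[:3])):
            break
        d = directions[idx]
        if np.linalg.norm(d) == 0:          # no coherent fiber direction here
            break
        d = d / np.linalg.norm(d)
        # Diffusion vectors have no intrinsic sign, so keep heading the same way.
        if prev_dir is not None and np.dot(d, prev_dir) < 0:
            d = -d
        pos = pos + step * d
        path.append(pos.copy())
        prev_dir = d
    return np.array(path)

# Toy volume in which every voxel's preferred diffusion direction points along x:
dirs = np.zeros((10, 10, 10, 3))
dirs[..., 0] = 1.0
print(track_streamline(dirs, seed=(1, 5, 5))[:3])
```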
OK, so these beautiful pictures that you may have seen before are diffusion tractography. They show you our best guess of the long-range connections between one part of the brain and another based on diffusion tractography. And on the theory that you should wear your data whenever possible, here's mine from my lab. Whoops, I'm tangling it here. So-- I love these things, they're so beautiful. One of my post-docs who's our tractography whiz gave me this beautiful scarf. Isn't this nice?
And so you can see even more clearly here that this is a cross-section through the brain in this axis right here. And so these big green guys are the connections that go from the back of the head down the temporal lobe, down the visual pathway that we've been talking about all along. OK, that was gratuitous. I just thought it was fun.
OK, so tractography is cool. It makes gorgeous pictures and gorgeous scarves. And it works really well to discover big fiber bundles. There are lots of parts of the brain I showed you with that gross dissection picture last time, that there are big chunks of white matter where lots and lots of parallel fibers go like this. And tractography works well to find those. You can really see those very nicely with diffusion imaging.
However, it's not so hot for discovering finer connections. It's better than nothing, but there's a lot of ways in which it fails. So for example, if you have water-- if you have fibers crossing in some part of the brain like this, you'll get diffusion in this direction and this direction, and the tractography algorithm will be finished. It won't know whether to keep going straight or whether to turn. So that's just one of many reasons why diffusion tractography is lovely, and wonderful, and the best we have in in-vivo brains, but it's not so great. Anyway, it's all we have, so we use it.
OK. So we can use tractography to ask, for example, is the long-range connectivity of the fusiform face area distinct from the long-range connectivity of its neighbors? In other words, on this idea that that patch of cortex gets wired up to be a face area, somehow because of the connectivity to and from that region to other parts of the brain, then we should predict that that region should have different connectivity than neighboring cortex. Otherwise, connectivity isn't enough of a signature to tell us where to put a face area. I'm seeing blank looks. Is this not making sense? OK. Just butt in and ask questions if I'm not making sense.
OK. So question is, do these connectivity fingerprints predict the location of functional regions, first in adults? If we don't see it in adults, then the jig's up. So let's start with adults.
OK. So the way that you can do this is, for each voxel in the brain-- this is a big one, so you can see it. It would actually be a couple millimeters, wouldn't show on this picture. What you do is you follow that tractography and you say, oh, look, it went there, and it goes there, and it goes there. And you tally how often, when you start here, you land in each of a bunch of different big anatomical chunks of brain. That gives you a description of the connectivity fingerprint of that voxel. How strong is its connection to each of these other remote regions in the brain? That's what I mean by a connectivity fingerprint.
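To make the fingerprint idea concrete, here is a hedged sketch in Python of how you might tally, for one seed voxel, what fraction of its streamlines end in each of a handful of big anatomical targets. The target names and counts are invented for illustration, not taken from any real atlas or dataset.

```python
import numpy as np

# Hypothetical large anatomical target regions.
TARGETS = ["frontal", "parietal", "lateral_temporal", "occipital", "subcortical"]

def fingerprint(endpoint_labels):
    """endpoint_labels: list of target-region names, one per streamline from the seed voxel."""
    counts = np.array([endpoint_labels.count(t) for t in TARGETS], dtype=float)
    # Convert raw counts to proportions, i.e. how strongly this voxel connects to each target.
    return counts / counts.sum() if counts.sum() > 0 else counts

# e.g. 100 streamlines started from one fusiform voxel (made-up numbers):
labels = ["lateral_temporal"] * 60 + ["occipital"] * 30 + ["frontal"] * 10
print(dict(zip(TARGETS, fingerprint(labels))))
```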
So now the question is, can you use this connectivity fingerprint to predict what the function of that voxel is? That is, is the connectivity distinctive enough that, just based on diffusion data, we could say, what does that voxel do? If the fusiform face area has a whole distinctive connectivity fingerprint, then we should be able to predict it. Does this make sense?
OK, so that's the question. And there's a lot of math, which I'll skip. I'll just give you the gist. So what we're trying to figure out is, is the fusiform face area distinct from its neighbors in its long-range connectivity? That's the question.
And, in fact, it is. And we can show that. Again, I'm skipping over some details, but here is a recently published paper that shows you, in a format that should be familiar by now, functional MRI activation for faces versus objects. Fusiform face area, that's probably occipital face area, another region we'll talk about later. The usual face patches. Again, this is an inflated brain, so the dark bits are the bits that used to be folded up inside the sulcus until they were mathematically inflated.
So that's the standard thing we've been looking at. This is the prediction based on diffusion tractography alone in the same subject about where the face patches should be. So very roughly, what you do is you take some other subjects, and you train a model on their connectivity fingerprints-- it's kind of like MVPA, but you train from diffusion data, and you try to predict face selectivity. And then you take the diffusion data from a new subject, and you predict where that face selectivity should be, and there's where it's predicted for the same subject, and it's pretty damn good. Did everybody get the gist of what I just went through? You don't need to remember every detail. The key idea is, is there a systematic relationship between the long-range connectivity of a voxel and its function, its selectivity? And this says yes for faces. OK?
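Here is a rough sketch, in Python, of the train-and-predict logic just described, using stand-in random numbers rather than real data, and ordinary linear regression rather than whatever model the actual paper used: fit a mapping from connectivity fingerprints to face selectivity in training subjects, then apply it to a new subject's diffusion data alone.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)

# Each row is one voxel's connectivity fingerprint (pooled across training subjects);
# y is that voxel's measured face selectivity from functional MRI. All values are random stand-ins.
n_train_voxels, n_targets = 5000, 20
X_train = rng.random((n_train_voxels, n_targets))
y_train = rng.random(n_train_voxels)

model = LinearRegression().fit(X_train, y_train)

# New subject: diffusion-based fingerprints only, no functional scan needed.
X_new = rng.random((800, n_targets))
predicted_selectivity = model.predict(X_new)   # predicted face-selectivity value per voxel
print(predicted_selectivity.shape)
```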
So that's the case for faces. That tells us that in adults, those face regions have distinct connectivity. This is the same thing. I just shrunk it so I could fit in other stuff. Here is the same thing for scenes. Functional selectivity for scenes (the PPA and RSC) measured with functional MRI, and the predicted functional pattern for the same subject from tractography alone. OK? Do you have a question?
AUDIENCE: Oh, no.
[INTERPOSING VOICES]
NANCY KANWISHER: It's pretty good, isn't it? Yeah, yeah. You might be thinking, OK, I was just dissing diffusion tractography. It sucks. It has all these problems. It has all these ambiguities. So how could it work so well? That's a good question. I don't know the answer to that. I think in part, it's because you're predicting based on all of these different connections. So even if half of them are wrong, you can still get some predictive power out of it. That's just my guess. OK?
OK, so it works pretty well for scenes, and it works pretty well for body selectivity as well. Functional MRI prediction from connectivity. So that's cool. So that says, these all have distinct connectivity fingerprints, but now this is all done in adults. And remember, the way we got into this long shaggy dog story is to ask what these long-range connections, what role they might play in development.
Remember that I said last time that most of the long-range connections of the brain are present at birth. So that suggests that maybe these connections are also there at birth. And it suggests that maybe indeed those connections could play a role in development. At least they're probably there. They're in a position to play that role, if that's actually what happens.
So all of this brings us to the case of rewired ferrets. What? What am I talking about? They're cute, aren't they? They're also very good experimental animals to address just this question. And Mriganka Sur in this department did this very important paper a while back where he asked whether connectivity instructs functional development. That is, whether the connectivity present at birth is sufficient to determine the function of the region that has those connections.
And he did this by manipulating connectivity. So if you want to ask, what is the causal role of x, you have to manipulate x. So we've talked a lot about this in this class. Functional MRI, wonderful. You see activity. You have no idea what its causal role is until you mess with it. For example, by electrically stimulating the brain.
Similarly, connectivity may be present at birth, and we may be able to use it to predict where the functions land. But that doesn't tell us that it's playing a causal role. The way to find out if it's playing a causal role is to change it and see what happens. And that's what Mriganka Sur and his colleagues did.
So they used ferrets because they're born very prematurely. And so what that means is that you can operate on them surgically right at birth before they have any visual experience. They haven't opened their eyes yet. And you can-- turns out-- reroute some of the connectivity.
OK, so this is a diagram of some bits that should be familiar. The retina going to the lateral geniculate nucleus and then up to V1. Also true in ferrets. In addition, we have primary auditory cortex that we'll talk more about in a few weeks. So just like V1, but for hearing.
A1. A1 is also connected to another nucleus in the thalamus. This one is called the medial geniculate nucleus. And the signal goes from there up through a complicated chain, eventually-- oh, sorry, it goes this way. Thalamus up to A1.
So that's the basic wiring of an adult ferret. And so what Sur and his colleagues figured out how to do is redirect some of those connections by surgery at birth. So this is a wiring diagram of the same thing shown here. Retina, LGN. This is V1, it's also called 17. And here is medial geniculate and auditory cortex.
And so what they did was to surgically knock out a few of these connections here in the just-born ferret pups. And what happens is if you knock out this connection here, the fibers that start this way get rerouted, and you end up with a ferret that's wired up like this. The important part of this is this rewired ferret has a connection between their retina and medial geniculate nucleus that goes to primary auditory cortex. So we're taking visual input at the periphery and wiring it up into the auditory system.
And the point of all of this is now, primary auditory cortex in this developing ferret will be getting visual input. And so if the input were sufficient to determine the function of that region of cortex, then what should we find in these rewired ferrets? What should happen in what would have been primary auditory cortex? What should it do? Christine.
AUDIENCE: [INAUDIBLE] visual--
NANCY KANWISHER: Yeah! It should behave like visual cortex, absolutely. If everything's determined by the inputs, and we change the inputs, it should behave like visual cortex. Well, that would be freaking crazy, wouldn't it? I mean, it's miles away in the brain. It's a totally different part of the brain. That would be nuts. But that's what happens. It's pretty amazing. This is a really important study. OK.
All right. So what you find, first of all, is that primary auditory cortex in the rewired ferrets responds to visual input. That's cool. But you might say, OK, you wired visual input in there. Of course it's going to respond to visual input. So maybe that's not too cool, but not too surprising.
But the next part is really cool and really surprising. Remember how I said that in normal visual cortex-- in humans and monkeys, and also ferrets-- you get these orientation columns. Now, remember, these are-- what this shows is that as you move across the cortex in V1-- we're now talking visual cortex here-- in visual cortex, in normal mammals, you get this smooth progression of orientation selectivity as you move across the cortex. And that's what's shown here. Everybody with the program? OK. So that's normal primary visual cortex in an adult animal. What do you think primary auditory cortex looks like in the rewired ferrets? Damn similar.
So not only do you get visual responses in what would have been auditory cortex when you rewire, you get orientation columns. You get this really fine-grained structure that everybody thought was something specific to visual cortex. Well, this says that visual input is sufficient to produce orientation columns in a part of cortex that otherwise never would have had them. Does everybody see how mind-blowing this is? OK.
So that's pretty cool, but now we get to the really cool question. When these neurons are active, does the ferret see, or do they hear? OK. It's rewired. It's getting input from the retina, but there's neurons in what would have been primary auditory cortex now responding to visual input. What does the ferret think is going on? Does he say, oh, that's sight, because he's learned that visual input means that's sight? Or does he say, I hear something, because that's auditory cortex. Everybody in the grip of what a cool question that is?
OK. And so it could go either way. There's really no way to tell in advance. It depends on how you read out the information in that piece of cortex. When we do MVPA, we sit god-like by, and we look at a patch of brain, and we decode what's in there. But really, what's happening in the brain is some other part of the brain is getting input, and decoding, and interpreting it. And so the question is, what do later parts of the brain make of this? And the answer is the later parts of the brain learn that that's visual information, and the ferret reports seeing stuff, not hearing it.
Now, you may be thinking, how the hell do you ask a ferret if he's seeing or hearing? What you do is you use non-rewired parts of the same ferret's brain. Actually, I forget if it's the other hemisphere or a different part of the visual field that doesn't get rewired. So you have a gold standard, where normal vision is working and normal hearing is working in the ferret, and you train him: press this button when you see and press this button when you hear, and it's unambiguous. And then once he's trained, you stimulate those A1 neurons and you ask him what's going on, and he says he sees something. OK?
All right, so this is one of the true classics. OK. So this means that A1 in this case, primary auditory cortex, is instructed by its connectivity and by the experience that comes through that connectivity to shape its function. Everybody got that? All right.
So both experience and connectivity can determine cortical function, at least in ferrets. What? Yes, question.
AUDIENCE: I have two questions. So first of all, what does their V1 look like after this rewiring, and also, can they hear things, and if so, where is it?
NANCY KANWISHER: Yeah, absolutely. OK, so if you look at the diagram, there is additional-- well, actually, it's not in the diagram. But there is additional input that's not shown here. So they can hear things through maybe the other hemisphere, I forget. They can hear.
And they can see, because notice-- that's right. OK, we blocked off area 17, but these guys are higher-level visual areas. So they can see both through their non-rewired hemisphere and through some other bypassing connections to other parts of visual cortex. Probably, both of those are going to be affected. Your vision is going to be different if you bypass V1. But there will be at least some visual information.
OK, so that's ferrets. Again, in animals, you can do invasive studies and really do the strong manipulation, a strong test of a causal role, and this is a classic example. Of course, we can't rewire humans-- or we could, but it wouldn't be nice. But really, we want to know, how does all that stuff get wired up? Are these regions also-- is their function determined by the connectivity present at birth, and by the experience that those regions have?
OK. Well, we can't do controlled rearing studies in humans. We can't rewire their brains. But we can be clever and smart and think of other cases. So here's an important test case. The important test case is the case of reading.
Why reading? Well, one, we all spend a lot of time doing it. And two, humans have only been reading for a few thousand years. And that's not long enough for natural selection to have crafted an innately-specified circuit just for reading.
So that means that if we did find a patch of cortex that responds selectively to visually-presented words, or letters, that would suggest that for that case at least, experience was sufficient to wire up, to determine the function of that region of cortex. This is all very hypothetical. Everybody got the idea? OK.
Now, notice, this does not apply to hearing words. People have been hearing words for hundreds of thousands of years, perhaps millions. And so that's plenty of time for special purpose circuitry, and that special purpose circuitry exists and we'll talk about it in a month or so. But now we're talking about the case of visual word recognition-- this recent cultural invention of humans. So that's why it's a special case, because we know that's too recent to be innate. And so if we find a selectivity, it can't be innate. All right? So that's what I just said.
So do we have such a thing? Well, how would you test for it? What would you do? Joseph, what would you do? You want to know if there's--
AUDIENCE: I guess I would show them words, and then show them not words, and see--
NANCY KANWISHER: Yeah. It's not rocket science, guys. We just keep doing the same damn thing. Exactly. Right.
So start by-- here's what we did. We showed people visually-presented words like that, and we showed them line drawings of objects. And when we did that, we found that in most subjects, there's a tiny little patch of the bottom of their left hemisphere right near the zones we've been talking about, near face selective and other regions on the bottom of the brain. But that tiny little patch responds significantly more to words than pictures.
Now, we won't do this now, but you can do it as a thought experiment. What are the alternative accounts of that activation? Has this shown that that region is selectively involved in reading? Of course not. There's a million differences between-- oh, come on-- these and those. How bright they are, how big they are. It's a million differences. And so to get serious about it, we have to do the same game that we've been playing all along in this course. This is like a first whack at it. You find something, now we have a candidate. But if we want to get serious, we've got to test some other conditions to see if that's really for real. OK? All right.
So here's what we did in my lab when we did this a while back. So first of all, this is left-out data. Once you find that region-- remember, if you're trying to characterize the function of a region, I talked briefly about this, a good way to do it is to run a localizer to find that region in each subject. Now we found it. Now we have those voxels. Now we collect some new data that may be a lot like our localizer. It doesn't matter. We collect some new data and we look at the response. And that just puts us on stronger statistical footing.
OK. So here is time going this way. This is something called an event-related design, where you just present a single stimulus, and then wait, and another stimulus rather than a whole bunch of them mushed together in a block. And then you average over many, many repetitions. And so this is the response over time-- it's seconds, it's really slow-- to words and line drawings in that region. So this is just replicating what I showed you before. It's showing you what the actual selectivity looks like in the real data, not just in a significance map.
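As a concrete illustration of the averaging step in an event-related design, here is a tiny Python sketch: cut a fixed window of the slow, blood-flow-based signal out after every stimulus onset of one condition, and average across repetitions. All the numbers, onsets, and condition names are invented for illustration.

```python
import numpy as np

window = 8                    # samples of signal to keep after each stimulus onset

def event_average(timecourse, onsets):
    """Average the response across repetitions of one condition."""
    segments = [timecourse[t:t + window] for t in onsets
                if t + window <= len(timecourse)]
    return np.mean(segments, axis=0)

# Fake fMRI timecourse and fake onsets for 'words' vs 'line drawings':
signal = np.random.default_rng(1).random(300)
word_onsets = range(10, 290, 40)
drawing_onsets = range(30, 290, 40)
print(event_average(signal, word_onsets))      # mean response over time to words
print(event_average(signal, drawing_onsets))   # mean response over time to drawings
```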
Why is this thing taking six seconds to respond? This is stimulus onset out there. Yes.
AUDIENCE: That's the time between blood flow?
NANCY KANWISHER: Yeah. Remember, the signal we're looking at is based on blood flow. The neurons all fired right here, but it takes a while to get the blood flow to change. That's why it's delayed. Exactly.
OK. All right. So what else are we going to test? Well, you can do lots of different things. We just tried lots of things. We said, OK, let's have other things that are symbols but that our subjects can't read. So we tried Chinese characters, low response. We tried digit strings. Pretty low response. That's pretty remarkable, because words and digit strings are pretty similar in how we use them and what they look like. So that's pretty good. We tried consonant strings, like this, that you can't pronounce. And we got the same response.
And this is important. It tells us this region is not a word region. Instead, it's something about recognizing letters. But for the purposes of the current argument, that's OK. It's still something that has no basis in human evolution, and so if we find selectivity for letters that are presumably used in the process of reading, that must have come from experience. OK? What else did we do? OK, that's what I just said.
Now, I submit that this is a pretty good argument that that region must have been wired up by experience. But you could niggle. You could say, well, there are more straight edges with the words and consonants. The digits are curvier, or whatever. You could make up some story about how that isn't necessarily selective for letters and words, and therefore, maybe it's not necessarily wired up by experience.
Further, who knows? Maybe everybody just has that weird selectivity in there even if they never learned to read. So it would really be nice to make a stronger case. And what we did was we couldn't find people in Cambridge who couldn't read, who didn't have other things going on, but we could find people who did read Hebrew. And we had-- where's my Hebrew data? All right, hang on. OK, right.
So here are our non-Hebrew readers. This is funny. This is an old graph. It's not so impressive-looking. This is-- I forgot to switch out our newer data. OK, so what we found is in people who don't read Hebrew, the response was lower to Hebrew than to words. Looks like it's almost as high, actually. When we ran more subjects, it's actually quite a bit lower.
Nonetheless, when we ran people who read both English and Hebrew, the Hebrew response is higher. And that nails the case that it's actually that individual's experience that determines the selectivity of this region. It depends on what orthographies you know. If you know how to read Hebrew, you get a high response. If you don't, you get a lower response. Everybody get that this pretty much nails the case?
OK, so where are we? All of this was to say, do we ever see selectivity in the brain that can't be innate? And I submit to you, this is selectivity in the brain that can't be innate, that has to be learned. And in fact, our data show that it depends on the subject's experience.
OK. So-- good. So yes, we have such a thing. It's called the visual word form area. Now, what about this idea that connectivity of that region is playing a role-- it's in a very systematic location. It's that little orange thing right there. Yes, question.
AUDIENCE: Question. I'm just trying to think through the alternative. The brain has to be shaped by experience, otherwise you would never learn anything, right?
NANCY KANWISHER: Absolutely.
AUDIENCE: Even if this didn't show that difference, it would just mean the difference is something you're not measuring.
NANCY KANWISHER: Absolutely, absolutely. You wouldn't be able to understand the sentence I'm saying right now without changing your brain, because by the time you get to the end of the sentence, you need to remember what I said at the beginning of the sentence, so there's little things structurally wiggling around in your brain and changing synaptic connectivity online all the time or you wouldn't be able to think, let alone remember. Absolutely.
So the question here is more specific. It's not whether the brain changes with experience. Absolutely, it does. It's whether experience can explain these particular selectivities and where they came from. I'm glad you asked that question.
OK. OK, so now, we've just argued that the selectivity of that little dot, at least, must be due to experience. Doesn't tell us about the others, but tells us that one must be. And now we're asking, can its selectivity-- can that location be determined by the connectivity of that region? So to get to that, we use diffusion tractography. And the hypothesis here is that it's these long-range connections that determine where those functional regions land. This is me with a bunch of functional regions in my head. Doesn't matter which ones. We're just asking the general question.
And so I'm going to skip over all the details, but just give you the gist of a recent paper that we published looking at this. We asked-- we found the visual word form area. That's right down in there, about there, left hemisphere. And we scanned kids at age eight and age five, same kids. Age five, then age eight.
Here's the age eight data. These kids have learned to read in between the two scans. And here is the response of their visual word form area to words, faces, objects, and scrambled objects. Nice and selective, just like a good visual word form area should respond. So it's there by age eight.
What we then do is we take the data in the same kid across those three years, align the data, and say, what were those voxels doing in that kid at age five before they learned to read? This is another way of showing that it's experience that was necessary. And boom, they were not word selective. They shouldn't be. These kids hadn't learned to read yet. But it's still kind of nice to be able to show that. All right?
But now, the hypothesis is that it's the connectivity at age five that predicts where this region is going to land. So we use that same rigmarole that I showed you earlier for adults, where we used just diffusion data to predict where the functional region will arise. But we use the diffusion data from five-year-olds to predict where that region would arise when the kids were eight. And it turns out you can do that. You can predict actually fine-grained individual differences in exactly where the visual word form area will arise at age eight from that same kid's connectivity at age five.
So does everybody see how that fits one of the necessary conditions for this idea that the locations where these things land later in development are determined by connectivity that exists before? Now, our study was done in humans, so we didn't have a causal test. All we can say is that the connectivity was there before, and that it's sufficient to predict where the region lands. But we don't know if that's actually how it worked. That's how it is, working with humans.
But if you put it together with the ferret data, it's pretty suggestive. All right? Yeah.
AUDIENCE: Where is it connected to?
NANCY KANWISHER: Ah. Very good question. I'm being very vague, connectivity. This is a long, complicated issue. Most likely, it's connected to language-y areas, which we'll talk about in a month or so, that are out on the lateral surface and up in the frontal lobe. There are papers claiming that it's connected to language-y areas. But I'm kind of a methodological hard ass, and I don't quite believe those data. I mean, I think they have a medium case, but they haven't nailed it. I've tried to nail it. It's hard for all of the reasons that this method that I was complaining about-- I'm complaining about it because I'm bitter about it. I want this method to be better. I want to know what those actual structural connections are.
I wish we could put a seed in the visual word form area and follow those tracks and say not just there's enough of a fingerprint that we can predict its function, but here are the exact connections. And it's, mm, not quite up to that task, in my view. It's a big bummer. I've wasted a lot of the last year trying to get that method to work, and I haven't quite given up yet, but I'm close. It's OK. It's just not good enough to answer those questions, which is very frustrating because they're pressing questions. Yeah.
AUDIENCE: Can I ask one more question?
NANCY KANWISHER: Yeah.
AUDIENCE: So people who are blind shouldn't have this region active.
NANCY KANWISHER: Ooh, very interesting question. What do you think? People who are blind read. What do you think?
AUDIENCE: So the connection between here and the visual system for the blind people goes from that region and touching, since they're-- I don't know.
NANCY KANWISHER: Yeah. Yeah, it's not obvious. It's not obvious. There are several papers-- which I was going to put in this lecture and I just couldn't fit. But there are several papers that argue that tactile Braille reading in congenitally blind people activates that same region. They're pretty good papers. I sort of believe it. I have-- as I say, I'm a little bit of a hard ass, so I'm not 100% convinced, but they're pretty compelling, and it's a very interesting question. And it's a whole saga. It's so interesting. I'm going to try to incorporate more of this in a later lecture, because I didn't fit it in here. Yeah.
And the idea would be, if you had to guess, what will those connections be that drive that? Certainly not visual input. They're not getting visual input. So it would have to be input from language-y regions or something like that, that would also be present in blind people. See what I mean? OK.
All right. Anyway, all of this just to say that it looks like the visual word form area is kind of special in the human brain because, one, it shows us that at least one region gets its selectivity from experience, and two, because it develops later, it gave us this opportunity to ask if the connectivity was present before the function as a sort of weak test of this hypothesis that connectivity determines function.
All right. Boom. All right, so where are we? This really is a shaggy dog story lecture. OK. So we started off by saying a lot of the basic structure of the brain is innate. Most of the neurons in your brain, you had at birth. Most of the long-range connections were present at birth. They weren't yet myelinated, but they were there.
We've argued that some of these selective cortical regions appear to depend on experience. For example, the face-deprived monkeys don't have face patches. And in the ferrets, auditory cortex comes to respond like visual cortex when it has been rewired to get visual input. And further, I've argued that the selectivity of the visual word form area can't be innate, and yet it arises at a consistent location, possibly because of the long-range connections of that region.
So all of this looks very experiential, aside from the structural stuff that's present at birth. So is Kant toast? I started last lecture by saying he was reacting against the empiricists, saying not everything is derived from experience. We need to have a priori conditions of cognition. Remember, he said, "space can be given prior to all actual perceptions, and so exist in the mind a priori. And it can contain, prior to all experience, principles which determine the relations of these objects." So he's basically saying we have an innate representation of space. And I've just been giving you all this evidence that in all the other cases, experience seems to be playing the major role.
So is it all over for Kant? Well, actually, Kant was talking about space and time primarily, and we haven't considered that yet. So let's get back to space. Remember these spatial representations that I talked about in the rodent brain. Four different kinds of neurons that are present in adult rodents that play wonderfully different roles in navigation. Remember, there are place cells that fire only when the rodent is in a given known place in his environment. There are direction cells that fire only when the rodent is oriented in a given direction in his environment. There are border cells that fire only when the rodent is near a border of the space he's in, like right now, I have cells that are firing because I'm next to this border of this space that I'm in, and Anna does not have any of those cells firing because she's in the middle of this space. And there are grid cells that have this amazing property of firing in little micro place fields spaced evenly in a hexagonal array.
OK, so all of this apparatus that I talked about last time that seems to be playing a role in your concept of where you are, where you're oriented, and the space around you, if we had to take some representation of space that Kant might have been talking about, this would be it. So is this stuff innate?
Well, happily, all this work was done originally in rodents. All the most detailed work was done in rodents, so we can ask that question, because it's an animal. OK? All right.
So what the Mosers and their colleagues-- the husband-wife team who got the Nobel Prize in 2014 for their work on the grid cells-- and O'Keefe and their colleagues in London, who discovered place cells in the first place-- two different groups simultaneously realized what a huge, big, fabulous question this was, and they both did the experiment at the same time, and they published it together at the same time about four years ago in-- I forget-- Science or Nature. Big event in the field. So they both realize the same thing.
The way rodents grow up, they hang out in a dark nest. They're very premature at birth, and they can't really do much. They can't move around. All they can do is turn their head toward a nipple and suck milk. That's kind of it.
And so there they are, in the nest, in the dark. Their eyes don't even open until the end of the second week of life. And that's also when they first emerge from the nest, and first have any experience navigating, any real experience of space. And so we can ask which of those cells are present at that very first experience. And it turns out that-- sorry, this is a little hard to see. There's a light yellow overlay. This is the window when they first open their eyes and leave the nest, between postnatal day 12 and 14, the end of the second week of life.
And what you see is the head direction cells are present immediately, as soon as you can first collect neurophysiology data from these newborn rat pups. They're there right away. Place cells, you can get them pretty early, and grid cells soon after that.
So this suggests that in the rodents, at least, their representation of space as entailed in the properties of these neurons is largely innate. So just like Kant said way back in the 1700s. Everybody get this? It's pretty cool. It's a rare opportunity where you can just take a huge, big philosophical question and, boom, answer it with data. Yeah. Awesome.
OK. Yes.
AUDIENCE: Wait, sorry--
NANCY KANWISHER: I'm sorry, is it Martin? Yeah.
AUDIENCE: Sorry, are you saying that it's innate or that it's learned?
NANCY KANWISHER: Innate. Innate.
AUDIENCE: --takes time--
NANCY KANWISHER: Because-- oh, yeah. OK, important point. OK, we don't know before then whether they existed. They were in the nest. You can't really do neurophysiology on the rodents in the nest. The point is, none of the relevant experience has happened before then. They haven't opened their eyes, they haven't navigated. So none of the experience that could be relevant for navigation has happened before right here, on the very first time that you can test it, and the very first time that they could possibly be in the world, seeing the world, navigating, they have them.
But what you point to is an important point. I mentioned this briefly last time, but it's really worth repeating. Innate-- I guess the word "innate" can be used different ways, but what I mean by innate here, the sense of innate that's relevant to the big questions, is whether it's specified at birth, not whether it exists at birth.
Remember, I gave the case of puberty. Puberty happens way after birth, but it's not the result of experience. It's part of a genetic program. It's just going to happen. I mean, I guess if you don't eat anything, you'll die and then it won't happen, but within broad latitude, it's not the result of experience.
And so you can have maturation on a biological autopilot that continues independent of experience, and that's the relevant kind of innate. I realize I was probably confusing. Innate for this purpose doesn't mean present at birth. It means determined at birth, essentially, independent of experience. Good. You guys are asking good questions and it's helping me be clearer. OK.
OK, so that's cool. That says that those cells are all present very early on, and presumably independent of experience. What about reorientation? Remember, reorientation is this cool thing that I carried on about for a long time because it's so interesting. Reorientation is this particular aspect of the navigation system. It's been studied behaviorally in rodents, in young humans, and in human adults. And lots of other animals, actually.
And the key thing about reorientation is this is how an animal gets their bearing when they're disoriented. And the key finding is they use the shape of space around them. They don't use landmarks to reorient themselves. That's the key finding. This is all stuff I talked about before.
And the evidence that animals use the shape of space to reorient is, when you have shown a rodent that there's goodies in that corner, the left side of the short wall, essentially, and then you disorient him and put him back in the box, he goes 50/50 to those two corners, showing that he's learned something like the food is on the left side of the short wall. Not in words, presumably, but some mental language that holds that information.
OK, so that's using the shape of space for reorientation. Is that ability to use the shape of space-- this is a different sense of space than head direction cells, the shape of space around you-- is that present independent of experience? Well, again, we can't test that in humans because we can't deprive humans of experiencing the shape of space around them. Was there a question? No? OK, all right.
But we can test it in animals with something called controlled rearing that I've talked about before. So again, we can't test this-- even in animals, it's hard to test at birth. Lots of animals can't navigate very well at birth, right? So we want to test them after birth, but we don't want them to have the relevant experience, because that's what we're asking, is would this ability be there even without the relevant experience.
OK. So the answer to all of this, the way around this is to use controlled rearing. Just like Sugita did with the face-deprived monkeys, and just like our Carl also did with face-deprived monkeys-- the behavioral study and the functional MRI study. But this will be a controlled rearing study in a different organism, and it's pretty cute. It goes like this. This is a group in Italy that has a whole lab that uses this paradigm, and it's very, very powerful. So what they do is they-- again, I just said this. The whole idea is raise an animal without the relevant experience, figure out if the ability arises anyway.
So in this case, what they do is they get fertilized eggs, chicken eggs from a local hatchery that's conveniently near their lab. They bring those fertilized eggs into the lab and put them in an incubator, and they hatch them in darkness. Then for the first few days, you get a nice little chicken. It's in the light here, but that's just so you can see it. It actually hatches in the darkness, so there's no visual experience.
Then you put them in cages of different shapes. Either a nice rectangular shape like this that would be relevant for reorienting, or a circular space like that that has no geometric cues because it's symmetrical. So they spend their first three days of life in one or the other of those containers.
You then, in order to get a behavioral result out of them, you have to use their natural behavior, which is that they imprint on mama bird. And you may know that imprinting is pretty non-specific. Baby birds will imprint on nearly anything that moves. So they take a big, red plastic object, and they dangle it in the middle of the cage, and little chicks follow the red object. That's mom. That's what they do.
So then you can use that behavior to test their ability. And so you get them in the groove. You show them mom, and mom disappears behind an occluder. And then you let the chick go follow mom, which the chick wants to do.
So they do a few trials like that. They've imprinted. They're going to follow mom. This gives us a way to ask the chick, where do you think mom is? And that gives us a way to ask, what cues are you using to reorient, even though you've been raised without geometric information.
All right. And the thing I really love about this-- oh, I guess it's on a later slide-- is that after you do the whole experiment, you take one or two trials on that chick, you're done with that chick, they have the relevant experience, you give them back to the hatchery and the hatchery does their thing. So it's just like a really nice little symbiotic science-farming enterprise.
OK, so here's actually what they do. So here's how the re-orientation test goes. After this chick is raised in one of those two environments-- the circular one with no geometric information, or the rectangular one with geometric information-- and they've learned to follow big red plastic mom, you then put the chick in this box here. The chick is in there in this wire mesh that holds them in there so he can't run around. He's in this rectangular space, and there are four symmetrical occluders in the corner.
You then take the red object-- mom-- and hide it behind one of the blue panels in full view of the chick. So now the chick knows where mom is. Now you bring down an opaque cylinder around where the chick is. And while the opaque cylinder is down, you rotate the box 90 degrees. So now, the chick has no way to tell things are rotated, I'm disoriented, what's what, how do I know where to go. So this is reorientation in a newly-hatched chick that's been reared under controlled conditions.
All right. So now, once you rotate the box, then you lift up the opaque occluder, and the cage, and you see where the chick goes. Everybody get this? It's a little bit convoluted. But it's just a version-- it's a chick version of the same reorientation task we've been talking about all along. You do 16 trials, and then you give the chick back to the hatchery.
OK. So here's what happens for chicks that are raised in that rectangular cage. They have geometric experience during those first three days of life. So this is kind of a control case. And what you find is that when you've hidden mom in a corner that is on the right side of the short wall, they go preferentially to the two corners consistent with that more than the other two corners, consistent with the idea that they can use geometric information to reorient themselves. They're not perfect, but they're way better than chance. Does that make sense? They go to the two corners that are consistent, showing that they can use the geometric information.
But these are the chicks that were raised with the geometric experience. What about the chicks raised in the cylinder, without geometric experience? They do the same thing. And this is the first time they've experienced-- this testing condition is the first time they've experienced any space that isn't symmetrical, any place where they could possibly use geometric information to orient, and they do it on the first trials. Everybody got that? So that tells us that this ability to reorient based on the shape of space when you're disoriented doesn't require experience with the geometry of space.
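To see what "way better than chance" means quantitatively, here is a small sketch of the logic: under chance, a disoriented chick should pick one of the two geometrically consistent corners on 50% of trials, so you can compare the observed count to a binomial null. The counts below are invented, not the study's actual data.

```python
from math import comb

def binom_p_at_least(k, n, p=0.5):
    """P(X >= k) for X ~ Binomial(n, p): one-sided test against chance."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

n_trials = 16                  # reorientation trials per chick (illustrative)
consistent_choices = 12        # choices of the two geometrically correct corners (invented)
print(binom_p_at_least(consistent_choices, n_trials))  # small p-value = better than chance
```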
Now, you might be thinking, well, that cylindrical cage, it doesn't have something to break the symmetry, but there's still something geometric. There's a floor, there's a wall. I agree, that bugged me too. They did another experiment in which they raised the chicks in total darkness. First three days, no visual experience at all, and the chicks still do that. So no visual experience. That's an even stronger case. Was there a question percolating in here? I felt like-- no, OK.
All right. So yes, the reorientation system-- actually, that's not well expressed. The ability to use geometry to reorient is not based on any experience with geometry. It must be innate in the sense of not requiring experience. So go Kant.
All right. So where have we gotten to? Let's recap. What's innate? OK, in the face system-- I went through this before-- maybe not that much. We could quibble that some of the cases are ambiguous, but the main evidence suggests not much. Before you posit that something's innate, you have to have strong evidence to argue for innateness. The default case is not innate, right? It's kind of an extreme claim, and so the default is not innate, and so right now, we don't have a strong argument that any of the face system is innate other than this bias to look more at faces, which, as I said, might be a very rudimentary template.
OK. I talked about the role of connectivity in cortical development. Most of those long-range connections are present at birth. I showed that connectivity can causally affect development in the case of the rewired ferrets. I showed that category-selective regions in human adults have distinctive connectivity. And I showed that in the visual word form area, the distinctive connectivity is present before the function.
OK. So that tells us that there's one region in the brain that we know the selectivity of that region can't be innate. It doesn't tell us about all the others. Who knows? It's kind of an existence proof. They might all be learned by experience. We look at faces a lot. We look at scenes a lot. We look at bodies a lot. Maybe they all have the same experiential basis. Doesn't prove it. It just says maybe.
All right. But then I showed that for the space system, actually, we do have pretty strong evidence that a lot of it is innate, both in that the head direction cells are present before any visual experience or any navigation. And I showed that the chicks can reorient based on the geometry of space, even if they've never seen space or geometry before.
So bottom line, face system, who knows, but no strong evidence for innateness. Visual word form area, strong evidence that it's experientially based, and space system, strong evidence that a lot of it is innate.
OK. All right. I got us to here. All right. Now, all of this time, I've been talking about, how do we wire up this system and its cognitive correlates in development? What do you have to build in to get a system like this in development? What can you get through learning? What do you have to build in, and so forth.
But it's a related but different question to ask, is that the only possible way it could work, or are there situations where we might have a very different kind of organization of the brain? Are there other possible organizations that might develop under different circumstances that would still work? And the two relevant cases that people have looked at are cases of brain damage. So if you have brain damage in adulthood, and you lose a little piece, can that piece move over and reorganize? Is there another possible organization that would work?
Or what about if you have very, very different visual experience, like you're born blind. Then do you get the same organization, or does everything go haywire and you have a totally different kind of brain organization? All right, so I'll give you a little bit of data on each of those questions.
All right. So first of all, can the brain reorganize after brain damage? The main domain where people have studied this-- which we haven't talked about yet, but we will in a month-- is the case of language. It's just that there are lots of studies of this. People have been onto this question for a long time. In fact, Broca wrote about this question over 150 years ago.
So the basic findings are that if you have damage to your language parts of your brain in adulthood, that is not good. Often, you'll recover a little bit of function, but you really won't get it back. It's just a big massive drag. There are people we will talk about in a month when we get to the language section who have had massive left hemisphere strokes that basically take out their entire language system. And it doesn't come back years after that stroke.
We'll see, actually, that they're cognitively pretty normal in every other respect. It's quite amazing how much they can do without language, which is fascinating. But for present purposes, the main finding is brain damage in adulthood that takes out language functions, not good. Not much recovery, not much reorganization.
By the way, there's a whole-- it's very trendy in popular media to talk about, oh, the brain is plastic, you can rewire your brain, take this-- use this smartphone app and rewire your brain. Mostly, that stuff is just bullshit. You can learn a task, and you can get better at that task, no question. But you can't make yourself smarter. You can't rewire your whole brain. That's garbage.
All right. Back to aphasia. OK. The story is very different for brain damage in kids. If you have brain damage in the first few months of life to language parts of the brain, as an adult, your language function is pretty good. It's not quite perfect. Took people a while to discover that it isn't quite perfect, but it's surprisingly good. For everyday uses, you might not even notice. You have to test people on esoteric syntactic things to discover that, actually, it's not quite right. But it's very good.
And typically, what you see, if you scan these kids, is that a lot of language function has reorganized and shifted over to homologous regions in the right hemisphere. OK, so that's better news. After age five, if you have brain damage, not so good. So it's like there's some critical period for when the brain is plastic. You can move language over to the right hemisphere up until around age five, and after that, you can't really.
All right. So these considerations have been pulled together under something called the Kennard Principle. And the Kennard Principle basically says, if you're going to have brain damage, have it early. Better not to have brain damage at all, but if you have to have it, have it early. And that's based on findings like this-- the fact that kids who have left hemisphere damage have much better language function as adults than adults who have the same kind of left hemisphere damage.
OK, so that's a reasonable summary of the language literature. However, this finding doesn't always hold, and that has led others to put forth the Hebb Principle, which is sort of the opposite. The idea of the Hebb Principle is that, first of all, it depends. It depends on where the damage is. It depends on when you test after the brain damage. I'll get to the key insight that makes this seem more sensible in a moment, because at first the Kennard Principle feels very intuitive. Kids are more plastic in all kinds of ways, right? Watch me using a computer-- it drives my students insane, I'm so slow. Back when I used to actually scan subjects, one of my students was watching me scan, and he's just getting more and more impatient, and he finally says, it's like watching my mother. You just cannot become as fluent at things when you start doing them at 50. It's just what it is. We've all seen that manifest in various ways.
OK, so that's generally true, and that's consistent with the Kennard Principle-- you have more flexibility when you're younger than when you're older-- which is also why you guys should learn lots of math and computer science now, while your brains are still good at it. Don't wait until you're 40, when it's harder. You will need it. No matter what field you are in, you will need it, so do all of that now.
OK. But to get back to the topic at hand, what is the idea behind the Hebb principle? The idea is, think about building a house. You can't build the first floor if you haven't built the foundation. Similarly, you might imagine that there are lots of aspects of cognition that are necessary precursors for other aspects of cognition. And if you're wiring up a whole brain, you're not going to develop those second order ones if you don't get the first order ones. And so if you have damage early in life, you may have bigger long-term consequences.
Really concrete kind of silly example. Suppose you have damage to primary auditory cortex at birth, and you're deaf. Well, you're going to have a harder time learning language because you need to hear to get language. I mean, if you have smart parents, they'll teach you sign language, you'll be OK. But this is a necessary prior condition.
And so more generally, it turns out that in a lot of domains, some aspects of brain and cognition are necessary precursors for others, and in those cases, the Kennard Principle doesn't hold. OK? Blah, blah, blah.
OK, now let's get-- this is all sort of in-principle vague stuff. OK, what about visual cortex? What about all this stuff we've been talking about here? All of these specialized regions for different features and different categories, and you may notice I've now added visually-presented words on there. Remember, visually-presented, not auditorily. Auditory is a whole different thing. This is seeing words and letters.
OK, so all of this organization, can this stuff move around? If you lose this thing, can you regrow it over there? Well, not really. As I've been talking about, if you have brain damage in adulthood, you basically lose the corresponding mental function. That's why we have all these neuropsychological syndromes. If people could relearn and just move the function over, you wouldn't have a syndrome. You might have a transient problem as you relearned.
But in fact, if people get achromatopsia-- they can't perceive color-- they're not going to get better, or not much. Agnosia, if they can't see shape, they're not going to get better. Akinetopsia, they can't see motion after a stroke in adulthood-- they're not going to get better. Prosopagnosia, topographic disorientation, and alexia-- inability to read due to a stroke-- basically, people don't really recover from these things.
There's a beautiful recent article by a German neuroscientist who had a stroke and couldn't read at-- I don't know-- age 50, 60, something like that. And so he made himself an experimental subject and was just determined to relearn to read. He did every possible thing, and he's written about this very interestingly, and there's an article I can put on the website if anybody wants to read it.
He basically retaught himself to read, but he's doing it in completely different ways from what all of you are doing. He doesn't have that bit. He didn't develop a new one of those. He developed a very different compensatory strategy that's very slow and doesn't work anywhere near as well as reading does for any of us.
So basically, in adulthood, these things can't move around. So now, are we talking Kennard or are we talking Hebb? What happens if you get the damage in childhood? Well, I'm raising this question because I think it's big, and deep, and interesting, but there basically isn't much of an answer to it.
It's hard to answer. I'll give you just a shred of data, but basically, I think we don't know the answer, and I'm dying to know the answer. I'll give you just the one paper that I know of that's relevant to this. This is a study from quite a while ago. It's the case of a patient who's known in the literature as Adam. And Adam sustained bilateral damage to his ventral visual pathway, both sides, on day one of life due to a stroke. Actually, strokes around birth are surprisingly common-- this happens. So this guy basically lost cortex in a lot of the regions that we've been talking about on the bottom of the brain that do high-level vision.
OK, so he was tested for this study at age 16. Now, his visual acuity, his ability to see fine-grained stuff is not great, and his object recognition is not perfect, but it's not terrible either. He can recognize common objects from photographs and line drawings reasonably well. So he has some residual vision. But he can't recognize faces at all.
So he is a fan of this TV series called Baywatch, which I don't know about. I don't know if that's like-- anyway, this study was done a long time ago. Anyway, some beach TV series that has the same set of characters, and he was obsessed with this, and he watched it for an hour every day for a year and a half. And that's just relevant because we know that he has lots of experience looking at these individuals. But when tested in the lab on pictures from Baywatch, he couldn't recognize any of the major protagonists. That's just a measure of how severely prosopagnosic he was.
So that suggests that the relevant parts of the brain are already specified at birth, and if you lose those parts, you can't just put that function somewhere else. I'm not leaning too hard on this, because there's just very little data-- this is the best there is. But it suggests that at least the general region is already specified.
Can anybody think about why that might be? Why can't you just train up some other part of cortex? Say, his object recognition is pretty good. Why can't you train that part of the object recognition system and just say, OK, learn to do faces? Nobody knows the answer to this. Yes.
AUDIENCE: I don't know about the [INAUDIBLE] it's gone completely, just maybe because throughout time very far back in evolution, it's a face region.
NANCY KANWISHER: Yeah. Yes, but still-- yeah, I mean, it's clear that we have it, and we probably have it for some reason and all of that. But why couldn't you just grow a new one over in a different part of cortex? What's wrong with that other bit of cortex? What might it not have that you might need? [INAUDIBLE]?
AUDIENCE: The right connection?
NANCY KANWISHER: Yes! I just showed you guys that there are very distinctive connections. This is all speculation. Nobody knows why. I'm just saying that one guess is that the reason these things can't just take up residence someplace else is they need those particular connections to get the right input to process.
OK, anyway, this is going way beyond the data. But in principle, people could get more data of this kind and answer this question. If I can find the relevant subjects, I'm aiming to do this.
OK, so let's take one other case-- a very different kind of change-- and ask what happens. But first, the bottom line of all of this is, stuff doesn't move around that much. With early brain damage to language regions, language can shift to the homologous regions in the right hemisphere. But all the other data that I know of suggest you can't just take any function and move it over a few centimeters-- at least if you have the damage in adulthood, and maybe even if you have it pretty early.
OK, all right. So now we're going to say, OK, might this organization nonetheless be very different if you had very different experience? So let's take the case of congenital blindness.
OK, so how is the brain organized in congenital blindness? Well, let's take V1. Here's this big chunk of cortex back here, nice big chunk of cortex that, in all of you guys, does vision. What does it do in congenitally blind people? Does it just sit there? Do the cells die out? Do they just go dum-dee-dum-dee-dum and they don't do anything? It's a lot of cortex to waste on all of that.
Well, it turns out, astonishingly, that visual cortex in blind people does a whole bunch of other things, including language. So you present a sentence through Braille or auditorily to congenitally blind subjects in the scanner, and you see activation of V1.
Further, you might think, well, OK, whatever-- it just turns on, it has nothing to do with anything. But TMS studies-- V1 is right near the surface of the brain, so you can zap that region and ask if you're disrupting function-- show that you can interfere with a language task by zapping V1 in congenitally blind people. So it's not just activated. It's doing causal work in blind people. This is mind-blowing. It's like a totally different patch of cortex.
So yeah, it's hard to think of more different functions than low-level vision and high-level abstract language processing. So that suggests radical possible reorganization, in this case, with different experience.
OK, what about those regions on the bottom surface of the brain? The face, place, word, and body regions that we've been talking about for so long. What do they do in blind people? Somebody already asked me about this before-- maybe [INAUDIBLE], somebody over there. That's my spatial code.
And there's a lot of claims that they have similar selectivity, which I'm not totally sure of, but let me show you one piece of data. I promised you that there were going to be further contradictions in the whole saga of the role of experience in wiring up these regions, so here's one more contradictory piece of data.
OK, this is a paper that was published just a few months ago, and the title of the paper says that the development of visual category selectivity-- that means face, place, and body regions, all that stuff-- in the ventral visual cortex does not require visual experience. OK. What? What, what, what?
OK, here's what they did. They scanned-- pretty crazy experiment-- they scanned congenitally blind subjects while they heard sounds that were associated with faces, bodies, objects, and scenes. So for example, they might hear laughing, chewing, blowing a kiss, whistling sounds. Those are face-related sounds. Or they might hear scratching, hand-clapping, finger-snapping, bare footsteps, knuckle cracking. Those are body-related sounds, et cetera.
So they're lying in the scanner hearing these sounds. Probably cracking up. Now the question is, do we see face, place, body, and object regions activated from sounds in congenitally blind people listening to those categories of sounds?
And the crazy answer is, kind of sort of a little bit. It's not super strong. The data are not mind-blowing, but let me just show you what we have. OK, this is the bottom of the brain, back of the brain. Everybody oriented here? OK. Occipital lobe. This is where all the good stuff is that we've been talking about. OK.
So this is now the sighted control subjects looking at visual stimuli. This is a significance map, thresholded at some P level. And what you see is face selectivity in red, object selectivity in green, and scene selectivity in blue-- purple, whatever that is-- blue. So that should look sort of familiar. Faces lateral, scenes medial. Objects, people debate about-- I haven't talked about them much. Anyway, faces and scenes are the things to pay attention to.
OK. And over here-- this map is the same. It just says, never mind if that voxel reaches statistical significance. Just plot what category that voxel responds most to. So you just see a big swath.
All right. Now, what do we see for sighted controls listening to the auditory stimuli? Not much reaches significance. If you drop the threshold way down and look at this, maybe a little bit. These are somewhat correlated, but it's lousy. So sighted subjects listening to those sounds, not much. What do you think happens with blind subjects listening to those sounds? Well, you get face selectivity here that's statistically significant. And if you drop the threshold and look at the overall map, you see a resemblance of this map to the sighted map, the visual map in the sighted subjects, and this correlation is highly significant.
So this is totally weird. It says, yes, there's a similar spatial layout on the brain of these same selectivities in congenitally blind subjects who never saw those stimuli. And that's the basis of their argument, that the development of visual category selectivity doesn't require visual experience.
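To make the logic of that analysis concrete, here is a minimal sketch in Python of the two comparisons just described: a winner-take-all category preference map for each voxel, and then a spatial correlation of category selectivity between the sighted visual maps and the blind auditory maps. The array names, the synthetic data, and the specific selectivity and correlation measures here are hypothetical illustrations under simple assumptions, not the actual analysis pipeline from the paper.

# Sketch of a winner-take-all preference map and a cross-group spatial correlation,
# assuming you already have one response estimate (e.g., a GLM beta) per voxel per
# category for each group. All data below are synthetic.
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(0)
n_voxels = 5000
categories = ["face", "body", "object", "scene"]

# Hypothetical response matrices: rows = voxels, columns = categories.
betas_sighted_visual = rng.normal(size=(n_voxels, len(categories)))
betas_blind_auditory = rng.normal(size=(n_voxels, len(categories)))

def preference_map(betas):
    # Winner-take-all map: for each voxel, the index of the category that evokes
    # the largest response, regardless of whether it reaches significance.
    return betas.argmax(axis=1)

pref_sighted = preference_map(betas_sighted_visual)
pref_blind = preference_map(betas_blind_auditory)
print("fraction of voxels with the same preferred category:",
      (pref_sighted == pref_blind).mean())

def selectivity(betas, k):
    # Simple selectivity index: response to category k minus the mean response
    # to the other categories, computed voxel by voxel.
    others = np.delete(betas, k, axis=1).mean(axis=1)
    return betas[:, k] - others

# Correlate the voxel-wise selectivity for each category across the two groups.
for k, name in enumerate(categories):
    r, p = pearsonr(selectivity(betas_sighted_visual, k),
                    selectivity(betas_blind_auditory, k))
    print(f"{name:>6}: spatial correlation r = {r:+.2f}, p = {p:.3f}")

With these random numbers the correlations hover around zero, which is what the null looks like; the paper's claim amounts to saying that with real data the blind auditory selectivity maps line up, voxel for voxel, with the sighted visual ones well above that baseline.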
But now you may be thinking, what about that paper on face-deprived monkeys? The title of which is, "Seeing faces is necessary for face-domain formation," namely for face patches. So these two findings, these two claims in the titles are completely contradictory.
So we're out of time. Nobody knows the answer to this. It's an ongoing puzzle. There are all kinds of possibilities. They're different species, they're different kinds of tests. There are many things you could say, but we're really right on the horns of a big conundrum in the field. And all I have to say is, welcome to the cutting edge. It's a mess there. OK, thank you.