Lecture 10: Development, Nature & Nurture I


Summary: This lecture examines how the cortex becomes organized over infancy and childhood, and the roles of genes versus experience.

Speaker: Nancy Kanwisher

[DIGITAL EFFECTS]

NANCY KANWISHER: So let's start with one of the deepest questions humans have ever asked themselves. We're not messing around in this class; we're going for it. And one of the deepest questions is, where does knowledge come from?

And as you'll know, if you've taken even a teeny bit of philosophy or read even a teeny bit, you know that some of the classic views in Western philosophy-- especially the empiricists, Locke and Hume-- argue that all knowledge comes from experience, right?

On the other hand, there are a number of other schools of thought in Western philosophy, of which a dominant figure is Immanuel Kant, who argued that experience alone is not enough. You can't just have experience and figure out all the stuff we have figured out.

And so he argued that there has to be what he called "a priori conditions" of cognition, which can't be derived from experience themselves, but have to be given prior to it, OK? So you have to have to build some structure into a mind or brain to get it off the ground. You can't just start with absolutely nothing and get anywhere.

OK, and he also argued that one of the key elements of this a priori structure that you have to build in was space and time-- organizing principles of cognition and thinking. And so in his version of it, space is nothing but the form of all appearances of outer sense, and it can be given prior to all actual perceptions and so exist in the mind a priori, and can contain, prior to all experience, principles which determine the relations of these objects.

OK, well, is that just empty philosophical hot air? It's kind of hard to understand exactly what he means. You actually have to go spend a good deal of time reading him to make any sense of it-- or cheat and get your friends to tell you, as I do. But no, I'll argue it's not just empty philosophical hot air-- that these are, in some important sense, empirical questions. And they are empirical questions that our field addresses very directly.

And so on Wednesday, we'll talk about whether your representations of space in your head are innate or not. It's pretty much directly what Kant is talking about-- or the modern version of what he was talking about. And today, we'll talk about which aspects of the brain are innate and which are learned, OK? That's the agenda.

OK, so this little kind of Easter egg brain here very schematically shows you some of the regions that we've been talking about in this class so far, with regions that are, to varying degrees, specialized for processing things like shape, and color, and motion, and faces, and places, and bodies-- visually processing all of these things in approximately those locations. And as I've mentioned, these regions are present in approximately the same location-- with some individual variability-- in pretty much every normal person.

One of my lab members says, you keep saying that, and it's just not true. There's some percent of subjects who just don't show these things. He's kind of right, OK. So maybe, I don't know, 5%, 10% of subjects, you wouldn't see some of these things. And we've never actually done the serious work of bringing those subjects back, scanning the hell out of them, and finding out whether they were just asleep in the scanner or it was a bad scanner day, or whatever it was. I bet they all have them, and it's just sometimes you don't see it, but I'm trying to be a little more honest.

OK, but you just look at this. Given this very schematic version of it, you say, how would you build this system? How would you start with an embryo and build into a genome, or build into whatever experience is going to happen to this developing organism? How would it end up with this very particular structure, with those things in approximately the same place-- or at least the same relative positions-- in all subjects?

The face bits are always lateral to the color bits. The place bits are medial to the color bits. The shape bits are out on the lateral surface. It's like always like that. How do you build a system like that?

I find it hard not to immediately think, well, some aspect of this must be innate, or how would it be so damn similar in each individual, right? But it's not the only hypothesis. Some big part of it-- even if some aspect of this is innate, some big part of it may also be learned or derived from experience, OK?

So what do you guys think? Do you think the fact that these structures are in systematically the same place across subjects means you have to build in all that stuff, somehow figure out how to get a bunch of As and Ts and Gs and Cs in your DNA to give you a blueprint for how to build that structure? What do you think? Yeah?

AUDIENCE: I mean, it's a combination, but it's hard to, then, think about how that's involved in [INAUDIBLE] generation and then kind of become more innate?

NANCY KANWISHER: Yeah, so to some extent, experience-- what I mean here is learn from experience within each individual. You could argue that "innate" really means "learned through the experience of our ancestors, and hence wired into the DNA," yeah. Anyway, I find this not an obvious question, and so we'll talk about what the data say here.

So first of all, we're going to do some very basic facts about brain development, just to get the picture of what we're talking about physically with the development of brains. So we can ask, what is present at birth? And so it turns out that most of the neurons in the adult brain are generated before birth, OK? So most of the actual neurons are generated early. You're not making a whole lot more after birth-- a few, but not a lot.

Further, the current view is that most of the long-range connections-- that means like a connection between this part and that part of the brain-- are also present at birth, OK? Nonetheless, even though a lot of stuff is present at birth, a lot of stuff changes in the first couple of years of life.

Most obviously, the brain doubles in volume in the first year, from a two-week-old, to a one-year-old, to a two-year-old. The cortical thickness-- you can see here the dark stuff, which is the gray matter out there-- increases sharply between years one and two.

But also, the complexity of each individual neuron increases dramatically in the first few years of life. So here's a schematic picture of a piece of gray matter here. We have some number of neurons here with a few little processes and a few connections. And over the first couple of years of life, those connections get much more dense, and the neurons get much more complex.

OK, and the final thing that really matters early on in development is that myelination happens rapidly in the first few years. And remember, myelin-- this is a little reminder-- neuron with that yellow stuff, which is a bunch of cells that wrap around the axons, the long processes of a neuron. And that myelin sheath builds up a lot over the first couple of years. And that's important, because the myelin sheath enables those neurons to send their signals faster down their axons, OK?

OK, and this is just a picture of different-- of a vertical slice like this through the anatomy of infants of different ages, from 107 days up to about a year. And the colored stuff in the middle is degree of myelination, which you can see with various kinds of anatomical scans. You can see it starts at 107 days with a tiny little bit in the middle, and it gets more and more myelinated and moves from center to periphery over the first year of life. So all those fiber pathways are getting accelerated as they get wrapped with myelin and hence sped up.

OK, all right, so bottom line is most neurons and long-range connections are in place at birth, but development continues rapidly in the first two years, especially increasing complexity of neurons and synapses and myelination of long-range connections and white matter, OK? So it's just basic anatomy, nothing functional yet.

OK, now we're going to consider in some detail the case of face perception, not really because that's what I work on-- or used to work on, mostly-- but just because there's a very rich set of data where people have grappled with this question in the case of face perception. Next time, we'll talk about the navigation network and reorientation-- what parts of that system might be innate and learned.

So I'll just say right out of the beginning that this is an extremely active area, where every time I turn around, another paper comes out that contradicts a previously-published finding. And so that makes it fun, but it means there isn't going to be some really tight, perfect story here. And I'd rather take you guys straight to the cutting edge, even though it's kind of a mess, than give you a nicely packaged but surely wrong picture, OK?

Because again, I think what matters most in this area is how do you go about answering these questions, rather than what is the current state of the thoughts about the answers.

OK, so how are we going to think about, how does face perception develop? Well just to get started, I'm going to show you a very brief movie of a 72-hour-old monkey, and see what you think. He's sleepy.

He's pretty interested in that face. And watch now. Hmm. [LAUGHS] Pretty cute, huh?

So what do you think? What does this tell us about face perception? Yeah?

AUDIENCE: Did they try just moving anything in front of him?

NANCY KANWISHER: Good question. Good for you. Quily, is that right?

AUDIENCE: "Quile-y"

NANCY KANWISHER: "Quile-y", all right. Yes, so Quiley asked, did they try moving just anything in front of him? Absolutely the right question. So that monkey seems pretty interested in that face, but a face is a moving thing. Motion is very salient to young primates-- humans, and monkeys, and many others, absolutely. What else did you see in here? Yeah.

AUDIENCE: It started imitating [INAUDIBLE].

NANCY KANWISHER: Yeah, kind of. I mean, the person-- the adult human there-- was moving their mouth open like this, and the monkey was doing something with their mouth. So what would that require? Sorry?

AUDIENCE: I like, I have another. Also, was the monkey allowed to touch its face before this?

NANCY KANWISHER: Yeah, good question. Good question. 72 hours is damned early, but it's not zero experience, right? So who knows what they've managed to pick up that early. There are actually studies in humans, which I'm hoping Heather knows better than me. Those Andy Meltzoff things. How young are those humans? Those are like first hour.

AUDIENCE: Yeah, [INAUDIBLE].

NANCY KANWISHER: I think it's a--

AUDIENCE: [INAUDIBLE]

NANCY KANWISHER: So there are studies in humans where you can show versions of that, with newborn infants copying-- the experimenter comes up and sticks their tongue out at the infant, and the infant does that back, kinda sorta. Certainly within the first two days, maybe even earlier, OK? OK, so it's very suggestive. It's tantalizing, but we need controlled conditions. It doesn't tell us everything we need to know.

OK, so if we think about it, there are ends of the hypothesis space about how all of this could go. As Alana mentioned, everything is both genes and experience. That's true, but there are very, very importantly different ways in which genes and experience can act together-- some in which a big part of the heft of what the adult form has might be built in, and other stories where most of the structure comes from experience. So just because everything is both doesn't mean we shouldn't flesh out exactly what comes from what.

So on one end of the spectrum, you might imagine that there's some very, very rudimentary precursor that has to be built in, plus a learning mechanism, OK? Or a bunch of rudimentary precursors, which are just there to get the system to learn in the right way, OK? And so we'll talk shortly about the idea that there might be some kind of innate template for faces that gets monkeys and humans to look at faces. And then, the idea is once you get them to look at a face, then experience can take over from there and do the rest.

But you've got to get them to collect the right input. And there's lots of interesting computational work going on now where people are using various computational models to say, what do we have to build into, say, a convolutional neural network or some other kind of computational model to get it to do some complicated thing? I just came from a job talk the last hour-- really amazing talk-- where the guy is showing that if you build in, basically, curiosity early on in a network, you get much more general learners than if you build in a bunch of goals for a developing network to seek.

Anyway, it's a very active area. The paper that I just decided to assign to you guys-- just kind of skim it and get the gist. The basic idea-- this is from Shimon Ullman, who is a very deep thinker in this field. And he argues that hands are very important in infants. Faces are important, but so are hands, because hands do stuff. And we're social primates, and we want to learn from other social primates like our parents. And watching their hands is extremely informative.

Whatever they're doing with their hands is probably stuff we need to learn about. And further, we need to know where they're looking, right? So gaze perception. I think I did this demo before. If I'm talking to you guys, and I start doing that, it's really hard, even though you know I'm just faking you out, not to have your attention pulled over there, and infants need to learn that as well.

So Shimon Ullman's basic idea is that you can start with an extremely rudimentary system, and all you have to build in is this idea that he calls "mover," right? So the idea is that if you look in a whole set of, say, YouTube videos, and you just look for patches of the image that are moving, that's no good. It won't be a hand. It might be a whole animal, or a face, or something else.

But if you look in YouTube videos, a proxy for natural experience-- it's OK; it's not perfect, but it's something-- you look for a patch of the image that moves over and then causes another previously-stationary image patch to move. That's what happens when we pick stuff up, OK? And so his idea is you can build in this extremely simple thing-- Mover, a very simple visual algorithm that finds an image patch that moves over and causes another image patch to move, or the two image patches then move together.

And Mover will enable you to identify hands in images pretty well. He looks in YouTube videos and shows that it's really good at picking out hands. And then, further, once you've picked out hands, that's a really important teaching signal in teaching you to read gaze. Because often, people look at their hands before they do things with them, yeah?
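The Mover heuristic described above can be sketched in a few lines. This is a toy illustration, not Ullman's actual algorithm: it assumes we already have tracked positions of image patches over frames (the `tracks` dictionary is hypothetical input), and it flags a patch as a candidate hand if it moves while another patch sits still, reaches that patch, and the contacted patch starts moving afterwards.

```python
import math

def find_movers(tracks, contact_dist=1.0, eps=1e-6):
    """Flag patches that move, reach a so-far-stationary patch, after
    which that patch starts moving -- the 'Mover' event.
    tracks maps a patch id to its (x, y) position on each frame."""
    def moved(p, t):
        (x0, y0), (x1, y1) = tracks[p][t - 1], tracks[p][t]
        return math.hypot(x1 - x0, y1 - y0) > eps

    n_frames = len(next(iter(tracks.values())))
    movers = set()
    for a in tracks:
        for b in tracks:
            if a == b:
                continue
            for t in range(1, n_frames - 1):
                if not moved(a, t):           # a must be moving now
                    continue
                if any(moved(b, s) for s in range(1, t + 1)):
                    continue                  # b must have sat still so far
                ax, ay = tracks[a][t]
                bx, by = tracks[b][t]
                if math.hypot(ax - bx, ay - by) > contact_dist:
                    continue                  # a must reach b ("contact")
                if any(moved(b, s) for s in range(t + 1, n_frames)):
                    movers.add(a)             # ...and b moves afterwards
    return movers

# Toy "video": a hand sweeps right, touches a cup, and the cup then
# moves; a wall patch never moves. Only the hand should be flagged.
tracks = {
    "hand": [(0, 0), (1, 0), (2, 0), (3, 0), (4, 0)],
    "cup":  [(3, 0), (3, 0), (3, 0), (4, 0), (5, 0)],
    "wall": [(9, 9), (9, 9), (9, 9), (9, 9), (9, 9)],
}
```

Note that motion alone is not the cue-- the wall never moves and the cup moves only after being contacted, so neither is flagged; only the mover-that-causes-motion is.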

So the idea is there's a very active ferment now in computational modeling saying, how can we start with just the most rudimentary, minimalist stuff that has to be built in, and then build on experience to get the rest from there? Is that idea clear? It's worth reading that paper, though. It's beautifully written. He's brilliant.

OK, so that's one end of the spectrum. Nobody thinks that you learn absolutely everything from experience. You've got to build in something. Plus, we know all those neurons are there at birth. And so the idea is some version-- the minimalist nativist view says you build in a few very rudimentary things, and they're enough to bootstrap learning.

OK, on the other end of the spectrum, you might think-- and many have proposed-- that we're born with a nearly adult-like system that only needs fine-tuning from experience, right? Nobody thinks that zero experience is necessary. That would be kind of crazy, or implausible. But on the other extreme, this view is that most of the stuff is built-in.

OK, everybody get the theoretical space here that we're considering? OK, so what kind of data can constrain these questions? Well, one obvious question is, what is present at birth? What is the initial state-- or as close as we can get to it? Then we can ask, how does the system change over time from birth onward? And then we can ask, what are the causal roles of experience and biological maturation in that change after birth?

So that's the whole set of questions we'd need to answer to understand how development works. And a very central-- if not the central-- challenge of development is that experience and maturation are deeply confounded as you look from birth onward, right? So five-year-olds are both more mature-- they've had more time for their biological systems to wire themselves up, including their bodies, and their brains, and the whole bit-- and maybe some of that is just on a maturation kind of autopilot. But they've also had a lot more experience.

So one of the central challenges of development is trying to figure out how those later stages-- like two months old, one year old, 10 years old-- how those changes that happen between birth and those stages can-- how can we tease apart which of that came from just maturation and which came from experience?

All right, OK. Importantly, things that happen well after birth need not be learned, right? So think about puberty. Puberty is going to happen around 10, 11, 12. And OK, you've got to eat and have some basic inputs to your system, but it's pretty much going to happen. It's not a product of what you were taught or the particular information that landed on your sensory receptors. I'm sure there's some obscure influences that I don't know about, but mostly, it's on a developmental autopilot. It's just going to happen.

OK, so keep in mind-- this is really important-- that things that happen well after birth aren't necessarily learned. It might be just maturation that's continuing, right? OK, just as being 5 feet tall versus a foot and 1/2 tall isn't really learned. It's just a maturation program that unfolds.

OK, so we can ask these three questions both behaviorally and neurally. And ultimately, we want them to tell the same story. When I said there's some chaos in this field right now, I mean that basically, they're not converging very well yet, but that's fun-- sort of. [LAUGHS] Sometimes it's aggravating, but mostly, it's fun.

OK, so let's start with some behavioral data. So let's consider the initial state of face perception in newborns. OK, so we can ask, what kinds of face-perceptual abilities are present in newborns? And we can ask whether they can detect a face-- that is, discriminate a face from a non-face, whether it's a body, or an object, or something else.

We can ask about preferred attention to faces. Do newborns want to look at faces more than non-faces? We can ask about the ability to recognize faces, to discriminate one face from another, OK? And we can ask about the ability to recognize faces across image changes.

So we spent a lot of time in the first few lectures talking about the central problem of invariance in vision-- how do you know that this image that you're looking at here is the same person as that image, even though those are very different images? And actually, this image on your retina right now is more different from this image on your retina than it would be if we had one of you come up here and look forward.

So the image changes that result from a change in orientation are greater than the image changes that result from a change in identity. So it's a big computational challenge. When is that solved?

And then, there are these so-called signatures of face perception that we've talked about a little bit-- for example, the inversion effect. Recall the inversion effect is larger in magnitude for faces than non-faces. So we can ask when those things develop.

OK, so let's start with face detection and preferred attention to faces. Well, so classic studies from the early '90s, and actually, some of them going back to the '70s, did the following very low-tech thing-- a low-tech drawing of a low-tech experiment. You take a newborn infant. In this case, they're less than an hour old, right?

You've got to set up in maternity wards. You want the data, that's what you do. Of course, you have to ask the parents and all of that.

But then, you take this infant and you sit them on a person's lap with a video camera overhead, and you move different objects over the infant's head, OK? And the different objects that were moved, in this case, were patterns that were drawn on this paddle that's moved over the infant's head. And the pattern could be a schematic face like that, a scrambled schematic face like that, and a blank with nothing in it.

And what you measure is, how far does the infant turn their head or their eyes following that paddle as you move it over them. OK, nice low-tech measure. And what you find is they turn their heads and their eyes farther when it's an actual schematic face than when it's a scrambled schematic face or a blank, within an hour of birth.

Then you can still say, well, their parents probably smiled at them quickly before they were snatched away to do the experiment, so they had some face experience, but boy, not a whole lot. And this is a very abstract face here. So this has long been taken as one of the key bits of evidence that something seems likely to be innate about faces, OK?

But now, what needs to be innate for that? And it's a bizarre thing, where this happens in the first two months of life and goes away. And there's a lot of consideration of what that means. Maybe the first two months is enough to bootstrap learning in the way I was just talking about-- bootstrapping, getting attention to the right places. But there's also a huge literature on this phenomenon where there's a big debate about exactly how simple those cues need to be.

So people have done many variations of this, and one dominant story is that all you need is a pattern that has more stuff on the top than on the bottom, OK? And that's enough that infants will follow this more than that. And the idea is that in the visual environment of an infant, that's sufficient to pick out faces.
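The top-heavy cue is simple enough to write down directly. Here's a minimal sketch under my own toy encoding (not from the literature): treat a pattern as a grid of 0s and 1s and ask whether more of the "stuff" falls in the top half than the bottom half.

```python
def is_top_heavy(pattern):
    """True if the pattern has more filled cells in its top half than
    its bottom half -- the proposed newborn-preference cue."""
    mid = len(pattern) // 2
    top = sum(sum(row) for row in pattern[:mid])
    bottom = sum(sum(row) for row in pattern[-mid:]) if mid else 0
    return top > bottom

# A schematic face (two eye blobs up top, one mouth blob below)
# and its upside-down version.
face = [
    [1, 0, 1],
    [0, 0, 0],
    [0, 1, 0],
]
inverted = face[::-1]
```

On this encoding, the upright schematic face passes the test and its inversion fails, which is exactly the asymmetry that the preferential-following results would require.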

So there's been pushback against this view as well. It's probably a little more complicated than that. We won't go down the rabbit hole of all those details, but whatever it is, it's pretty simple. So this is another example of what I was mentioning before with the Ullman case. This is a case where it may be possible to build in something pretty basic-- a pretty basic template-- and then let learning take it from there. Make sense? If the infants are looking at faces, then they can use some kind of synaptic plasticity, whatever, and learn from their experience to discriminate one face from another.

OK, so these things are present within a day or two. What about discrimination of individual identity? First problem, how are we going to be able to tell what a newborn can see?

And so I didn't want you guys to be too thrown by this method in the last assignment, so I told you where there's a version of the explanation I'm just going to give. So if you already watched that, my apologies. You can read your email for a minute.

So the classic experiment-- a classic experiment-- that enabled us to really ask what a newborn, non-verbal infant sees in the world was done by Kellman and Spelke. Liz Spelke, up at Harvard, was at the forefront of getting this method to really tell us a great deal about what infants see and understand about the world. And this method that I'm about to show you has been the basis of what's sometimes called "The Infancy Revolution," which is basically the insight that, actually, infants know a lot.

Their perceptual systems are really sophisticated. They know about physics. They know all kinds of social stuff. Within a few months of life, they know a lot. And that's been a radical change in our understanding of development based on just behavioral work.

So here's the method. OK, so what Spelke did-- I always forget to bring the demo. Hang on one moment. We don't need much. OK, so she showed infants stuff like this, OK? The two hands are not there. You just arrange to see this, OK?

So even if you hadn't seen me, imagine if you hadn't seen me pick up the phone and the pen, and you didn't already know what they were, and you're seeing this, OK? That's what they see, OK? So now, the question is, when infants see that, do they think that that's this-- thing behind a rectangle-- or do they think it's two separate bits moving behind the rectangle? It could be two separate bits moving together, right? Everybody get the question?

OK, so how would we know what the infants thought was back there? OK, well, we use what's known as habituation of looking time. Again, you sit the infant on a parent's lap, and you show them stuff, and you just measure how long they look. It's magnificently low-tech but really profound.

OK, so what we're going to show here is how long the infant looks on each trial as a function of how many times you do it. So you show the infant this the first time, and they look for 40 seconds. That's a long time. You show them again, they look for 35 seconds, and so forth. And by the fifth or sixth time, the infant is bored. Like been there, done that, bored, right?

OK, now they're bored. Now we have a moment to say, OK, what did you think it was? And so now, what you can ask is, what do they think-- you then show them either this or this, and you ask them which of those they're bored to, right? So the idea is if, when looking at this, they thought there was a continuous line behind the occluder, then they should be more bored by this. But if they thought that was two separate pieces, then they should be more bored by that. Does that make sense?

Because it's the same thing they're already bored with. I mean, it's not exactly the same. The occluder isn't there, right? But it's more similar.
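The looking-time logic can be made concrete with a toy sketch. The numbers and the 50%-of-initial habituation criterion here are illustrative assumptions, not the actual criterion from Kellman and Spelke: looking time declines across repeated presentations, and at test, the display the infant looks at longer is treated as the novel one.

```python
def habituated(looks, window=3, frac=0.5):
    """Illustrative criterion: looking over the last `window` trials
    has dropped below `frac` of looking over the first `window`."""
    return sum(looks[-window:]) < frac * sum(looks[:window])

def inferred_percept(look_complete, look_broken):
    """Whichever test display draws LONGER looking is the novel one;
    the other matches what the infant apparently saw behind the
    occluder during habituation."""
    return "complete rod" if look_broken > look_complete else "two pieces"

# Made-up looking times in seconds across habituation trials.
habituation_looks = [40, 35, 28, 20, 12, 10]
```

So if the infant looks only briefly at the complete rod at test but stares at the broken pieces, the inference is that they perceived a single complete rod behind the occluder all along.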

OK, so here's the data. Here's what they find. So what does that mean? What do the infants see when you show them this? It's right there in the data.

Look at the first test trial here. This is the first test trial, when you show the complete line or the broken line. What do they see here? Yeah, they saw the complete one. That's why, when you present the complete one again, they're still bored-- already saw that.

Make sense? So isn't that awesome? It's so low-tech and so simple, but this is how you can ask an infant, what do you see? Yeah?

AUDIENCE: Why does it switch positions in the second trial?

NANCY KANWISHER: You know, frankly, I never understand why infant and development people do a second and third trial. Seems to me by this point, the jig is up. I think it's just because it's hard to get enough infants, and you need more data, and so they do a second and third trial. But to me, that's the diagnostic one. And that's probably not a significant switch, but whatever's going on out there is obviously much less important than this.

Heather, do you have a better answer than that? Why do they do those other trials? They always do, and it just seems like, what? [LAUGHS]

AUDIENCE: I don't know.

NANCY KANWISHER: Yeah, I don't either.

AUDIENCE: [INAUDIBLE]?

NANCY KANWISHER: Oh, you do it every which way, but you do it pretty fast. They get bored, and you don't want to wait half an hour and come back, right? I mean, you could do that. Then that would be a memory question, right? Yeah, Jimmy.

AUDIENCE: Just curious, is this conserved between [INAUDIBLE] do they all see complete lines, where [INAUDIBLE]?

NANCY KANWISHER: It's pretty robust. Well, OK, so first of all, these methods are awesome, that you can learn these deep things about perception in infants. But these data are noisy as hell. There are no error bars on this plot, but I bet if there were, you'd have to run a lot of infants to get to the point where you reach significance.

Because a lot of times, the infants will just throw up, or they'll just do what-- they do all kinds of random things. So the data are extremely noisy, and it's very hard to get enough data with an infant to say anything about the difference between one infant and another. By the way, there's a very exciting development going on in this department right now, where Kim Scott, who's a former grad student of this department, has figured out how to do looking time experiments like this online, OK?

And that's hugely important, because the number one bottleneck in this kind of developmental research has been finding enough infants, or getting enough data per infant. And so I think that she's going to just crack it wide open. Talia?

AUDIENCE: I guess I'm a little bit confused how we know what the infant really saw based on how long it looked at something. Could it be that maybe they look at like-- maybe they look at the broken sticks longer, because it's like what they thought was behind it, so they're now excited that they get to see what's--

NANCY KANWISHER: Maybe, but then, why would you get this? So we know from this that the more familiar it looks, the less time they look. So you would have to come up with-- yeah, there's wiggle room in these data, but you'd have to come-- your account would have to say, why would they look less, and less, and less long when we repeat the exact same thing, right?

And you could tell a story like, OK, it's a little bit different, because the occluder isn't there. But it's a little bit the same, and that's kind of edgy and fun. Or you could tell another story, but I think the bulk of the developmental literature shows that when you do this kind of stuff, it's a change that makes infants look more. I'm going to go on unless there are questions of clarification, just because there's so much other cool stuff.

OK, so how can we use this to study face recognition? That was just a sidebar on the method. OK, so there's a lab in Italy where they have an infant psychology lab next to a maternity ward, and they've been doing all these awesome studies. OK, and they test 1-3-day-old infants.

And so one of the things they did is show infants, just like the paradigm I just showed you. They show the infant the same face again, and again, and again. That's the habituation phase. And then, this is a slightly different one. You give them a choice of whether they-- actually, you don't give them a choice. I take it back. Yeah, you show this condition or that condition, and you see how long they look at each across different infants.

And so this is the same person from a different viewpoint. Actually, pretty subtle, as we discussed with the Jenkins study way back. And that's a different person from that viewpoint. And what they found is that-- it's hard to see, but a very low p-value means that there's a significant difference in how much the infants looked at those two.

So that's pretty amazing. 1-3-day-old infants can apparently recognize the identity of a face, a novel individual they don't already know, with similar-looking faces, without hair, and across view changes. Wow, right? So that's pretty impressive.

OK, and so then, they've done all kinds of other variants. If you have them rotate all the way from front to profile, there's no longer a significant difference. Infants can't do that. And then they do all kinds of other variants. If you show them the same individual and then habituate to that, they can tell the difference between viewpoints.

That's the same, and that's different, even though it's the same identity. So you can use this to test what they think is same or different, which is a deep question to ask. If you're interested in representations and cognition, the question of what an infant, or an animal, or a bunch of neurons thinks is the same or different is the essence of characterizing what it represents. Yeah, Quiley?

AUDIENCE: [INAUDIBLE] the rotated face [INAUDIBLE]?

NANCY KANWISHER: Down here? Yeah. Yeah, they do. So here, basically, it's either identical, or it's different in some respect. So given a choice, when it's rotated anyway, the familiar one is more similar. But down here, this one is more similar in viewpoint. Yeah?

AUDIENCE: And these are not like the [INAUDIBLE] in such [INAUDIBLE] the student, the [INAUDIBLE]

NANCY KANWISHER: Sorry, say it again? They're not like--

AUDIENCE: The children have seen faces before this.

NANCY KANWISHER: Well, as little as possible. As I say, I mean, they've seen some, but not very many, and they haven't seen these faces. So when you're trying to get at those innateness questions, you go as close to birth as you can, but you can't usually go into the very moment of birth itself, right? And so there's usually some experience, and it's a challenge, but this is pretty early. Yeah?

AUDIENCE: So couldn't that just mean that the face perception network is just like-- it develops really quickly, right after [INAUDIBLE].

NANCY KANWISHER: It could, it could. Based on these data alone, it could. That's considered kind of unlikely, but I agree that that's consistent with these data. In the first two days of life, the whole thing wires itself up. That'd be pretty unusual. It's not really consistent with those samples of neurons that people have looked at elsewhere in the brain, but maybe there's a special little circuit that just wires itself up really fast. So not likely, but possible, OK?

All right, now, you might say, well, maybe there's some kind of simple visual features that fall short of an actual face representation here. This doesn't show us that this is something about faces per se, even though it can generalize across viewpoints. So it's not just pixel intensity, right?

So what is the classic way we asked this question in face perception, where we ask, is this really something about faces, or is it something about the low-level perceptual properties of the face?

AUDIENCE: Turn it upside down?

NANCY KANWISHER: Yeah, turn it upside down. God's gift to the face researcher, right? So-- oh, I guess that was not on this slide. OK, right? OK, so now, in the next experiment, they present whole faces, or just the internal features without hair, or just the external features without hair.

So the infants can do that at the top. They know those two are different. They can do this here, and they can do that there. OK, not too shocking yet. Just tells you any of those cues can support performance.

But now, we can ask, is that just pattern-matching? No, it's not. Because when you turn them upside down, you find that only-- let's see, it's only performance in this case that suffers when you turn them upside-down, not this case or that case.

OK, so that shows that there are a variety of cues here that infants could be using, but when you show them just the internal features-- the actual face proper-- that part, the ability to do this discrimination, goes away when you turn it upside down. So that part, at least, seems to be at least somewhat face-specific, or has the signature of face-specific processing. Make sense?

OK, I mean, as a pattern, it'd be just as easy to recognize this upside-down and distinguish it from that upside-down, if it was just the pixels you were registering. But if you were doing face processing that's something like adult face processing, you'd expect that inversion effect. OK, all right, so where are we?

And I should just say, even this is actively debated. In fact, the author of this study considers this not to be evidence that that processing is face-specific. I think she's got some of the strongest evidence ever, but she's got some counterargument about how in the inverted faces, they don't look as long in the habituation phase. And so it's like I'm telling you these cool methods, but boy, every one of them can be fought over.

OK, so where are we? We've just shown that discrimination of individual identity, recognition across viewpoints, and inversion effects are all present within the first few days of life. OK, so newborns have very impressive face perception abilities, and that's particularly surprising given that their acuity is terrible, right? Vision is really blurry for young infants, so it's amazing that they can do these things.

But now, there's room for quibbling about whether this is really a face-specific system. So the inversion effect is suggestive, but they haven't totally nailed the case about what's being tapped into here. Is it really face perception per se-- something specific to face perception-- or is it some more generic kind of object perception? OK, and further, we want to know what happens after that.

OK, so you don't need to memorize this table. I'm just going to make a few simple points with it. There are lots and lots of studies where people have tested behaviorally all kinds of different aspects of face perception, and the basic story is that by age four, you see the little smiley face means that this adult-like property of the face perception system is present by age four. So all of those signatures of face perception that are present in adults are present by age four, OK?

And in fact, much of the action is much before that. You can see that all of these things are present at the earliest age they've ever been tested. The little square means nobody's tested it at that age. So all this stuff is developing very fast, right?

OK, one particularly important thing here that you read about a little bit, but that I want to take a moment to make sure you understand because it's so interesting and cool, is the phenomenon of perceptual narrowing, OK? And this happens in face perception, and it happens in phoneme perception in speech. And I'm going to do a demo here.

So I'm going to show you a monkey face briefly. OK, it's going to come on in a second, and you just look at it. Here we go. Boom, there it is, OK?

OK, in a moment, I'm going to show you another monkey face, and you're going to shout out same if you think it's the same, and different if you think it's different, and, huh, if you don't know. How many people don't know? Yeah, it's different, right?

OK, well, OK, maybe that was too hard. Let's try it with a human, OK? Remember how hard that was? Now let's try it with a human face. I'm going to show you a human face. Everybody ready? Here we go. OK? OK, and I'm going to show you another human, and you're going to say, is it same or different? Here we go.

Duh! Easy, right? OK, so here's the amazing thing. You were better at that monkey face task when you were six months old. You could do that monkey face task when you were six months old. One of the things that you have learned from experience is that you don't need that information, and you threw away your ability to do that, but you had it when you were six months old. Isn't that awesome and interesting? That's called perceptual narrowing.

So the experiments, in particular, do the following. You use that preferential looking paradigm-- the preferential looking to the novel face in infants-- as your measure of discrimination ability. What can they discriminate? And so you show two human faces-- two different individuals, like this.

And so now, what you see is that at six months, nine months, and adulthood, people preferentially look to the novel face more than the familiar face, OK? That's just what we've just done. People like to look at the new thing, not the old thing, OK?

However, if we do six months, nine months-- oh, yeah, that's what we just said. OK, they can do that. So now, if you try this on monkey faces, you find that adults are like us. We're barely able to tell the familiar from the novel. We're not so good at monkey face discrimination. Nine-month-olds are the same. But at six months, infants can discriminate the monkey faces, and you could, too, if somebody had asked you.

So there's a very similar phenomenon with phonemes. Those of you who are not native speakers of English may be aware of some phonemes in English, if you learned it relatively late, that are hard for you to discriminate. There are sounds in Hindi-- I forget, it's like a "da" and a "ta," that sound identical to me, but that are just like completely obviously different to native Hindi speakers. And all languages have this.

So of the kinds of phonemes that are discriminated in any language in the world, you could discriminate all of those when you were six months old. And one of the things you do when you learn a language is just throw together in the same bag things that are actually different that other people can discriminate if your language doesn't discriminate it, OK? And so you get that with phonemes, and you get it with faces. OK, everybody get what perceptual narrowing is? OK.

OK, you also get this-- I mentioned this way back-- with perceiving faces of other races, right? Not just faces of other species, but if you grow up in an environment where you're only exposed to races A, B, and C, and you later have to discriminate faces of races D, E, and F, you're not so good at it, right? All the same deal.

OK, all right. So how would we know whether this change between six months and older is just maturation-- it's just some kind of developmental program that's going on autopilot independent of what you see, or whether it's learned from experience? Josh?

AUDIENCE: You control for experience.

NANCY KANWISHER: You control for experience, absolutely, like the Sugita paper. OK, so we'll get to that in a second. So we started with these key questions-- what is the initial state at birth, and we showed impressive perceptual abilities within a few days, although people dispute whether those abilities are a face-specific system. And we don't know much about what that system is, other than it works surprisingly well given the low acuity.

And we showed how it changes after that-- there's perceptual narrowing between six and 12 months, but a great deal is not known about what happens then. And so now, we're onto this question of how we're going to un-confound what changes after birth, whether it's maturation or experience. And I'm not going to have time to get to these other awesome methods. We're going to focus on controlled rearing, which you read about in the Sugita paper.

OK, so just to remind you of the basics, most of you seemed to get the paper just fine. The big idea was, again, using this preferential looking method, what Sugita et al. showed is that when they reared monkeys for six, 12, or 24 months without ever letting them see a face, and then tested them on the very first session that they ever saw faces with preferential looking, they found that on the very first exposure to faces, the monkeys looked more at faces compared to novel objects, right?

They showed that face preference, sort of akin to infants looking at the paddle, and they discriminated between faces-- very similar faces-- with adult-like accuracy.

And this part, I don't know if you found it surprising, but when this paper came out, I was like, whoa, that is crazy, right? Because as I said, the whole space of sensible hypotheses is, OK, maybe a lot of stuff is innate, but you're still going to need experience to tune it up, for God's sake, right? Who would think the entire adult ability could exist without any experience at all?

So I don't know if you had that reaction, but I think that's a sensible reaction. It's a pretty astonishing finding in that paper. Unfortunately, there's one author on that paper. It was done once, and it's such a labor-intensive study that probably nobody will ever try to replicate it. So in the back of many people's minds is like, really? Can that really be true, or is there something funny here? So I hope somebody replicates it someday, but it hasn't been done yet.

OK, the other thing that you guys presumably noticed is there was perceptual narrowing in that study. There were many interesting things in there. It's actually quite a rich paper. But after the initial testing session, no matter how long the deprivation, the monkeys were then housed in either an environment with just humans or just monkeys.

And so whether that was 6, 12, or 24 months of face deprivation after birth, they then lost their ability, at that point, to discriminate the unexperienced faces, OK? So they went through perceptual narrowing.

Does that all make sense to you guys? You got that? Good. OK, all right. So anyway, that suggests that an awful lot of the face perception system is present without any exposure to faces, and that's pretty astonishing. What experience seems to do there is not create abilities, but eliminate them for the species that you don't see.

OK, so first reaction is, really? Second reaction, is there any way to account for this in terms of some non-face-specific system? I think you can, but it takes some work, and the counter-explanations are really difficult. You can say, well, maybe this is all being carried by some more generic object system. They didn't test inverted faces, unfortunately, but if it was carried by a generic object system, why would you find the perceptual narrowing? Why would they have lost their ability for the unexperienced species? So I think that story is hard to tell.

And, of course, the other question I'm sure you guys are wondering is, what is going on in those monkeys' brains? Yeah, OK, so let's get to that. Let's talk about what we know about development of this system by looking at brains.

And first of all, there's been lots of work on this in older kids, age 5 and up, going back over a decade. And it's now clear that all of that basic machinery I showed you is present by age five, in most kids age five. It's continuing to change after that, but you can detect most of that stuff by age five, or six, or seven-- something like that.

OK, trouble is, that's cool, but age five is late with respect to experience and with respect to all those behavioral abilities that I showed you. So we need to go earlier. And so a couple of years ago, Rebecca Saxe-- who's straight up there, two floors up-- started scanning infants, OK? And this is-- as Heather can tell you-- almost impossible. It is right on the edge.

It took Rebecca and her lab many, many years of work over five years just to get the system going. There were all kinds of technical advances, like making scanning coils that were optimized for infants and comfortable for infants. Rebecca herself went to great lengths, including producing some of her own subjects. That's her son Arthur there and her two-- her grad student and postdoc who were working with her.

But all of this massive effort was worth it, because what they found was, first, for comparison, this is adults with a contrast of faces versus scenes, OK? So this is basically the PPA in blue responding more to scenes, and the FFA in here and some other face-selective bits responding more to faces in adults.

What do you see in six-month-old infants? It's astonishingly similar, right? You can really see a very similar layout of the functional organization of the brain already by six months. So that's a huge advance. That pushes way back the timeline by which these things had developed. Previously, everybody was talking about, oh, what changes after age five? Age five, come on? OK, it's mostly there by six months.

OK, now, importantly, these systems are not adult-like. Their selectivities are very different. Those regions are less selective in infants than they are in adults. But the spatial layout is there already by six months, and that importantly constrains whatever our model of development is-- it pushes the timeline way back.

OK, so now, the next questions are, what is it about that region-- or those particular regions-- that makes them become face-specific already by six months? How does the face system know to take up residence in that systematic location in the brain, and what is the role of experience in their construction? And how could we ever answer this?

One way to answer that is to use an animal model, OK? So there's been-- yes.

AUDIENCE: OK, yeah, similar question about--

NANCY KANWISHER: I'm sorry, I didn't hear. About what?

AUDIENCE: General physical layout-- like why does your stomach always come in the same place, and would it maybe be the same mechanism that guides development of any organs and the layout of the body, [INAUDIBLE]?

NANCY KANWISHER: Yes. Now, I don't know much about how hearts, and kidneys, and livers develop, but my understanding is that's pretty much wired in. There's some chunks of DNA that tell you how to build a kidney and where to put it in your body, right? And so that is one of the hypotheses here.

It's a tempting hypothesis, right? There's all that structure. It's a very tempting hypothesis, but that doesn't mean it's necessarily right. Yeah, it absolutely is. It's a hypothesis we should consider and take seriously, yeah.

OK, so but we want data. We want to find out. OK, so animal models.

So starting a few years ago, Marge Livingstone over at Harvard Med School over there-- a couple of miles over there-- started doing these also really amazingly heroic studies where she was scanning infant monkeys.

OK, now, this is really hard to read, so let me tell you what we got here. We have the cortex. This is all the same animal at different time points, and each of these things is the cortex unfolded mathematically and flattened so you can see the whole thing.

I don't expect you to know what's where. I can barely tell myself. But if you look at it, what you see is at 81 days of age, there's just blue stuff. There's no orange stuff. The orange stuff is the face-selective response.

In fact, if you look down, you start to see, oh, that looks-- yeah, yeah, OK, that looks pretty systematic. It starts replicating after that. And so the claim is you don't see face selectivity until about 170 days after birth in monkeys.

OK, that's about here. Here's another monkey for comparison. If you stare at it, you'll see, OK, there's these systematic bits-- boom, boom, boom, boom-- and maybe a little hint at 170, but-- there's some garbage up there, but nothing systematic before that. Yeah?

AUDIENCE: So there's no control of the environment? This is like monkeys--

NANCY KANWISHER: Normal monkeys who have exposure to human faces and monkey faces hanging out in the lab, yeah. We haven't gotten to controlled rearing yet. It's coming. OK, first thing is just, when does it develop in monkeys?

OK, all right. So are you surprised by this? It's not there here, and it is there there. You should be surprised. Why are you surprised? This is what you guys predicted. Quiley?

AUDIENCE: I guess I'm surprised because they were able to discriminate.

NANCY KANWISHER: Yeah, what is up with that? Absolutely! The Sugita paper really made it look like that system was innate, right? No experience-- boom! They're fine. It was just behavior, but it was a good behavioral study. So why the hell isn't it here?

Everybody with the program on how surprising that is? OK, so a bunch of things. First of all-- and it gets stable after that, and replicable. Well, the first thing is one's a behavioral measure, and one's a neural measure. Maybe those fabulous behavioral measures weren't actually being driven by some face-specific system. Wouldn't that be sad, right?

I mean, they did lots of controls. It was a nice idea. I thought they did as well as they could, but who knows? Maybe those monkeys could do that task with some other system and they didn't need their face system for it. That's one possibility, right? Then, you could have the face system not develop till later, but the monkeys could do it before.

But the other thing is, notice that Sugita didn't test their monkeys until, with the youngest ones, six months of age. So maybe it just got wired up just before-- right there-- they were tested, OK? So it seemed contradictory at first, but it's not completely, literally contradictory, yeah?

OK, all right, so now, the fact that this stuff doesn't show up until here, does that mean that this face system requires experience to develop? You know the answer, because whenever I ask that question, the answer is always no. Why does that not imply that you need experience with faces to wire up?

It's tempting. You look at it, and it's like, OK, you had to look at faces all this time before you wired it up. Boom, there it is-- very tempting. But-- is it Jessica, no? Sorry, what's your name? Yeah.

AUDIENCE: Bele.

NANCY KANWISHER: Bele. Oh, sorry, you told me that like six times.

AUDIENCE: It could be merely due to maturation, physical.

NANCY KANWISHER: Yeah, it could be just maturation. I keep making the same point, because it's important, right? Just because it shows up later doesn't mean it's learned, right? Maybe it's like puberty, or height, or something like that that's on some developmental program that's just going to unfold independent of what you see, OK?

So how would we find out? We would do controlled rearing. And that's exactly what these guys did, OK?

So in another paper that just came out a couple of years ago, they raised baby monkeys without ever letting them see a face. Much like Sugita did, they used welder's masks every time they were in the lab, so the monkeys never got to see faces. And like Sugita, they went to lengths to treat the monkeys nicely.

They heard the calls of their conspecifics, they got lots of attention, they had rich visual experience. They just didn't see faces. So it sounds kind of tragic and horrible at first, but it's actually not that bad. They had social contact and visual experience. They just never saw faces-- both this study and the Sugita study.

All right, OK, so they could hear and smell other monkeys. So the face-deprived monkeys saw no faces at all until 90 days old. And at that point, they went straight into the scanner, OK? And the first time they saw faces was inside an MRI machine getting scanned, OK?

So what do you think? Are the face-deprived monkeys going to show face patches? So there's no way to tell, because we have all these contradictory bits of evidence here, right? From Sugita, you might think yes. Hard to tell.

So let's just look at the data. OK. So here, first, is a normally reared monkey, 260 days old, just for comparison. And those are the face patches in yellow in two different monkeys here, B4 and B5, left and right hemisphere. OK, so those yellow bits are the face patches. OK, normal 260-day-old monkey.

Now we're going to see a face-deprived monkey, 260 days old. This monkey was face-deprived that entire time up until scanning. No face patches. The plot thickens-- no face patches at all.

So these guys published this paper in a very high-profile journal and said-- this is the title of paper-- "Seeing faces is necessary for face-domain formation," OK? Face domain just means face-selective patch. OK, everybody see? You deprive them of face experience, you don't see it.

OK, that's pretty interesting, and it strongly suggests that the face system is not innate but depends on face experience, doesn't it? Rare case where the answer is, yes, it does. And it feels like it contradicts the Sugita finding, right?

But not exactly. You could still wiggle out of it, right? You could say, OK, the thing that Sugita was studying doesn't use those patches, so it's not flat out contradictory. Sugita was measuring behavior; these guys are looking at brains. So it's kind of unsatisfying, but it's, in principle, possible.

Me and everyone else have been nudging these guys-- run the Sugita behavioral experiment on your monkeys, please! And I gather that's getting going, but I haven't seen any of the data yet. So we don't know how that's going to resolve.

OK, so let's take stock. What is the initial state? We showed with behavior that attention to faces is present in newborn humans, and face specificity seems likely, but it's not totally nailed, whereas functional MRI says there's no evidence for face specificity at birth-- at least in monkeys, right? That's the other side.

Yeah, OK, so how are we going to reconcile this with all the behavioral results I showed you, that there seems to be a lot of face abilities present in newborns? Well, one possibility is that face specificity exists behaviorally, but MRI fails-- oh, sorry, face specificity exists in the brain, but MRI fails to detect it. There's a whole rigmarole about whether functional MRI works well in infants. It's barely possible, as I mentioned.

It's also hard with infant monkeys. Their blood flow regulation is different. They're squirming and wiggling. There are a million issues with scanning babies, whether human or monkey. And so you could always say, well, it was there, and just the MRI data are just kind of crappy, or blood flow regulation to the brain develops later-- an argument many people have made.

However, a paper was published last week that argues against that hypothesis. The same group just showed that the somatosensory touch system is totally in place by 11 days in baby monkeys. So that suggests that you can get really nice functional MRI data at 11 days of age in baby monkeys, and it makes it less likely that this is some kind of spurious failure to detect something that was actually there.

I'm not going to test you on every little detail here. I want you to think about the logic of how you can ask these questions. OK, the other possibility is that the face abilities that we showed behaviorally are using some more generic object recognition system, not using this face-selective system in the brain.

OK, so how does it change over time? Well, we showed that behaviorally-- in humans, at least-- all the hallmarks of face-specific processing are present by age four, and we get this perceptual narrowing between six and 12 months. But then we showed that with functional MRI-- at least in monkeys-- there's no evidence for face specificity before 200 days, right?

AUDIENCE: [INAUDIBLE]?

NANCY KANWISHER: I gather they're working on it, but I haven't seen any of the data yet, yeah. OK, so that lack of face specificity is consistent with the idea that all that human early face recognition behavior is driven by a different system-- because they don't have their face system yet, presumably. But it's also consistent with this idea that it's just failing to be detected.

Even though I said that's probably not true, given you can detect other stuff, it might be true here. The ability to see things with MRI depends on where in the brain you're looking.

OK, so what about these causal roles of structured experience and biological maturation? OK, so we argued that early face experience isn't crucial for the face recognition system. That was the Sugita paper you read. But now, functional MRI is showing that face experience is necessary for the development of face patches, at least in monkeys.

And so a very sensible reaction, is what, what, what? How are we going to make sense of this? This is a big conundrum. It's going to get worse on Monday, where there's yet more contradictory data. And further, if that face system isn't innate, then what, if anything, is innate about face perception, right?

So maybe what all these data are telling us is, not that much. Maybe just a biased look at faces, or some very simple image template that's sufficient in the environment of infants to get them to look at faces. So there's a lot of studies I didn't have time to work into this lecture, where people stick cameras on the foreheads of newborns, and they collect, what is the typical visual experience of a newborn?

And then, you can take that-- you can take that experience and ask, what kind of-- you can write machine learning code to say, what would you have to build in to reliably pick out the faces in typical infant input? And it's probably not that complicated, because infants don't see that many different kinds of things, right? OK.
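The idea that something very simple could be built in can be made concrete with a toy sketch. Everything here is invented for illustration-- the 3x3 "template" (two dark eye blobs above a dark mouth blob) and the toy patches are not from any real head-camera dataset-- but it shows how a crude innate template could bias looking toward faces: just correlate each image patch with the template.

```python
def correlate(patch, template):
    """Normalized correlation between an image patch and the template."""
    n = len(patch)
    mp = sum(patch) / n
    mt = sum(template) / n
    num = sum((p - mp) * (t - mt) for p, t in zip(patch, template))
    den = (sum((p - mp) ** 2 for p in patch)
           * sum((t - mt) ** 2 for t in template)) ** 0.5
    return num / den if den else 0.0

# Dark = 0, light = 1; 3x3 patches in row-major order.
# Two dark "eye" blobs on top, one dark "mouth" blob at the bottom center.
TEMPLATE = [0, 1, 0,
            1, 1, 1,
            1, 0, 1]

face_like    = [0.1, 0.9, 0.2,   # roughly matches the template
                0.8, 0.9, 0.8,
                0.9, 0.1, 0.9]
bottom_heavy = [0.9, 0.1, 0.9,   # roughly the inverse pattern
                0.2, 0.1, 0.2,
                0.1, 0.9, 0.1]

print(correlate(face_like, TEMPLATE) > correlate(bottom_heavy, TEMPLATE))
```

A bias this crude might be all you need to get faces reliably foveated in typical infant input, after which experience could do the rest of the wiring.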

We showed early visual discrimination abilities of faces in newborn infants. But again, it's not clear that's part of the face-specific system. And we showed that the face patches-- at least in monkeys-- seem to require experience, OK? I'm just recapping here.

But now, there's this big question of, how do those face patches know where to develop in the brain? Like here they are in humans, these little purple blobs-- the occipital face area, and I've got two different fusiform face areas, because various people think there are two. I'm not sure. I don't really care; doesn't matter.

Anyway, how do they know to land right there? OK, we keep bringing up this question and dancing around it, but so far, I've given no basis for thinking about this. One possibility is that infants-- monkey and humans-- are born with some earlier kind of selectivity of that patch of brain. It's not a whole face template. It's not a whole face system. Maybe it's a bias for curvy things, right?

And then, somehow, that makes the faces land there, and the system wires itself up. It's not exactly clear how that would go. But that's one kind of story.

Another story is based on this fact I told you at the beginning of the lecture, which is most of the long-range connectivity of the brain is present at birth. And so maybe the particular connections of that patch of brain are already there at birth, and maybe that patch of connections are sufficient to somehow gate the input to that system and arrange for it to end up being face-specific, OK? So this is a very active area of investigation, and there's other very active, ongoing kinds of investigation where people are trying to understand how this development might work.

One way people are looking at this-- I mentioned this briefly, but I think it's super exciting-- is people are asking with deep nets and other kinds of modeling, what do you have to build into a system to get it to produce face recognition abilities? If you're trying to make a deep net, you're trying to make it really good at face recognition, do you need to give it a template of faces? Do you need to give it only experience with faces? What do you need to build into it to get it to be really good, right?

And so that's a very active area of investigation. And you can actually-- with some ongoing work with Jim DiCarlo's lab, we're asking, OK, deep nets don't have topography. Next door in a deep net doesn't mean anything-- what's next door versus far apart, location doesn't mean anything in a deep net-- but you can make it mean something. And then you can ask when, and whether, and how, and why you get face patches in a deep net and what computational role they serve.
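One common way to make location mean something, sketched below with an invented toy example (not DiCarlo lab code), is to assign each unit a position on a 2D "cortical sheet" and penalize wiring length: units whose responses are correlated pay a cost proportional to their distance. Minimizing that cost pushes correlated units-- say, all the face-selective ones-- to cluster into a patch.

```python
import itertools
import math

def wiring_cost(positions, correlations):
    """Sum over unit pairs of response correlation times cortical distance.

    Minimizing this pushes units with correlated responses (e.g. all the
    face-selective ones) to sit next to each other -- a patch.
    """
    cost = 0.0
    for (i, pi), (j, pj) in itertools.combinations(positions.items(), 2):
        d = math.dist(pi, pj)
        cost += correlations.get((i, j), correlations.get((j, i), 0.0)) * d
    return cost

# Four hypothetical units: 0 and 1 are "face" units with correlated responses;
# all other pairs are uncorrelated.
corr = {(0, 1): 1.0}

clustered = {0: (0, 0), 1: (0, 1), 2: (5, 0), 3: (5, 1)}  # face units adjacent
scattered = {0: (0, 0), 1: (5, 1), 2: (0, 1), 3: (5, 0)}  # face units far apart

print(wiring_cost(clustered, corr) < wiring_cost(scattered, corr))  # True
```

Under this kind of objective, patches are not built in as patches; they emerge because clustering correlated units is the cheap layout.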

Well, totally weirdly, I'm finishing early, but I'm not going to finish. I'll take questions, and then I'll maybe add a little bit more. I think that was all I had here, right. Any questions about all this?

If it feels a little bit chaotic-- I've sort of said x and not x, and x, although they're not exactly x and not x. They're just-- yeah, Sirdul.

AUDIENCE: So the fMRI tends to [INAUDIBLE] activity in boxes, right? [INAUDIBLE] you said contain millions of neurons. So is it possible that the neurons that are specific to faces are distributed at an early age throughout the brain, and somehow the function for them--

NANCY KANWISHER: They get spatially clustered.

AUDIENCE: Yeah, but the neurons themselves already exist at birth?

NANCY KANWISHER: Absolutely. That's a great hypothesis. It's absolutely possible.

Everybody get the idea? You have all those face neurons at birth, and maybe they're face-specific at birth, but they're spatially spread out. And then they have to find each other and hang out together next to each other before you ever get an MRI signal.

It's totally possible logically. It seems to be quite unlikely actually, because it would be very hard for all those neurons, with their necessary connections-- which is, after all, how they become face-specific, is what their inputs are and what they're connected to-- it'd be very hard for them to migrate spatially across the brain maintaining their connections. Yes, you're going to push back? Go for it.

AUDIENCE: Well, I think [INAUDIBLE] But since you said [INAUDIBLE], they care about what their neighbors are doing. So maybe it's just like a neighboring neuron's properties, but the [INAUDIBLE] in this chain moves it back until that brief [INAUDIBLE]. But that progression is the most efficient way to pop up.

NANCY KANWISHER: It's totally possible, totally possible, absolutely. Yep, other questions? And this is wide open. Nobody knows, right?

Let me just see what else I have time for briefly. So funny, I took out all these slides because I just thought I'm not going to run out of time, and go over, and drive everyone crazy. I moved all this stuff to the other lecture. Maybe I will just-- All right, hang on, let me just glance at the lineup for Wednesday. Yeah?

AUDIENCE: Is there-- the perceptual narrowing is really surprising and fascinating. Does anybody have a model for how that processing might work or what it might be for? I mean, it feels like a lot of the assumptions-- the common sense assumptions when we look at fMRI and when we look at neural signals-- is that they all mean positive things. But maybe a lot of those signals, a lot of that activity, might be inhibitory-- might be the opposite.

NANCY KANWISHER: Totally, yeah. But how would that explain perceptual narrowing?

AUDIENCE: Well, if what you're learning is what to ignore, then maybe it takes a lot of effort to ignore things. And not really sure. I'm not sure exactly, yeah.

NANCY KANWISHER: No, it's a good point. Like I mentioned at the beginning, one of the limitations of functional MRI is we don't know what the actual neurophysiological basis of the BOLD signal is. It could be anything that increases your metabolic costs, and hence changes blood flow.

But one of the things that increases metabolic costs is inhibiting other neurons. And so way back in the early days of, actually, PET imaging, before functional MRI came along, there was an early proto version of a face-specific paper. It didn't nail everything, but it was not bad for 1981, when I think it was published. And the person who did that paper, Justine Sergent, argued that it's very, very ambiguous what it means to find a hotspot in the brain where the activity-- the metabolic activity-- is higher, say, when you look at faces than objects.

And her point was, that could be the part of the brain that really sucks at face recognition. That's the part that's going, ah, I can't deal with this thing! What is this thing, right! That's really bad at it, and the neurons are firing a lot. It's sort of facetious, but sort of not. And it's probably not the right account, but it is an important reminder that we actually don't know what actual kind of neural activity is driving those things and whether it's excitatory or inhibitory, absolutely.

Hang on one second. I feel like there was another part of what you said that I was going to engage on.

AUDIENCE: No, it feels like somehow, possibly, connected to the perception [INAUDIBLE].

NANCY KANWISHER: Yeah. Yeah, possibly. We'd have to work it out.

AUDIENCE: In one of the lectures [INAUDIBLE],

NANCY KANWISHER: Yeah.

AUDIENCE: And then, [INAUDIBLE]

NANCY KANWISHER: Yes.

AUDIENCE: [INAUDIBLE]

NANCY KANWISHER: Yes.

AUDIENCE: Then, I'm a bit confused, because, like, you said before, almost like all the wiring is [INAUDIBLE].

NANCY KANWISHER: OK, long-range wiring.

AUDIENCE: Oh.

NANCY KANWISHER: OK? Which is very different than all the circuits that live in each little patch of cortex. Remember, I showed you this big change in the complexity of neurons and the number of connections. Oops, looks like we've lost it now.

So they're changing a lot within each patch of cortex, right? So those local circuits that are doing computations are surely changing a lot over the first couple of years. It's just the long-range connections between that patch and some remote region-- where it gets its inputs and where it sends its outputs-- that are in place early. But hang on a second. You asked something-- there's also very interesting stuff about the other-race effect. I did mention that a month ago or so, didn't I? Which is another version of this perceptual narrowing.

And in fact, a friend of mine who's a great face researcher has not yet published this paper, but she found the following. Totally, that's right-- you mentioned the adoption studies. So what she has done is ask-- did I tell you guys about this already? I feel like I did, but maybe not.

Anyway, so what you find is that people say, they all look alike. Whoever "they" are, if you've seen fewer of them than of whoever "we" are, you are less good at discriminating them. That's just what it is.

So Elinor McKone asked if there's a developmental timeline for getting your way out of the other-race effect. And so what she did was-- she's in Australia, and she got various communities of people who moved from dominant racial composition x to dominant racial composition y and who made that move at different ages.

And so what she finds is that, actually, much like learning the phonemes of a language-- which, even if you-- hey, let me back up a second. I said that with phonemes, you can discriminate all those phonemes of all the world's languages at birth, and by six months, you've thrown away the abilities for all the phonemes you can't discriminate.

However, if you then go learn a foreign language sometime between six months and, say, 12, you can become a native speaker. So you can learn them back, right? So there's another window-- it gets narrowed-- but you still have a window to learn them back, OK? After you're like 12, 15, whatever, forget it. You won't be a native speaker, right?

Same deal with the other-race effect. This is exactly what McKone found: people who moved to a different dominant racial community learned the ability to natively discriminate people of that other race if they moved before age 12. So it really seems like there's some general ability.

Oh, I remember David's other question. Why does this make sense? I don't know exactly why it makes sense, but certainly, neural activity is expensive metabolically, and we don't want to make discriminations we don't have to. And so it can be just that the nervous system is learning what kinds of discriminations it needs to make in its environment and what kind it doesn't, right?

And in the case of phonemes, part of what you're doing in speech perception is you want to know that every time I say "ba," it sounds different in all different contexts. And so part of the essence of the difficulty in speech recognition is understanding that all those different "ba"s are the same sound, right?

And so part of what perceptual narrowing might be doing is saying all those things-- "da," "ta," whatever it is in Hindi-- those are all going to count as the same thing. And that's going to help you process speech in your native language but hinder you when you try to learn a foreign language. Yeah?

AUDIENCE: So something I'm wondering with perceptual narrowing is how general like the starting point is. So I'm basically wondering-- because in the studies, they compared human and monkey faces.

NANCY KANWISHER: Yeah.

AUDIENCE: And I'm wondering if there's any correlation with how similar the DNA, like how they're able to discriminate between the faces. So whether that's different types of monkeys, or different animals--

NANCY KANWISHER: I'm not getting it, right? Early on, you can discriminate both, right? So what's the question?

AUDIENCE: So I'm wondering what other animals can they discriminate, and what--

NANCY KANWISHER: I see, I see. How far does it go? Yeah, good question. I don't know that anybody has asked little kids if they can discriminate other kinds of faces other than monkey faces. I'm sure there's some limit to it-- like fish faces? Probably, I don't know, yeah.

But there's also, actually, in terms of that extended-- I don't know the answer to that, yeah. There's going to be some limit. But in terms of the question of how long can you relearn those abilities or maintain them, it's not like perceptual narrowing is going to happen at six months automatically.

So if you manipulate it-- so in the studies on humans-- I feel like I said this in here before, but it must have been somewhere else-- if you send parents of six-month-olds home with books with monkey pictures in them, and you say, look, every night, go through the book with your kids and say, there's Monkey Joe, and there's Monkey Bob, and there's Monkey Whoever, and you have them do that from age six months to 12 months, the kids don't perceptually narrow, because they continue to get that experience, right?

Interestingly, if the parents go home and just say, look, look, that doesn't do it. You have to give them some social cue that is essentially saying, this thing is different from that thing. And if you do that with an infant, even when they don't really understand language much, they get that cue, and they learn to discriminate-- or they maintain their ability to discriminate monkey faces. Yeah?

AUDIENCE: Does that hold up even when they're past the 12 months old?

NANCY KANWISHER: Well, I'm guessing it will be just like the case that McKone showed with other race effects, right? I'm guessing the other species effect will be like the other race effect in that if you, say, start working in a monkey lab when you're eight years old-- that would be weird, but you could-- or you-- I don't know, whatever.

Anyway, that you would be able to relearn it on the same time scale that you would relearn-- relearn, or learn for the first time, previously unfamiliar races of faces. But maybe those are slightly different timelines. Yeah?

AUDIENCE: Could you do something similar with the monkey faces, but with phonemes in different languages?

NANCY KANWISHER: I'm sure you can, and I'm sure that has been done, but I don't know that literature. Yeah. Yeah, you mean like keep-- well, OK. I mean, it essentially does get done, right?

So kids who stay in environments-- let me think about this. Well, certainly, an infant who's being raised in a bilingual environment will maintain their ability to discriminate those phonemes from any of the languages they hear, right?

AUDIENCE: So you're saying, with the monkeys thing, some kind of social cue to know that--

NANCY KANWISHER: I suspect that's true. I don't know this literature well enough. I do know-- yeah, actually, it's coming back dimly. Heather, do you know this? Janet Werker--

AUDIENCE: [INAUDIBLE].

NANCY KANWISHER: OK, so Janet Werker is this amazing infant phoneme perception researcher. And I'm pretty sure that if you present infants with just, like, a TV in the background with a foreign language, even if the infant doesn't have much else to do, that's not enough. You need to look at them, and engage with them, and speak motherese-- like, hey, infant, blah, blah, right? I think you need to do all of that for them to maintain it, but I'm--

AUDIENCE: Yeah, that's correct. I think there also has to be interaction. They can't also just be watching the [INAUDIBLE]. It has to be slightly [INAUDIBLE] reciprocal [INAUDIBLE].

AUDIENCE: And the fact that [INAUDIBLE].

NANCY KANWISHER: Correct, yeah.

AUDIENCE: So even if it's not just [INAUDIBLE], it has to be [INAUDIBLE].

NANCY KANWISHER: It has to be what?

AUDIENCE: It has to be like [INAUDIBLE]. It can't be [INAUDIBLE].

AUDIENCE: Yeah, which makes me think of [INAUDIBLE] or something-- like if you interact in different ways, [INAUDIBLE].

NANCY KANWISHER: Cool, yeah?

AUDIENCE: Yeah, I have a question about how long that [INAUDIBLE] lasts. If someone spoke a foreign language when they were younger, then moved somewhere else or were adopted and then stopped speaking the language, [INAUDIBLE], could they sort of be [INAUDIBLE]?

NANCY KANWISHER: I don't know. I'm sure there's a literature on that. You don't know that, Dana, do you? Sorry, like so you're raised bilingual, and then you stop having the experience early on from your second language, and then you're re-exposed later at age eight?

AUDIENCE: [INAUDIBLE].

AUDIENCE: Yeah, you still have the-- yeah, you maintain the [INAUDIBLE].

AUDIENCE: Yeah, like after--

NANCY KANWISHER: Well, but wait--

AUDIENCE: But you're not able to speak the language, right?

AUDIENCE: Yeah.

AUDIENCE: But you still [INAUDIBLE].

AUDIENCE: But I guess you--

NANCY KANWISHER: But then, that's not consistent with perceptual narrowing.

AUDIENCE: If you're exposed to it before two years?

AUDIENCE: Yeah.

NANCY KANWISHER: Yeah.

AUDIENCE: And then you move away?

NANCY KANWISHER: Well, if it goes beyond that six-month thing, yeah, OK.

AUDIENCE: I think that's the case, yeah. You might not have the higher structure, but even if you lose the syntax and some vocabulary, you'll have a better accent than someone who did not have that early experience, who might not be able to differentiate [INAUDIBLE]. But--

AUDIENCE: You just [INAUDIBLE].

AUDIENCE: [INAUDIBLE], I think that's correct.

NANCY KANWISHER: OK, good. One more question. Josh?

AUDIENCE: So do we know of cases where there's [INAUDIBLE] a mismatch between [INAUDIBLE] sort of information? Like--

NANCY KANWISHER: Like this?

AUDIENCE: Yeah, like-- with this property in some of the domain of some of the [INAUDIBLE]. Basically be [INAUDIBLE].

NANCY KANWISHER: Oh, god, I don't have my dictionary of knowledge filed that way so I can pull up an instance of that, but I'm sure there are loads of those.

AUDIENCE: [INAUDIBLE].

NANCY KANWISHER: Yeah, well, because when we-- because we're making all these assumptions about which behavioral ability is subserved by some particular activation in the brain. And mostly, we don't know, right?

We know when we have the rare opportunities to do causal tests. We have a better idea that that system is at least causally involved in that behavioral ability. But yeah, often, those links are much looser than we'd like. All right, see you guys Wednesday.