Summary: Exploration of sensory restoration including brain-machine interface, restoring movement and touch and vision, including gene therapy.
Speaker: Michael Cohen

Lecture 12: Brain-Machine Interfaces
[DIGITAL EFFECTS]
NANCY KANWISHER: All right, so we are very, very lucky today to have a guest lecture from Michael Cohen.
Michael got his PhD-- MICHAEL COHEN: Please keep this brief.
NANCY KANWISHER: It will be very brief.
MICHAEL COHEN: I don't like being introduced.
NANCY KANWISHER: OK, well, just suck it up, Michael.
MICHAEL COHEN: I get anxious.
NANCY KANWISHER: Suck it up, Michael.
MICHAEL COHEN: I'm working on it.
NANCY KANWISHER: OK.
[INTERPOSING VOICES]
Michael got his PhD from Harvard.
He then did a postdoc in my lab, and he is now a professor at Amherst College.
And he does really interesting work on perceptual awareness and working memory and their brain bases.
But he's also just an awesome lecturer, and when I looked at the slides for the course that he teaches at Amherst College, I thought, oh my god, I can't teach as well as Michael.
He's so amazing.
MICHAEL COHEN: Oh my god.
NANCY KANWISHER: At least I will get him to give one of his coolest lectures here.
So take it away.
MICHAEL COHEN: Cool, thank you.
I have to be super honest.
I'm always a little reluctant to give guest lectures, because it's kind of awkward, because you don't know me.
I don't know you.
You all know them.
You all are building a rapport with them.
And I feel like when people don't know the person at the front of the class, their first thing, their first instinct is to get quieter.
And so I literally always put on every slide of every guest lecture, I'm begging you to interrupt me.
It makes life easier for me.
It lets me know what you want to know.
I get to have a sip of water. I don't get as nervous and anxious.
So if you have any question whatsoever, I am pleading with you, you'll be doing me the favor if you interrupt me and ask me a question.
So generally, today, what we're going to be talking about is sensory restoration using both brain-machine interface and a little bit of gene therapy.
And there's a lot of work that's been done on all of these different senses, from taste, to smell, to vision, to audition, and so forth.
But the ones we're really going to focus on primarily today are going to be movement and the sense of touch-- so somatosensory cortex, motor cortex-- and then vision and visual restoration.
And so we're going to start at the beginning, talking about restoring movement and touch, primarily with brain-machine interfaces, or brain-computer interfaces.
That's what BCI stands for.
And then we'll talk mostly about vision, which is going to be a combination of brain-computer interface and gene therapy.
If there's a little bit of time-- which I bet there won't be-- I'd like to talk a little bit about emotions and feelings.
If you did the reading that we assigned for this class, you read stuff about, like, curing depression, or dealing with Parkinson's, where you can do these really extreme surgeries with chronic implants in your person to help treat these issues.
But let's start at the top.
Let's see, we'll get there-- movement and touch.
So with movement and touch, there's two big questions that people are trying to answer and make progress on.
The first is just asking, can we actually even restore motor movements in an artificial limb?
If you're paralyzed, if you're a quadriplegic, if, for whatever reason, maybe, you've even lost a limb, is it possible, with brain-machine interface, to actually give you mind control over an artificial limb?
And then, more recently, people tried to take it a step further by actually saying, can we restore a sense of touch to those limbs?
So it's not going to just be that you can make an artificial hand reach out and grab something, but that once it actually touches whatever thing you're trying to grab, you yourself can feel it.
You can be like, oh, I just felt something hard on my middle finger.
I felt something soft on my ring finger.
OK, so before I go into that, I just want to do a really quick, brief recap reminder about motor cortex, just to make sure we're all unambiguously on the same page.
So this is your brain, obviously, and as you probably know, there are these two strips here right along the side of your head.
So on you, it's just literally taking your two fingers and going like this.
And the first one we're going to focus on is motor cortex.
OK, now I think basically everyone here took 9.00, is that right?
Is basically everyone here a course 9 major, pretty much?
Is that right?
No?
All right, cool, perfect.
Well, anyone-- doesn't matter what you're studying-- does anyone know what the dominant organizing principle of motor cortex is?
So if you learned later that auditory cortex has tonotopic maps or frequency maps, visual cortex has retinotopic maps, does anyone have any sense of what the map system is on-- yeah?
AUDIENCE: In most of the areas, are the largest [INAUDIBLE]?
MICHAEL COHEN: Yeah, that's in the ballpark of what I was looking for.
What I was mostly just looking for, actually, was something-- you were a little more advanced, even, than what I wanted-- which was just that you can almost map out your entire body on motor cortex.
There's actually this like 1:1 correspondence between all these different parts of your motor cortex and all these different parts of your body.
And you can actually-- scientists, and doctors, and physicians, they go through, and they do do this sort of mapping where they're like, oh, when you wiggle your toes, you see this happen.
When you move your hand, you see that happen.
And they've kind of figured this out in a bunch of different ways.
There's a bunch of different ways that you can go about mapping a person's motor cortex.
There's a bunch of different types of experiments you can do.
But the critical results generally just seem to be that you kind of notice that every time a person moves a particular part of their body, a corresponding part of their brain will light up.
And actually-- and I don't have time to go into this-- if you stimulate that person's motor cortex, maybe you've even seen these videos, either with TMS or direct electricity in the brain, you'll actually like ZAP someone's brain and like their hand will flinch.
I think Nancy-- NANCY KANWISHER: Yeah.
So on Nancy's Brain Talks on my website, there's videos of me getting zapped in the motor cortex, making me twitch-- [INTERPOSING VOICES]
MICHAEL COHEN: But just to get a crude sense of what these maps can look like if you use something like fMRI, which I know you all talk about a bunch in this course, you can just see this kind of correspondence.
So this is like we're looking at a person's head from the very top.
So it's like this part here corresponds to that here-- I think it's like upside down, like that.
And you can see that it's like, OK, this is their-- if they move their left fist, this little part lights up.
If they move their elbow, this part lights up.
And as you can see, there's actually these bilateral lips.
Does anyone have any sense of why are the lips symmetrical?
Why would the lips be symmetrical whereas like a foot, an elbow, and a thumb are not?
Anyone want to take a guess?
How come I can light up one part of my brain for an elbow, but I seem to always light up both sides for my lips?
Does anyone have a theory?
Think about what happens to your mouth when you're talking?
Here, put it another way-- try to talk, but only move the left side of your mouth.
It's like a little bit awkward, but like you can't really do it.
So you'll see actually that with your lips, because they usually kind of rise and fall together, you won't get one or the other.
They usually kind of go hand in hand, whereas other things can be separated-- I can move my left thumb, wiggle my right thumb-- you'll see that that actually can be separated.
So then in a very crude way, the way you can think about how you go from motor cortex to your muscles is you have a very-- I'm going to make it a very simple circuit.
It's obviously very much more complex.
But this is your entire body here.
This is your central and peripheral nervous system.
Your spine and the nerves going out into your limbs.
And you'll have some signal that will start in your motor cortex.
And then it will make its way down, and it'll go through your cerebellum.
It'll go through your spine.
And then eventually, it'll hit these muscles and cause the muscles to twitch.
And then that allows you to move an actual limb.
And this is just like a little GIF showing you the different ways in which the signals propagate down from the motor cortex to a limb and causes that to move.
And so basically the idea that engineers and researchers have had is, with a little system like this, can we replicate it?
But rather than having it move your actual biological limb, actually twitching your muscles and having your arm move, can you do it with an artificial limb and a computer?
And this is basically the goal of this sort of restoration and this sort of brain machine interface in this particular case.
So in effect, what you want to end up with is an artificial limb that you actually literally will move with your mind, that you will basically just think like, OK, hand, raise up.
And then there's a robotic arm that will raise up just like you want, that you can have control over.
So the way that this works, there's a couple of like critical steps.
One is you've got to first get a computer that can be reading information in your brain.
It can be reading patterns of activity.
And you train that computer to decode a pattern of neural activity to learn when your brain is doing this, it means move the arm that way.
When your brain is doing that, it means move the arm that way, and so on and so forth.
And then what you want to do is you want to translate that neural activity that you've read out from this computer into an actual motor action with a limb.
If that didn't all make sense, let's break it down step by step.
So this first idea here, you're going to train a computer to decode a pattern of neural activity.
I think you all have talked a little bit about decoding?
NANCY KANWISHER: Yes.
MICHAEL COHEN: So just like a really crude thing, just to try to get everyone on the same page.
If you don't get this part-- ooh, wait, this is important.
If you don't get this part, definitely stop me, even raise your hand and say something like, can you go over that again?
Because if you don't get this part, the next few steps are going to be a little hazy for you.
But imagine we take you down to the scanner-- it's on the other side of the building-- and we look in your brain while you're looking at images of random things.
So I show you an image of a shoe.
And what we can do is we can scan your brain and we can go into your visual system in this case.
And we can actually look at a pattern of activity.
In this case, there's these little things called voxels.
And we can train a computer to recognize like, hey, when you see this pattern, this pattern means shoe.
This is what Michael's brain looks like when he's looking at a picture of a shoe.
Then we show me now not a picture of a shoe, we showed me a picture of a cat, and we see a slightly different pattern, and we tell the computer, hey, this here, this isn't a pattern of a shoe anymore.
This is what a cat looks like.
So the computer is over there doing mathy things and being like, OK, that's a shoe, that's a cat, that's a house, that's a face, et cetera, et cetera.
And then what you can do is actually you can test it.
And you can say, hey, computer what is this here?
And if you had to guess, does this pattern here look more like the shoe pattern or more like the cat pattern?
It looks exactly like the shoe-- or more or less like the shoe pattern.
Congratulations, you all just did like decoding.
A very crude way, but you just basically did it.
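To make that decoding step concrete, here is a minimal sketch of the kind of pattern classifier being described-- all the voxel data, class means, and trial counts below are invented purely for illustration:

```python
# A toy version of "decoding": train a classifier on labeled voxel
# patterns, then ask it to label a pattern it has never seen.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n_voxels = 100

# Pretend "shoe" and "cat" trials evoke slightly different mean patterns.
shoe_trials = rng.normal(loc=0.5, scale=1.0, size=(40, n_voxels))
cat_trials = rng.normal(loc=-0.5, scale=1.0, size=(40, n_voxels))

X = np.vstack([shoe_trials, cat_trials])
y = ["shoe"] * 40 + ["cat"] * 40

decoder = LogisticRegression().fit(X, y)  # the "mathy things"

# Test: does a new pattern look more like the shoe pattern or the cat pattern?
new_pattern = rng.normal(loc=0.5, scale=1.0, size=(1, n_voxels))
print(decoder.predict(new_pattern))  # most likely ['shoe']
```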
Here are some actual data.
So this is the response pattern you get in someone's ventral pathway when they're looking at an image of a face.
This is the same person when they're looking at a picture of a house.
So look at that image, look at that image.
Now what are they looking at here?
Are they looking at a face or are they looking at a house?
Yeah, they're looking at a face.
And you can kind of just intuit it.
You're like, OK, there's more blue between these two than there is between those two.
There's this blue-red little pattern there, those red things on top of the blue thing here, red things on top of the blue thing there, OK, that's a face.
We'll do one more.
This is what a shoe looks like.
This is what a chair looks like.
What's this one, a shoe or a chair?
This one's a little trickier, but, yeah.
It kind of looks more and more like a shoe.
This is what I mean by teaching a computer to recognize and decode activity.
Except rather than doing it with shoes, and chairs, and cats, and whatever, you're going to do it with move the arm to the left, move the arm to the right, move it down, up, left, inside, outside, and so on and so forth.
And actually what sometimes people do-- and this is going to be something we're going to need to do with these limbs-- is actually kind of like turn that into a novel thing that you haven't necessarily done before.
So for example, in the vision domain, what you can actually do is tell a computer, hey, don't just label these, don't just give me like a word like shoe, or car, or chair, or face, try to actually reconstruct it, like try to turn this into something novel and something new.
And so what you can do is you can show a computer example after example of this brain pattern corresponds to this image.
This brain pattern corresponds to that image.
And then when you show it new things that it's never seen before, you can actually have the algorithm try to reconstruct what it thinks that you were seeing.
So this is an algorithm's attempt to draw for us-- I don't know why I did it in quotation marks, it is drawing-- to draw for us what it thinks that the person was seeing.
It doesn't know what the person was seeing.
This is literally like the computer being like, well, based on your brain, I think it was going to be this, I think it was going to be that.
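The reconstruction version of this swaps the classifier for a regression from brain patterns to pixel values. A minimal sketch, again with entirely made-up data standing in for real recordings:

```python
# Toy reconstruction: fit a linear map from voxel patterns to pixel
# values on training trials, then "draw" the image for a held-out trial.
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(1)
n_trials, n_voxels, n_pixels = 200, 150, 64  # 8x8 toy images

shown_images = rng.uniform(size=(n_trials, n_pixels))  # what was seen
encoding = rng.normal(size=(n_pixels, n_voxels))       # fake "brain code"
brain = shown_images @ encoding + rng.normal(scale=0.1, size=(n_trials, n_voxels))

recon = Ridge(alpha=1.0).fit(brain[:-1], shown_images[:-1])  # train
guess = recon.predict(brain[-1:]).reshape(8, 8)  # reconstruct the held-out image
print(guess.round(2))
```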
You can sometimes be cute and spell out the word "neuron" based on reconstructed activity.
You can get fancier, spell out words like "brains."
You can sometimes even do it with houses-- or faces, sorry.
So this is an attempt to redraw the face that a person saw.
And these are all examples of this sort of-- you decode a person's brain and then you kind of like create some sort of output that serves as the foundation of this sort of brain-computer interface.
Except what I was telling you about is you see a picture, you measure the brain activity, and then you want to reconstruct an image.
What we're going to do in this case because of the motor system is you're going to think of a movement.
And from that movement, you're going to again measure brain activity.
And the output here isn't going to be reconstructing an image.
It's going to be moving your artificial limb.
So it's just literally this pattern of activity corresponds to this motion.
Let's try to execute that as best we can.
And we're going to just literally try to replicate it in a simple system going from brain pattern to motion of a limb.
So the way that this works-- this gets a little intense now-- is what you will actually do is you will directly implant a series of electrodes in the motor cortex of the person or the animal that you're interacting with.
So you'll literally have this sort of chronically implanted device that'll be on their head that literally interfaces directly with the cortical tissue.
So to give you a sense of what it looks like, it can be these tiny little things here that have-- I don't know-- 120, 200 little needles-- I don't know how many there are.
But several dozen of these little things measuring brain activity, very tiny, this is smaller than a dime.
And what you'll literally have to do is go right into the cortex.
You're literally like kind of squish it right into the brain.
So it's directly measuring neural activity.
This here is an actual patient.
He's a quadriplegic, where actually you can see he has a respirator implanted so you know his limbs don't move in any way.
And this is what it looks like when he has this device kind of on his head, where it's actually-- the electrodes are already in his motor cortex.
So here's a little cartoon kind of explaining the step-by-step procedure.
So you have those implanted electrode arrays in whatever part of the brain you're interested-- in this case, motor cortex.
And then what you will do is you'll basically transmit the measurements of neurons firing that are in the brain, you transmit that signal-- you can do it wirelessly, you can do it with a cable, it doesn't really matter.
And what you do is you have your computer that's learning, oh, how to interpret these firing patterns.
In the same way that a couple of minutes ago we were doing the things where you all learned what a face pattern and a house pattern look like, you have some computer that's learning that, oh, this pattern of neural activity corresponds with this sort of trajectory.
That pattern corresponds with that trajectory.
And it computes the trajectory that it thinks that it's measuring.
And then you literally just have an arm actually move.
And then this becomes sort of like a closed-loop circuit where as the arm is moving successfully, the person watching the arm's like, wait, no, no.
Go a little bit more to the left.
Go a little bit more to the right.
Now go down, pick up the bottle, there we go.
So it's actually a real-time sort of feedback circle.
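In code, the decode-and-move loop can be sketched as something like the following-- the linear readout, bin width, and channel count here are stand-ins for illustration, not the actual pipeline any particular lab uses:

```python
# Toy closed-loop step: bin spike counts, decode a 2-D velocity with a
# linear readout, nudge the arm, and repeat while the user watches.
import numpy as np

rng = np.random.default_rng(2)
n_neurons = 96                               # e.g., one implanted array's channels
W = rng.normal(size=(2, n_neurons)) * 0.01   # readout weights (fit during calibration)
dt = 0.05                                    # 50 ms update bin

def decode_step(firing_rates, position):
    velocity = W @ firing_rates       # rates -> (vx, vy)
    return position + velocity * dt   # move the arm a little

position = np.zeros(2)
for _ in range(10):                   # ten 50 ms cycles of the loop
    rates = rng.poisson(10, size=n_neurons)  # stand-in for binned spikes
    position = decode_step(rates, position)
print(position)
```

The visual feedback closes the loop: the user sees where the arm went on this bin and adjusts what they imagine on the next one.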
So this stuff got initially done with primates.
So I'm going to show you an image here-- or a video here-- of what happens.
So this is our participant.
Here you can see that he's physically constrained so he's not really moving.
He can't really reach out and grab this thing.
The electrode arrays are implanted into his brain up here.
These two little things are juice rewards.
And every time he can take his arm-- which you're going to see in a second, his robot arm-- and he could reach out and grab the orange square that's this thing right here, he'll get a little juice reward.
So with the first one that he does, you'll actually notice when the hand kind of grabs the little square, if you watch him, he like gets excited and drinks the little bit of juice.
And then this is what this looks like.
So then they're going to move it again so we can see it.
Hold it up-- he tries to get his arm.
Notice how he's watching it, trying to do it-- kind of misses a little bit, doesn't really get the juice reward yet for this one.
He gets in the ballpark, but he doesn't get the juice reward unless he actually grabs it.
But with a little bit of training and a little bit of practice, they can get really good at this.
So this is a different monkey and now it's going to be getting marshmallows.
I don't know if this is true, but a friend of mine who works with these monkeys told me that like monkeys go gaga over marshmallows.
Like if you want to motivate them-- like if I want to motivate you, I'd give you money-- wait, you're MIT students, I wouldn't give you money, I'd give you better grades.
But with them, you give them marshmallows and they'll do whatever you want.
And so now he's got these little pinchers.
So it's not like an actual hand with five digits, it's like a pincher with two.
And what he's going to do is he's going to now try to actually feed himself.
So it's not going to just be grab it, it's going to be grab the marshmallow off the thing, bring it to your mouth and eat it.
Every now and then, you might see some flailing back here-- this black thing, that's actually the cord that's going into his motor cortex.
So you can actually see the interface that's going right into his brain.
So we'll just watch a few trials of this.
So it's a little slow and it's a little labored, but like, by and large, this is pretty good.
That little thing set back here-- that's it moving.
Watches it pretty intently, sometimes he struggles with grabbing the marshmallow, but like he's in the ballpark, and then, yay, he gets a little marshmallow.
OK.
So this is basically sort of the final end result type of thing that you can get with these animals, is again, you're reading the pattern of activity in the motor cortex.
A computer reads that activity and translates it into motion in real time.
But of course, what you ultimately want to do is do this for humans, do it with people who have lost the limbs, do it with people who have some sort of disease or something.
I can't remember what's wrong with this woman exactly, but she's lost the ability to use her hands very well.
So what they're going to do is they're going to do the exact same system-- you can already see on the top of her head, there's this little thing.
That's the interface in her motor cortex.
And she's going to take this really large gnarly arm that has like four elbows or something.
They're going to raise these little purple balls and she's just going to try to reach out and grab them as best as she can with her motor-- oh gosh.
[VIDEO PLAYBACK]
- We're able to control a prosthetic or a robotic arm simply by thinking about the movement of their own paralyzed hand.
And they did that using the investigational BrainGate neural mapping system.
So they thought about using their own arm and hand as if they were reaching out themselves with their own limb.
And the robotic arm moved much the way their own arm would.
[END PLAYBACK]
MICHAEL COHEN: If you watch this entire special-- it was on like 60 Minutes, or CBS, or whatever, blah, blah, blah-- there's this one little part that I think is really cute, where they were talking about, yeah, we were sitting there one time, and Jeannie-- and I don't know if that's her name, I'm making that up-- we were sitting there, and we were like, oh blah, blah, blah, we need to get the camera to do this little thing.
And Jeannie was just sitting there bored.
And then all of a sudden, we heard this "ah," and they like looked over, they thought she had fallen off the chair.
But what had actually happened is she had picked up the water bottle and was giving it to herself.
And she was just like, I haven't done this in like 30 years.
Oh my God, like I can actually-- I have the independence now to grab the water bottle.
I didn't need someone to do it for me.
I could just do it.
And I just-- I was so giddy.
So I think that's really charming, only a few people are smiling.
Maybe it's not as charming as I think.
But this is the type of thing that you want to have these sorts of devices for, is to give people who have lost these abilities a little bit of autonomy, a little bit of ability back.
AUDIENCE: So the person can practice, right?
So how much of it is the person learning versus the learning algorithm?
MICHAEL COHEN: It's a little bit of both, actually.
Like you actually-- it kind of goes both ways because sometimes you actually learn like little tricks.
If you listen to the people talk about it, they're like, oh, I've actually noticed if I like overexaggerate certain thoughts that that works a little bit better.
Like even though I want to go to the left, I've actually noticed if I think a little bit more, the system seems to work better.
So it's not the type of thing where it's cut and dry.
Because if I asked you like, hey, like don't move, but imagine having your arm go all the way out to the left.
Now do it again-- like those patterns might not be exactly the same.
So it's like it takes a little bit of practice and refinement on your part, but there's definitely also like a learning process for the algorithm itself to know what those patterns relate to.
Sometimes in systems, you'll actually see people when they practice-- like say you and I were trying to do it and we're not the patient.
Like if we're not using one as invasive as this, you'll actually see the person be like, OK, machine, that means left.
And they'll actually even do the actions to try to have like cleaner signal, which raises the question of like, wait a minute.
How could we make it so that this system is less invasive?
So I think you all have talked about a bunch of different methods-- NANCY KANWISHER: Let me chime in for one second.
MICHAEL COHEN: Yeah.
NANCY KANWISHER: With a largely irrelevant thing.
But just to get-- you may be thinking, OK, people will get paralyzed.
Like icky old people, like not relevant to me, whatever, strokes, like other-- not relevant.
Just as a little kind of a piece of information, when I was 15 years old, I went to the beach with some friends, jumped off a friend's shoulders, hit the bottom, broke my neck and was quadriplegic.
Now I was lucky, because within a few months, it was spinal shock rather than actually severing the spinal cord.
Otherwise, I wouldn't be walking around here today.
But it turns out, this is actually the most common way that people become paralyzed.
It's surprisingly common.
And it's not that hard to do.
So, one, don't dive in shallow water.
It's really easy to become quadriplegic.
And two, this is something that really matters to people's lives.
OK, sorry.
MICHAEL COHEN: I love how-- I love the like ease with which she's just like, oh yeah.
I broke my neck and I was quadriplegic.
And she's just like yeah.
Like for me, the first time she told me that, I was like, what the crap is happening?
But she's like, oh yeah.
I have the scars to prove it.
She probably does-- do you have scars on your neck?
NANCY KANWISHER: I do.
MICHAEL COHEN: Oh my.
All right.
We'll say that we don't want to have, as Nancy put it, an icky old people situation where we have to actually interface directly with the brain by doing neurosurgery.
What is a less invasive method of neuroimaging that you guys have learned about?
So you've learned about fMRI, you've learned about single-unit recordings.
But what's like a super-- I'm going to give you a hint-- a super noninvasive method of measuring brain activity?
Anyone have any idea?
Yeah?
AUDIENCE: You could put a bunch of electrodes onto the [INAUDIBLE].
MICHAEL COHEN: Exactly.
You could measure people's brain activity not by having to interface directly with the brain, but just by interfacing with the scalp.
And so this is actually-- there's not as much developed here, it's newer.
But there's actually a lot of people who are working right now actively on developing brain-machine interfaces for individuals who don't have these sort of medical needs using electrodes on the scalp.
And so in this situation here-- I'm not making this up-- this is a system that's designed to let you play a video game with just your mind.
And so there are these three little arrows here.
And actually, all this system is designed to do is have the character either walk to the left, walk to the right, or walk straight.
There actually are commercial products that you can already buy that look kind of like this.
They're really pretty janky, crappy, to be totally honest-- kind of little systems that go on your head like this.
And they can do simple things like turn the lights on, turn the lights off, turn the TV on, change the channel.
There are some videos-- you can go on YouTube if you're interested-- of people like navigating in a wheelchair with a system like this.
Because it actually turns out that if it's just like left, right, forward, backward, even though this system is extremely crude and doesn't have much precision, it can learn to do something as simple as that.
So you actually have a lot of different applications of people trying to do this in real time.
There was a time once when I was at Harvard when I was in grad school, and I remember seeing a couple of people outside with a little like-- I think it was like a robot rhinoceros?
I was about to say unicorn.
I think it was a little robot rhino.
And they were trying to get it to like go left or right and it kept falling over, but it's a thing.
You can apparently work on it.
But as Nancy was talking about, there's a ton of different types of situations where you could think about how this could be applied in medical cases.
Not just for people who are paralyzed, my personal favorite-- because as Nancy said, I like to study perceptual awareness-- is for people who are locked in.
So this is actually a patient who has locked-in syndrome.
So it's been determined that she is conscious, that she is alive, but her body is basically incapable of moving.
She doesn't actually even have the ability to blink voluntarily.
She has to be hooked up to a respirator so that she can even breathe.
And actually here, what she has been outfitted with is an EEG system that is crude enough that she can at least move a cursor to the left or the right to answer pretty basic yes/no questions.
So even though she has no demonstrable ability to show you her awareness or the fact that she's conscious, you can actually-- I'm going to make this up-- be like, hey, is your name Samantha?
And the cursor will make its way over to the left and say yes.
If you're like, is your name Charlotte, it'll make its way over to the right and say no.
And that's kind of a basic way that she can communicate.
Less intense, you can use these sort of electrodes not just to play video games or just answer yes/no questions, but like I said, to move these wheelchair type systems, and so forth.
So-- oh, my computer is having a hiccup.
It's also possible, actually, that you can use these systems not by reading brain activity directly, but by reading other signals.
And specifically, by doing it with your muscles.
So let's imagine that you are a soldier-- I'm going to show you a soldier in a second-- and you go off to war.
You're in an unfortunate accident and you lose a limb.
It turns out that there's a surgery that they'll sometimes do where they'll take the neurons that normally would go from your brain down to your arm and they'll rewire them.
So this part's a little tricky, so pay attention here.
Usually you've got a-- a bunch of wires basically go from the motor cortex, go all the way down to my arm.
But I've lost my arm, OK?
Let's pretend I've lost it from my elbow down.
So what you could do actually is you can take all these wires that are still here in my bicep, and my tricep, and so forth, and you can reroute them.
So now you're going to reroute them into my pec muscle.
And that's what this little circle over here is supposed to be.
So now when you say, hey, Michael, do this with your hand, that signal that will come down my arm, it's not going to go into the void of nothingness because I'm missing a limb.
That signal's been rerouted to my pec.
And so now there will be like a-- I'm trying to pretend like I have muscles, which I don't.
It'll be this little like twitch there that'll be like my pec.
And then if you're like, hey, Michael, now open your hand, it'll be like-- I have no idea how to open this part of my-- pretend I'm twitching part of my pec a little bit.
So now you actually have this situation where as a person who's lost a limb, every time I think of a movement with that limb, the signal is going to get rerouted to different muscles on my body.
And so now what you can do is you can actually hook up an artificial limb to those now reinnervated muscles and move the limb accordingly.
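A crude sketch of that readout step-- the channel names, calibration patterns, and matching rule below are all hypothetical, just to show the shape of the mapping from reinnervated-muscle signals to a hand command:

```python
# Toy EMG decoding: compare the current pattern of muscle activity
# against calibration templates and pick the closest hand command.
import numpy as np

# Calibration: average rectified EMG over hypothetical sensor sites
# (upper pec, lower pec, lat, bicep remnant) while the user *thinks*
# each command. All values are made up.
templates = {
    "open_hand":  np.array([0.8, 0.1, 0.2, 0.1]),
    "close_hand": np.array([0.1, 0.7, 0.1, 0.3]),
}

def classify(emg_sample):
    # Nearest-template match: whichever command's pattern is closest wins.
    return min(templates, key=lambda cmd: np.linalg.norm(emg_sample - templates[cmd]))

sample = np.array([0.75, 0.15, 0.25, 0.05])  # one moment of muscle activity
print(classify(sample))  # -> open_hand
```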
So let me show you a video of what this looks like.
[VIDEO PLAYBACK]
- Tell me if you feel any kind of sensation, first of all, anywhere in your phantom limbs.
Like if you feel it like on the side of your arm here, or elbow, or pinky, like tingling, any sensation at all.
- Tingling.
- Underneath this area, you're getting--
- Yeah.
There's something that's going on there.
- What the procedure of targeted muscle reinnervation does is take those free nerve endings that formerly controlled the hand, wrist, and elbow, and shoulder, and moves them to remaining muscle groups that are still there.
When your brain thinks open the hand, it then fires the end point of that nerve and contracts that muscle.
And we pick that up and can map it to the missing limb.
- Oh that's right down the bird finger.
- You're flipping me the bird right now?
- I think so.
[END PLAYBACK]
MICHAEL COHEN: OK.
So a couple real things before we keep going.
Those little black dots she was drawing on him, that was them mapping, OK, when you have a thought, or when I press this part, you feel it in your phantom limb.
You feel it in the limb that's no longer there.
Just then when he was like, I'm flipping you the bird, what he was saying was, oh when, you touched me right there, it feels like I can feel a sensation on my middle finger.
And so that's because, like she was saying, there's been this reinnervation from this procedure that's moved it so that there's now this link between all these muscles on his shoulder-- or I guess not really his shoulder, his pec, and his back, and so forth that go to the phantom limb.
[VIDEO PLAYBACK]
- So that was the magic point.
- I found the magic spot.
We should put a big star on that one.
- Bingo.
We can go home now.
- Yeah.
Mission accomplished.
[END PLAYBACK]
MICHAEL COHEN: And now what they can do is they can outfit him with these artificial limbs that have little sensors that are picking up on that muscle activity.
And now this is him moving them.
[VIDEO PLAYBACK]
- The arm control-- close.
So I just think open.
And now the shoulder out.
The shoulder back.
- Shoulder extend.
Ready and go.
Shoulder flex, ready and go.
[END PLAYBACK]
MICHAEL COHEN: I'm going to show you one more just because I like this-- oh yeah.
NANCY KANWISHER: Does that mapping need to be accurate?
Like do they need to try to map-- match the corresponding muscles and-- how do I express this?
Does a muscle-- where the nerve comes in, does that need to match where that nerve would have gone originally?
Or can they relearn a totally random one?
MICHAEL COHEN: They can relearn a pretty random one.
I think there's definitely like constraints, as in things that are on this surface of your arm will get reinnervated up here.
Whereas things on this surface will get reinnervated back there.
But actually-- this is what I hear, anecdotes from the patients talking.
They're like, you don't really have to learn a new thing, you just think like, do this with your thumb.
And then all of the sudden, your chest starts twitching.
Or like, do this with your pinky and your back starts twitching.
And they're like, it's actually relatively seamless.
And they even claim like it gets to a point once you have these limbs on that you don't even really think about the muscles, you just think about the limb and it just sort of becomes almost second nature.
This is another one here-- AUDIENCE: I had a question.
MICHAEL COHEN: Oh yeah, sorry.
AUDIENCE: So I'm just trying to understand why this-- fundamentally why we need to remap the whole thing?
So is it more from an engineering sort of standpoint, saying, if there is sensitivity to those nerves at the stump, you want to reposition them to some muscle region which can-- engineering?
MICHAEL COHEN: Yeah, it's almost-- it actually is in a lot of ways an engineering thing, because imagine all those muscles basically come from this part of the forearm.
So let's imagine I lose my hand.
And it's like this.
If you put these sensors here, that's really-- it's not a huge patch of muscles.
So you spread the signal out so that then you have a system that's like, oh, if there's something over here, it means thumb.
If there's something way over there on the other side of the body, it means pinky.
So actually by reinnervating it in that way, it just makes it a little bit easier for the system to get-- AUDIENCE: Is that the real reason?
MICHAEL COHEN: That's the primary reason, yeah.
If you had better circuitry, better systems-- yeah, you could imagine just being like, oh, we can just read out the little bit that's left of the forearm.
Did I see-- oh yeah?
AUDIENCE: I guess I'm wondering like what the system does in response to a lot of noise or like if the person's not clearly thinking something?
And also, kind of related, what if a person thinks about doing something that's not actually possible for the robot arm to do?
MICHAEL COHEN: Yeah, it'll-- Oh.
AUDIENCE: Like limitation?
MICHAEL COHEN: Yeah, yeah, yeah.
Oh, I thought what you were going to say is like, can you take the arm and basically have it like go like wrap around and like something like that.
AUDIENCE: I mean, like I imagined that what the robot arm is capable of is slightly different from what an actual arm is capable of.
So I'm just saying in terms of what the robot arm could do.
MICHAEL COHEN: Like is an example like you try to lift up something that the robot arm can't handle?
AUDIENCE: Yeah, maybe.
Or like-- MICHAEL COHEN: The robot arm will break if you try to rotate it too many times.
So like you all are too young to know The Exorcist, but let's pretend you try to like rotate it on an axis-- eventually, like anything else, it would get tangled up and it will break.
There probably are little failsafe systems in there so that it will stop, and sometimes if you try to turn something and it can't turn anymore, it kind of like has a little anxiety attack like that.
And then it'll go back.
So yeah, it's a fallible system.
It's not like the strength of Iron Man wrapped up in-- you know what I mean?
So the types of engineering things that you would think happen if you try to push a device too far are the same types of things that will happen in these cases.
AUDIENCE: But you could, in principle, engineer it to be much stronger than the original person's arm?
MICHAEL COHEN: You could.
People are.
I know that DARPA does this right now, right?
Exactly.
Actually, this guy touches on this-- he talks about weightlifting in a second.
Yeah?
AUDIENCE: So I'm not sure if I completely understand the point of like spreading out those neurons, but like could it happen that you spread them out and then let's say you're trying to move a certain part of your body that has nothing to do with your limb that's missing and your robot arm-- MICHAEL COHEN: Yeah.
So like let's say it's hooked up to your pec and you naturally do a pectoral movement.
Like maybe your arm will start sporadically moving because you're, like, whoa, all I wanted to do is like move my shoulder.
Is that kind of what you're asking?
Yeah.
That can sometimes happen.
And this is actually related to it, that they'll try to move it actually to muscles for which that is less likely.
So what you're probably not going to do with a guy like the one you saw with no arms is have him reinnervate to his abs, because he could like just be doing like any sort of movement.
But like if I were to say all of you sit here right now and twitch these muscles like back here in your back, it's like kind of hard to do.
And especially if you've lost your limb, you're even less likely to try to use them.
So you do try to target muscles that are not only spread out, but that you can decrease the probability that that's going to happen.
But yeah, it can definitely happen.
You do sometimes see videos where you will get like, oops, sorry.
I didn't mean to move it to the left.
I was just trying to turn around.
So it's still a pretty flawed system in a lot of ways.
This guy I like for a couple of reasons.
One is because you can actually see, this is about as state of the art as it gets in terms of how smooth these motions are.
So in this particular case here, they've done actually the reinnervation onto the muscles of his like biceps and triceps in here.
So it's not going into his chest.
And you can actually see the little sensors that are reading it out.
And then what they've done is they've taken the artificial limb and actually connected it to his bone.
So this thing is actually like chronically hooked up to him.
The main reason that I like him is-- this is a personal neurosis-- I am from the great state of Texas.
I have a soft spot for Southerners.
And if you listen to this guy, he's got a big old accent.
[VIDEO PLAYBACK]
- [INAUDIBLE] here curl 45 pound dumbbell.
MICHAEL COHEN: Wait, hold on.
Wait for it.
- All day long, it doesn't matter [INAUDIBLE].
Saw a soldier like him, with them big old guns.
MICHAEL COHEN: Them big old guns.
So-- but in all honesty, if you watch this guy, you'll see the way he can rotate it, the way he can move it.
It's pretty smooth.
It's pretty natural.
I can't remember how many-- - My arm is cut off here.
The nerve endings that would have went down here to work with the hand, they've been taken and reinnervated into different muscles in the stump.
When you think about something, your brain when it sends it down the nerve, it's a minute electrical charge.
It comes down wherever the nerve endings ends.
And then the MyoBands here will pick up when they jiggle.
With where it's at, the MyoBand's sending a signal down to the computer in the arm.
And by doing that, that lets the arm know this is exactly what I wanted to do.
[END PLAYBACK]
MICHAEL COHEN: All right.
So this is about where we are right now with that last video there.
This is about where the state of the art is in terms of these artificial limbs.
But one thing we're going to go into in a second is now trying to add another level of complexity to them, not so that it's just moving them, but you can actually restore a sense of touch in them.
Before I go on to talking about adding touch to those limbs, does anyone have any last minute questions about anything?
Yeah?
AUDIENCE: Can the limbs be moving while he's like dreaming?
Like while he's sleeping?
MICHAEL COHEN: I think they take-- they turn them off at night for that very reason.
And then like nightmare scenario, you can be like, I swear to God, Mr. Police Officer Man, I didn't choke my wife, my artificial limb did while I was asleep.
So because like you could have situations like that, yeah, they won't like wear it at night when they're asleep or anything like that.
Yeah?
AUDIENCE: What's the current latency between like thought and movement?
MICHAEL COHEN: It varies quite a bit from system to system.
But I mean, it's a little lethargic.
It would almost be like-- the best analogy to give you an intuition, imagine I just put you under general anesthesia and you woke up.
And I was like, hey, we're going to-- what's your name?
AUDIENCE: Jimmy.
MICHAEL COHEN: OK, Jimmy, let's walk across the room.
And pretend I'm you, Jimmy.
And you're like walk across the room?
Oh my God.
Like all right, let's go.
Like you're kind of-- it's a little lethargic.
And it's like wait, Jimmy, Jimmy, wrong door.
You're going to be like, oh, sorry.
I don't know why I'm leaning back like that, but I-- it's like you can see that that's what I'm doing, but it's not like the hop, skip, and a jump immediately.
And the time delay there is mostly a function of how much time you need to average over the signal that you want to process, how much time it takes to do the computations.
But it's not that great, but it's really not that bad.
Like it's good enough that that guy was able to reach out and grab the dumbbell and do it.
So a couple of seconds-- just a little drag.
Yeah?
AUDIENCE: Do you know how the motor cortex would change in response to having one of these artificial limbs to just having this [AUDIO OUT]?
MICHAEL COHEN: That's actually a really good question.
As far as I know-- I don't actually know of any studies that have tried to look at that sort of remapping in terms of what happens.
You actually-- my thought-- Nancy, tell me if you disagree with this.
My thought would be that if you got the system working really well, really precise, that your motor cortex in an ideal world shouldn't change that much, because you're still having some relationship between control and thought.
NANCY KANWISHER: [INAUDIBLE] if you need to maintain that relationship.
But assuming you're building on the prewired relationship between a cortical region and a particular set of muscles.
You're capitalizing on that patterning that's already there.
It shouldn't change that much.
MICHAEL COHEN: It shouldn't change that much.
I bet that it will change-- NANCY KANWISHER: Cross it all, lift it up.
MICHAEL COHEN: Yeah, so here-- NANCY KANWISHER: So then learn to function anyway, but it'd take a while.
And then you'd have motor cortex [INAUDIBLE]
MICHAEL COHEN: Like a week ago, I saw that movie that was not good, which I don't recommend, Aquaman.
You could imagine that if instead of giving me two robotic arms, you gave me two big robotic fins so that I could be like a soldier of the deep.
Like you would imagine in a situation like that, that it's like, oh, he lost all the-- he's never doing anything with his fingers, with his elbows-- that maybe my motor cortex might remap itself to become almost like, oh, that's the fin strip of Michael.
Does that make sense?
Fantastical thinking, here at MIT.
OK.
So let's talk a little bit-- and this part I really like-- about how you can use the same sort of approach and system now to restore a sense of touch.
So how might this work in principle?
So again, just a really quick brief reminder about somatosensory cortex-- so we were just a second ago looking at primary motor cortex, somatosensory cortex is right there nearby.
The organizing principle there is very similar to motor cortex.
So this is the map of your motor system.
This is the map of your somatosensory system.
So now in this case, whenever you say, now touch your face, this part of your somatosensory strip lights up.
If ever you touch your forearm, this part lights up here.
And as one thing you'll notice, these maps are not identical.
There's some parts of your body that are on this map, like your guts and your head, that are not on this map.
Why might that be?
Why does your somatosensory cortex represent parts of your body that your motor cortex does not?
Does anyone have an idea why that might be?
Yeah?
AUDIENCE: You can't consciously move your gut, but you could feel it.
MICHAEL COHEN: Yeah, that's exactly right.
Like I can say, move your eye, or like move your eye and touch your eye, you'll feel your eye if you touch it.
Your eye will move if you move it.
And you can feel your gut if I-- I don't know how-- feel your gut, you'll feel it.
But if I'm like, move your intestines, you can't really wiggle them or-- like the top of your head, you can feel the top of your head.
But if you try to move the top of your head, you'd just be like [GRUNTS] and it doesn't really work.
Real quick aside, if you're ever around babies, one thing you'll notice is that when babies pick up stuff, they always put it in their mouth.
Why do they put it in their mouth?
The reason they put it in the mouth-- this was the comment I think she made earlier-- was because a huge portion of the cortex in your somatosensory system gets devoted to your lips.
So there's a lot of sensitivity to your lips.
And if you actually build a little model that represents how much cortex goes to the different parts of your person, you'll actually see that the hands, and the lips, and the tongue are really oversized.
So if you're a baby, think about it.
You see this thing, you're like what the crap is this?
Dana, sorry, I'm putting your clicker in my mouth-- you're like, ooh, I'm going to explore this, because you have this sensitivity there.
I find this delightful.
You can make these for also a bunch of different types of animals.
This is a real quick aside.
This is what a human somatosensory homunculus looks like.
This is a monkey's.
I don't really know why, but apparently, the under jaw of the rabbit is very pronounced.
If you do this with mice and rats, does anyone have a guess what those colored dots are?
The whiskers.
The whiskers are super sensitive, but they're nothing compared to the little sensors of the star-nosed mole, which is this ugly little guy who doesn't really see underground, but when you look at the relationship between his actual body and his somatosensory system, that little thing is like a disgusting nightmare squid thing coming out at you.
OK.
It actually turns out, though, that these sorts of maps and the specificity of different parts of your body are kind of what enables you to have this sort of ability.
Because if you take a person and you put them in an fMRI machine-- we go down to the scanner again.
And I start touching your different fingers-- these digits-- and we start color-coding them, you'll actually find that on your somatosensory cortex, you'll see slightly different parts of the cortex associated with every individual finger.
So I tell you to move your thumb, you see this part down here lights up.
I tell you to move your pinky, which is in purple, you see that part over there lights up.
So you really get this precision there.
And actually, if you look at the somatosensory strip of something like a monkey and really go into it with a lot of fine-grained measurements-- not something like fMRI-- you can actually see really fine-scale differentiations between the different digits.
These little bands here correspond to like the different knuckles.
So you can really get fine-grained in this system here.
And so then scientists had this idea of like, well, maybe what we can do is actually interface a computer with this system here to give you your sense of touch back.
So a little cartoon to kind of show you what we've done so far.
What we've done so far is take patterns of activity in your motor cortex, read that activity, compute that activity into a motion trajectory, and move your robot arm.
The next step that you want to do is now outfit that robot arm with special sensors-- pressure sensors, heat sensors, whatever-- and then take those measurements from those sensors and move it back into your brain.
So now you'll have not just the one system that's measuring activity in your motor cortex, you'll have another system that's now delivering activity into your somatosensory cortex.
So what you'll just literally do is you'll take this little grid of electrodes here and you'll put another one, or another several, nearby in your somatosensory system to give you the exact sort of loop that we were talking about before.
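The feedback direction can be sketched the same way-- here the sensor-to-electrode map and the stimulation scaling are invented for illustration; real systems calibrate both per patient:

```python
# Toy touch feedback: scale a pressure reading from the artificial hand
# into a stimulation level on the electrode mapped to the right digit.
def pressure_to_stimulation(pressure, max_pressure=10.0, max_amp_uA=80.0):
    """Clamp and scale a sensor reading into a stimulation amplitude."""
    level = min(max(pressure / max_pressure, 0.0), 1.0)
    return level * max_amp_uA

# Hypothetical map from digits to somatosensory electrodes, found during
# the kind of finger-by-finger mapping described above.
finger_to_electrode = {"thumb": 3, "index": 11, "middle": 17}

reading = {"finger": "middle", "pressure": 4.2}  # from the hand's sensor
electrode = finger_to_electrode[reading["finger"]]
amplitude = pressure_to_stimulation(reading["pressure"])
print(f"stimulate electrode {electrode} at {amplitude:.1f} uA")
```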
So here's a little video to give you a sense of how this-- [VIDEO PLAYBACK]
- I lost my hand approximately three years ago in an industrial accident.
- Igor Spetic of Madison, Ohio has benefited from the mechanical motion of a prosthetic limb.
But he hasn't been able to feel things in his grasp.
- I have to visually look at whatever I'm picking up, watch that I don't over squeeze.
- Over the last-- [END PLAYBACK]
MICHAEL COHEN: I just love, for the record, that 60 Minutes wanted to put the word fail-- like this poor guy lost his arm.
He's having this like extensive surgery, and they're like you can't even squeeze a cherry, Igor, get your life together.
But this is actually kind of a nice little representation of why this system can be useful because you could imagine that if you're moving your robot arm to pick up the glass, if you squeeze it too tight, maybe you will have too much strength in there and the glass will just shatter.
And a really dark case, say you go to pick up your little pet rabbit, and you're like, I want to squeeze the rabbit, maybe things can get a little dark.
That's right, that feeling of dread that is encompassing you, that's one of the many benefits of adding the sense of touch.
A little bit more seriously, I cannot find this video even though it's one of the best videos I've ever seen.
[INAUDIBLE] actually, Nancy did this one.
If you put anesthesia agents in your hands so that they're numb-- so that you can still move them but you can't feel them at all-- and you try to do anything, it is laughable how little you can do.
Like if you try to tie your shoes when you can't like-- it's just like-- it's just a nightmare.
It's just complete chaos.
You're just like, what is happening, I can't do this.
You might think that your touch isn't that important, but even basic things like buttoning your shirt, tying your shoes, when you lose that sense of touch, you lose a lot of these capacities.
So actually, one thought that people have is, oh, if we can give these artificial limbs a sense of touch, maybe it will increase the dexterity of them.
Maybe it will give people a lot more skill and abilities with them.
So this isn't just like a fun, hey, wouldn't it be nice if you could like touch a bunny or whatever.
It's actually a thing where this might be a really useful thing for these types of people.
So here, this is an example of this system in action.
This is a specially outfitted artificial hand that has these sensors in it.
So there's a guy-- this guy who's sitting back here, we're going to see him in a second-- he is going to be blindfolded so that he can't see what they're presenting to him.
And he's going to squeeze it and he's just going to say if it's soft, if it's hard, if it's medium, or something like that.
And this is entirely from an artificial system.
[VIDEO PLAYBACK]
- Soft.
Hard.
Medium.
[END PLAYBACK]
MICHAEL COHEN: And with a little skill, and a little time, and a little focus, you can actually get it even more precise to where you'll get sensors on individual parts of the hand-- just one second-- and now they'll rub sandpaper, Velcro, on different parts of your hand, different digits, and they can actually tell you like where you're doing it and what you're doing.
[VIDEO PLAYBACK]
- It's provided Igor with a sense of touch.
And most recently, the ability to distinguish between textures.
- Ridges on the middle finger.
- In 19 distinct locations in his hand.
- The palm.
And sandpaper on the middle finger.
[END PLAYBACK]
MICHAEL COHEN: I like this shot here because actually it's not even hooked up to his arm.
They don't even put it in the sense-- like attach it to here.
You can actually see this is where the stump of his hand is.
And then this is the artificial system.
You could put the artificial limb on the other side of campus if you wanted because it's not going to be coming from something that's necessarily near your person, it's going to be coming from this different thing here that's got all these sensors attached to it.
Do you have a question?
AUDIENCE: So this is not invasive?
I thought this was invasive.
MICHAEL COHEN: No, this is invasive.
AUDIENCE: Oh.
I didn't see anything hooked up to [AUDIO OUT]
MICHAEL COHEN: Because actually-- and I'm just, for the sake of time-- what they now can do is actually a lot of this stuff wirelessly.
And so you just couldn't see it, but they can now do a lot of these systems so you don't have to have the wires coming out of them, as the technology has developed.
Is there another hand over there somewhere?
AUDIENCE: I was just going to ask if he said hard or hot.
MICHAEL COHEN: Hard.
But you actually-- supposedly, they put temperature sensors on these, but I don't know how good those are. OK.
So yeah, we got about 25 minutes left.
So are there any last minute questions on movement or touch on any variety before I move on to visual stuff?
Going once, twice-- all right, cool.
Yeah, OK.
So we just talked about movement and touch.
Now let's move on to talk about vision for a little bit.
Again, my computer-- sorry-- all right.
So let's talk about one particular visual disease.
So this is what your retinas look like when you have a thing called retinitis pigmentosa.
This here, if you go to an ophthalmologist and they look at your eye-- hopefully because you have good vision-- your eye looks nice and pretty and healthy, like this.
There's your optic nerve, that's your fovea.
This is if you have retinitis pigmentosa.
And what ends up happening is the rod cells and the cone cells that are actually the photoreceptors on your retina will basically deteriorate.
The other cells on your retina will, by and large, be in pretty good shape, but you'll actually see just immediate deterioration of the actual cells on your retina that actually convert light into a neural signal.
So the question is, in a situation like this where you have a human who has retinitis pigmentosa, is there anything that you can do to actually help restore their sight?
And that's the question that we're going to deal with.
But then the question is, like how are we going to interface with the visual neural circuitry?
So before when we were talking about people who needed to move an artificial limb or have a sense of touch restored, the way that they interact-- or they interface with the existing circuitry was they went right into the brain.
But in this particular case here, people had a slightly different idea, which was, well, wait a minute.
It's not as if everything in the retina has been destroyed.
There's a lot of these red cells down here that go into the brain.
They seem fairly well preserved.
It's just these photoreceptors that seem to be destroyed.
So maybe what we can do is rather than do something invasive and put it in the brain, maybe what we could actually do is create a system that interacts directly with the remaining circuits that are left on the retina.
So let's take advantage of what's still preserved in these patients and actually interact with them rather than something as intense as going into the brain.
And so what you'll do is you'll take one of those little electrode arrays that we were talking about before, and then rather than putting it in your somatosensory cortex, what you'll actually do is put it right on the person's retina.
And to get a sense of how tiny these are, this is like someone's forefinger.
It's a little bitty guy-- it's about 1 square millimeter.
And the way that you can think of it, very crudely, is that it works kind of like those games you had when you were a kid, where you'd push your hand or face into a bed of pins.
Did everyone have these?
Does everyone know what I'm talking about?
It's a system kind of like this, where you can think of the electrodes as little metaphorical pushpins-- imagine these have little pushpins here.
You'll have it hooked up to some sort of camera.
And it will basically recreate, in this two-dimensional space, the snapshot that the camera is taking at a particular time.
I said it's less invasive than the stuff with motor cortex, but this is still pretty invasive because this is here.
And this is where you can actually see it being directly put on the person's eye.
So part of the reason that this is so black and dark and hard to see is because these people have retinal degeneration.
But then the other thing that you can see is when you look right in their eye as an ophthalmologist, you can actually see the little circuit board that they've put right on the retina.
Yeah?
AUDIENCE: If this is all circuitry, how is it powered?
MICHAEL COHEN: You have batteries.
Like you can't really tell, but there's a system here that has a little camera on it that's like taking the snapshot.
And those little systems have battery packs that are associated with them.
But here, this is just like a little cartoon trying to show the idea.
You have a camera that will take a snapshot of, say, the letter E. It will transmit the signal to the retinal implant, which is in here.
And you can literally see that it draws the letter E on this microelectrode array, right on the existing retina.
And even though the photoreceptors have been destroyed by the disease, you still have these other cells that it's going to stimulate instead.
Basically, you're saying, hey, since the rods and cones have been destroyed, let's put in an artificial system that effectively simulates what those rods and cones would have done.
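To make that camera-to-array pipeline concrete, here is a minimal sketch in Python of how a grayscale camera frame could be mapped onto a coarse electrode grid-- essentially the pushpin idea in code. The grid size, threshold, and function name are illustrative assumptions, not details of any real implant.

```python
import numpy as np

def frame_to_electrodes(frame, grid_shape=(6, 10), threshold=0.5):
    """Map a grayscale frame (values in [0, 1]) onto a coarse electrode
    grid: average each patch of pixels, then decide whether the electrode
    covering that patch should fire. All parameters are illustrative."""
    h, w = frame.shape
    gh, gw = grid_shape
    stim = np.zeros(grid_shape, dtype=bool)
    for i in range(gh):
        for j in range(gw):
            patch = frame[i * h // gh:(i + 1) * h // gh,
                          j * w // gw:(j + 1) * w // gw]
            stim[i, j] = patch.mean() > threshold  # bright patch -> pulse
    return stim  # True = stimulate that electrode, like pushing one pin
```

The same sketch also makes the resolution point that comes up below obvious: with grid_shape=(4, 4) you get 16 on/off splotches, while something like (32, 32) starts to approximate a recognizable image.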
To give you a sense, for me, like it's still-- it's not as gnarly, but to me, this still kind of gives me the heebie-jeebies, where you can see the sort of receiver and transmitter here going right into the person's eye like that.
That's another picture.
I don't know why I did that.
But here's a little video just to give you a little bit of a sense of how this works.
[VIDEO PLAYBACK]
[END PLAYBACK]
MICHAEL COHEN: OK.
So again, that thing that they said at the very end, basically it uses the existing visual circuitry to transmit the information back into the brain.
We don't have to interface directly with the brain, we're just going to interface with the system that feeds into the brain.
As you can imagine, the system can be very crude.
So here, this is an attempt at an approximation of what it looks like-- a screenshot of a person standing in front of a whiteboard.
And it's pretty hard to make this out.
One little trick, though, that I've noticed: if you look at this thing here and you blink pretty quickly, and maybe move your head a little bit, you can start to make out the shapes a little bit more.
Does that make it better for other people, too, or is it just me?
Yeah.
So basically, if you think about it, like you're getting these really quickly and they're changing a little bit.
You can start to make a little bit of sense of it.
But of course, as you might expect, there's so much complexity in your retina that trying to replicate it with one of these little grids that's-- I don't know-- 6 by 8, it's obviously going to have some limitations.
It's obviously going to be pretty coarse.
And I know for a fact that people are trying to figure out ways to get more and more electrodes onto the retina, in an attempt to give you more precision-- because as you can see, going from 16 electrodes to something like 1,000 electrodes is the difference between seeing a few black and white splotches and seeing, oh, it's a balding man with glasses, and so forth.
And then here's one little video of what it might look like potentially.
[VIDEO PLAYBACK]
- To try to imagine how it might look to Alan, Dr. Azzi says to picture contrasting light and dark blocks on a grid.
- But by moving his head, and using his visual memory, and all of his cognitive skills, and his remarkable capacity to get around, Mr. Zari can reconstruct a--
[END PLAYBACK]
MICHAEL COHEN: He could reconstruct the sequence.
Yeah?
AUDIENCE: I'm wondering like do you just feed like basically the raw image or do they also do some preprocessing, like some features or whatever?
MICHAEL COHEN: So in this particular one-- with these types of systems-- what they are doing is basically feeding in the raw image.
But as you can imagine, that's not going to do that much.
So actually, if we had a little bit more time, I would show you how people are trying to insert some computations between the camera and the actual stimulation of the retina, to better simulate the activity of what goes on in the retina-- because the retina isn't just passing along ones and zeros of points of light.
So they are trying to do those sorts of computations, too.
That's another approach: besides the brute force of adding more electrodes, you can try to add more computational sophistication and get it a little bit more advanced.
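As one concrete example of the kind of computation he's describing: retinal ganglion cells are commonly modeled with center-surround receptive fields, which you can approximate with a difference of Gaussians. This is a standard textbook simplification, not the specific processing any particular implant uses, and the sigma values are made up for illustration.

```python
from scipy.ndimage import gaussian_filter

def center_surround(frame, sigma_center=1.0, sigma_surround=3.0):
    """Difference-of-Gaussians filter: a simplified model of retinal
    center-surround processing. Output is large where a point is
    brighter than its local neighborhood, so spots and edges stand
    out instead of raw brightness."""
    center = gaussian_filter(frame, sigma_center)
    surround = gaussian_filter(frame, sigma_surround)
    return center - surround
```

Running a frame through something like this before the electrode mapping above emphasizes contours, which is plausibly more useful to a patient than raw pixel intensities.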
But even in this case, though, notice that this person-- or even you or I, even though we've never really seen the world like this-- probably could have navigated through that world.
Like even with something that kind of looks like this, you could probably find your seat, you could probably get out, go to the bathroom, and so forth.
So even with something as crude as this, if you were blind, like this can be really life-changing because it gives you a certain amount of autonomy back.
But as you can imagine, sometimes people become blind not because they have retinal deterioration.
What we were talking about here are the retinal approaches.
Sometimes people go blind because of an issue that arises at some other level of the system.
You could have damage to your LGN, you could have damage to your optic nerve-- you could have damage in any number of ways.
And so in reality, besides interacting directly with the retina, what you also could do is interface directly with the visual cortex.
Early on, this used to be fairly crude.
So this is actually a video of like a wire going into a person's visual system.
And these glasses that he's wearing here have the little camera on them.
And now, like I was talking about a second ago, you can actually do this wirelessly: you can have these things implanted in your visual cortex chronically, just like with motor and somatosensory cortex, with a little wireless receiver at the back of the head, hooked up to your glasses, that will enable you to look at something like a tree and have the system try to effectively paint it on your visual cortex.
And the reason I say it will try to paint the tree on your visual cortex is that it takes advantage of the fact that there's a nice one-to-one correspondence in your visual system between areas of the visual world and areas of your cortical real estate.
So for example, here, if these are people's brains, you can show stimuli further and further out into the periphery.
What this is color-coding is: if I put something right in the center of your field of view and move it out all the way to the left, you can actually see a corresponding wave of activity move across your brain-- that's this one-to-one correspondence between the visual world and the neural world.
And you can even see over here that if you instead present it as a rotating pinwheel, rather than going from the middle to the periphery, you also see that sort of wave happen.
So since there's this nice one-to-one correspondence, you can effectively draw right on the person's brain.
And in fact, people at MGH-- I thought this was really clever-- actually figured out enough about how this mapping works that they put people in an MRI machine and showed them displays designed so that you could literally spell out the letters MGH on their brain.
It's not perfect, but you can get a sense of how this works.
And in that sort of system, now, you can just effectively, like I said, paint the tree right on the back of their head to provide them with some sort of crude sense of sight.
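For the curious, that one-to-one mapping has a classic quantitative approximation: Schwartz's complex-log model of the V1 retinotopic map, where a visual-field location, written as a complex number z, lands at roughly w = k·log(z + a) on the cortex. A small sketch, with ballpark constants chosen purely for illustration:

```python
import numpy as np

def visual_to_cortex(eccentricity_deg, angle_deg, k=15.0, a=0.7):
    """Schwartz-style complex-log model of the V1 retinotopic map:
    w = k * log(z + a), where z encodes a visual-field point as a
    complex number. k and a are ballpark illustrative values."""
    z = eccentricity_deg * np.exp(1j * np.deg2rad(angle_deg))
    w = k * np.log(z + a)
    return w.real, w.imag  # rough cortical coordinates, in mm
```

One consequence of the log: a degree of visual angle near the fovea gets far more cortical territory than a degree in the periphery, which is why a stimulus has to be warped appropriately to spell undistorted letters like "MGH" on the cortical surface.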
Yeah.
I'm just making one change because of time.
OK.
So unless there's any questions, I want to talk just for the last 15 minutes or so about another totally different approach to restoring vision, and that is with gene therapy.
But ignore that-- it's not optogenetics.
And specifically, it's going to be trying to restore one particular type of vision.
So it's not the entire visual system, it's going to be trying to restore color vision.
So now, this is an attempt to try to give you color perception back, not just visual perception in general.
So as I think you probably all know, you have quote-unquote "red, green, and blue" cones on your retina. People who are missing one of these cones could be red-green colorblind-- there's a lot of different combinations.
And the question is, well, if people are missing one of these cones-- so let's say they don't have the red one-- is there anything that can be done to actually give them their red perception back?
And it actually turns out that there's a group of monkeys this work was initially done on, called squirrel monkeys.
And the males are actually all missing the gene that allows them to see the color red.
The female monkeys, they see RGB just like most of us do.
The males, though, they're all color blind.
So if you actually go and look at a male's retina, what you'll see is that the cones there are either green or blue.
There's no red.
This is exactly what dogs have, just so you know.
You know how you hear when you're a little kid, all dogs are color blind?
They're color blind because they have the green and they have the blue cones on the retina but they don't have the red ones.
It's actually a little bit of a misnomer, by the way, that dogs are colorblind.
They are not actually totally colorblind.
The world is not black and white to them.
They can actually see some blues and yellows, and a little bit of green, because they have a couple of these receptor types.
So for example, if you or I-- anyone with normal color vision-- look at a rainbow like this, to a dog it'll look something like this.
So you'll see a little bit-- certain shades of yellow, certain shades of blue, but you don't see reds.
And when you don't have reds, you can't mix them to get greens.
So when people talk about dogs being colorblind, this is the kind of thing that they're talking about.
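If you want a feel for what that collapse looks like, here's a very crude sketch: without a red cone, red and green drive the same remaining channel, so we just merge those two channels in an RGB image. This is an illustration of the idea only-- not a colorimetrically accurate dog-vision or protanopia simulation-- and the 0.3/0.7 weights are invented for the example.

```python
import numpy as np

def merge_red_green(rgb):
    """Crude dichromacy illustration: collapse the red and green
    channels of an RGB image (floats in [0, 1]) onto one shared value,
    leaving blue untouched to stand in for the spared blue cones.
    The weights are arbitrary illustrative choices."""
    rgb = np.asarray(rgb, dtype=float)
    shared = 0.3 * rgb[..., 0] + 0.7 * rgb[..., 1]
    out = rgb.copy()
    out[..., 0] = shared
    out[..., 1] = shared
    return out
```

Applied to a rainbow image, reds and greens fall onto the same yellowish axis while blues survive-- roughly the picture being described here.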
And this is the kind of thing that people are going to try to fix.
The question is, can we take a model organism that has this sort of color perception here, and move it so that it has that sort of color perception up there?
So the way that you can verify that an animal is colorblind is you can give it a little tiny test.
And in the test, what you'll basically do is tell it, hey, do me a favor, touch the little part that has color on it.
So if you were the animal, you'd come up and you would like hit that, you would hit that, you would hit that one.
But if you're one of these male squirrel monkeys, when you get over here, you're not going to be able to do it, because you can't see the red.
And you'll actually see that the animal will try to go up to the little display.
It's kind of just guessing.
It's a little bit lost.
It looks to the little food dispensers to try to get its reward.
But it's a little upset because it doesn't get anything, because it can't see this red thing here.
So then the idea that these people had was like, let's not try to-- there's really nothing we can do with a brain-computer interface in a case like this.
We can't interact with the red cells.
The red cells don't even exist on the retina.
So what are we going to do?
So the idea that they had was what if we can isolate the gene that causes the females to see red, take that gene, put it in a virus, and then inject that virus into the retina directly of one of the male monkeys?
So the idea is that you can have a virus that will infect a subset of the cells on the male's retina.
And it'll basically infect them with a quote-unquote "disease" that doesn't make those cells cancerous or anything like that, but actually makes them become red-sensitive cells.
And when you look at this sort of monkey, you can see that before the treatment, his retina looks somewhat like this.
After the treatment, you can see that a lot of the cells on his retina-- a certain subset of them-- have now been infected by this virus, and they will now respond to red light.
So here is a little video of this process.
[VIDEO PLAYBACK]
- So humans and other old-world primates have two genes on the X chromosomes that encode visual pigments.
One encodes the red cone pigment and the other one encodes green cone pigment.
- But if you're color blind-- - Only one type, red or green, is expressed.
- Finding that faulty gene was relatively simple.
Replacing those genes inside the cone cells with the right genes, well, that's a bit trickier.
- You have to have some way of delivering a gene to the cells that you're trying to treat and not to other cells.
- Fortunately nature has crafted a really powerful method of forcing DNA into a very specific cell, a virus.
- The virus that we use is called Adeno-Associated Virus, and people call it AAV.
Its main advantage is that you don't get an immune response against the virus.
- So while researching how to load the therapeutic gene into the virus, the Neitzes trained a pair of colorblind male squirrel monkeys, Sam and Dalton, to take a color blindness test.
- Every single morning, the monkeys wake up, and before they have breakfast, they go, OK.
It's time to have our color vision test.
And the monkey is trained to touch the place where they see that colored blob.
And then they get a treat.
They can be most efficient and get the most rewards if they just touch it with their nose and then get down and get that little treat.
- And just to be thorough-- - We also ran untreated animals.
Occasionally, they might just by chance touch the right spot.
But over trials, you know that they can only really get it right all the time if they have normal color vision.
- Once the monkeys were trained and the virus was ready, Sam and Dalton underwent a fairly elaborate procedure.
- A vitreoretinal surgeon slipped the needle underneath the retina.
Then the fluid is infused in order to treat the whole entire back of the eye.
- It wasn't immediately obvious that it worked.
- And we didn't know how long it was going to take for them to change their behavior after the pigment was expressed robustly.
- Now when you look back and you see the difference between the animals, it's so dramatic.
It's an amazing thing, and it amazed us.
- Although the FDA has yet to approve the procedure for human trials, recently the Neitzes have developed a one-shot version of the cure.
- It's like an everyday shot that would take one second, just the shot right into the eye.
- And while getting a shot in your eye sounds terrifying to some, it's a small price to pay for living out a dream, or getting to see a sunset in all its glory, or maybe just not leaving your house dressed like this.
[END PLAYBACK]
MICHAEL COHEN: God, a small pet peeve of mine-- I don't know what it is.
For any of you, if you become filmmakers and you make like science things, please don't put those little jokes at the end because they're never that funny, but they always want to end on some little quip.
Real quick, I do want to mention this because they talked about the FDA.
I read an interview with these two people who did this, the Neitzes-- they're a married couple.
And they were saying that after they published their first paper a couple of years ago in Nature, where they explained this treatment, there was a lot of press and a lot of misrepresentation, saying, hey, we can give you all your color vision back.
I'll tell you why I call it misrepresentation in one second.
And people started emailing them being like, hey, I'm colorblind.
Like I've always wanted to see color.
It's my lifelong dream, blah, blah, blah.
Could I try this?
And they're like, no.
No, no, no, no, no.
We don't have FDA approval.
We don't know the long-term repercussions.
Maybe these viruses will cause cancer or an infection.
So no, we're sorry, we can't do it.
It might be a long while.
Don't get your hopes up.
So then people respond, they're like, well, listen, listen, listen, listen, listen.
Let's say that on Tuesday night, you just happen to have left your lab unlocked.
And let's say on Tuesday night, you happen to have left a vial right there on the counter.
And let's say some person just broke in and injected it in their eye themselves.
Like that wouldn't be your fault, right?
And they're like, OK, we're going to need to get some really good locks because we get this message a lot.
It's a little creepy because people seem like they're so excited by the possibility of getting these colors back that they're claiming that they'd be willing to break in and try that on themselves.
So the one thing I want to say about why I think it potentially might be a bit of a misrepresentation: a natural question to wonder is, after these treatments-- you saw that that monkey, Dalton, was able to press the little red thing just fine.
The thing that is hard to know is, what exactly is Dalton seeing?
Because is it the case that Dalton is seeing red now exactly like you or I would see red?
Or is it possible that Dalton's now just seeing something that's different enough from the other stuff on the display that he can do the task and get the reward, but that something is not exactly what you or I would be seeing?
When you were all little kids, maybe you had this discussion, or maybe I was just hyper nerdy, where it was like maybe we see different things, but we both call it red.
Like maybe when I see strawberry, it looks like this, but when you see strawberry, it looks like that, but we both call it the same thing.
And we don't know that we're having a difference.
Usually it's kind of a fun, esoteric philosophy exercise, but in this case, I think there's actually something to it, where it's possible that what Dalton is seeing actually is something greenish.
It's just that his cells are getting activated by a red stimulus, but what the brain processes it to look like might be something a little bit different.
And I spoke actually to Rosa who I think came and talked to you all.
She showed you the color blind room, is that right?
So Rosa's a color expert, and I talked to her old advisor, Bevil Conway.
And they were telling me that what people are starting to do now with this sort of system is to try to map out, behaviorally, what exactly Dalton sees.
Because you can imagine that if Dalton can't tell the difference between these three colors here, it probably means that his red is not exactly like our red.
So even though it seems as if this is restoring some type of color vision to these primates with this gene therapy, it's not 100% known whether or not they're actually able to completely restore a brand new-- or I guess not restore, but give them a brand new color that they've never seen before.
In my heart of hearts-- this will be my last thought-- I desperately want this to be true, because I don't know about you all, but if it was safe, I would 100% be like, I want to sign up.
I want to see what infrared looks like.
I want to see what X-rays look like.
Because you can imagine that if it's the case where you can give red, if you can provide a novel color to the monkey to give it a third cone, in theory, you might be able to give us a fourth cone.
And you might be able to actually expand and augment perception and actually increase the width of the rainbow that you see.
It is 12:21-- 12:22.
Does anyone have any last real questions before we call it a day?
I always forget that I shouldn't do that, because I know that when I say class is going to be over unless people have questions, no one wants to ask a question-- they want class to be over.
So what I will say instead is if you have any questions, feel free to come and talk to me afterwards.
I'll hang out for a few minutes.
Do you have anything you need to?
NANCY KANWISHER: Let's just thank Michael for an awesome lecture.
MICHAEL COHEN: All right. Thanks, guys.
[APPLAUSE]