Topics covered: Potentials 2: Potentials for Organic Materials and Oxides; It's a Quantum World!
Instructor: Prof. Gerbrand Ceder, Prof. Nicola Marzari
Note: Lecture 4 was a lab session. No video is available.

Lecture 3: Potentials 2
GERBRAND CEDER: So we'll do a lab that essentially covers modeling things with potentials. You'll do things like calculate a vacancy formation energy, a surface energy. So the idea is that, working with a simple energy model, first of all, you learn the mechanics of these kinds of calculations, building supercells with a defect, studying convergence of them. So that's one thing.
You'll find that the hard part is often figuring out exactly what calculation to do and how to put the numbers together to get a physical quantity. And then you'll also learn something about the things we talked about here, what kind of results potentials give. For example, you will find that the vacancy formation energy is very close to the cohesive energy, something we talked about.
So those are the things for Thursday. And I think you'll have about two weeks to finish the assignment, although the first one is really not that intensive. You'll probably be able to do it considerably faster.
So what I want to do today is just finish up empirical energy models by focusing on organic materials and oxides. And I'll only do about half of the lecture, and then Professor Marzari will give an introduction to the next part, which is the ab initio methods. So he'll start with some introduction to quantum mechanics, and then lead that over the next two lectures into ab initio methods for energy calculations.
OK. If you sort of remember this diagram, so what we did last time, we talked about pair potentials, discussed sort of the formal way why they fail when things are very covalent. And then I spent most of the lecture talking about pair functionals, things like the embedded atom method, the glue model. And we talked a little bit about cluster potentials.
OK. So I showed you the example of a three-body potential for silicon. Today, I want to talk about organic systems and oxides. Let me start with organic systems, where potential modeling, or call it empirical function modeling, works extremely well. And there's a very good reason for that. And I'll come to that.
You tend to distinguish between what are called bonded and non-bonded interactions. So the bonded ones are the one along covalent bonds. So let's say you had two methane molecules interacting, so CH4 and CH4.
So the bonded interactions would, for example, be the ones here. That's the only ones. Say, the carbon-hydrogen bond would be a bonded interaction.
The non-bonded interactions are then the other ones, the ones that are not between covalent bonds. So for example, this hydrogen-hydrogen pair would interact through a non-bonded term. And this becomes important when you model interactions, say, between long chain molecules or polymers.
If you did sort of one polymer and another one, you would have bonded interactions along the chain. And you would have non-bonded interactions between different chain parts, between different chains, or between one part of the chain and another if it sort of were to fold on itself.
So keep that in mind, bonded and non-bonded interactions. So I wanted to do a short exercise in class making you think through the relevant potential terms you would need if you wanted to, say, study water. So here's water, H2O, one oxygen, two hydrogens.
So I also want to make you think through, so that you realize that it's really not that hard to make a sort of reasonable potential model for an organic. So let me make a wide slide. So I got to erase that.
How do we erase here? There we go. It's the very tedious way of erasing. OK.
So who wants to start? What should go in? So we have water. Think about it. Let's say I was doing water in the liquid state. So I have a bunch of water molecules. So what kind of interactions would I need? Yes.
AUDIENCE: They all have dipole.
GERBRAND CEDER: Exactly, they all have a dipole. It's a good one to start with. So there's a small amount of positive charge on the hydrogens. And there's some amount of negative charge on the oxygen, which is 2 times the amount of positive charge on each proton. So you have a dipole.
So you have a dipole on the two. How would you model that interaction?
AUDIENCE: [INAUDIBLE]
GERBRAND CEDER: I'm sorry?
AUDIENCE: [INAUDIBLE]
GERBRAND CEDER: I'm sorry. I can't hear.
AUDIENCE: Lennard-Jones.
GERBRAND CEDER: I wouldn't do a Lennard-Jones model for the dipole because the dipole is essentially electrostatic. So there's sort of two ways you could do this. You could set up directly a dipole-dipole interaction, where you center the dipole somewhere on the center of mass of the molecule.
Or you could treat the charged components of the dipole explicitly. So you could say, you know, I have a delta plus here, delta plus here. I have a delta minus here. And that would interact electrostatically.
And you could set up a direct electrostatic interaction. The dipole-dipole one is probably easier to implement, is also somewhat shorter range. But both would work.
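As a hedged sketch of those two routes, here's a minimal numerical comparison; all charges, distances, and units are made-up illustrations, not a fitted water model:

```python
import numpy as np

def coulomb_energy(charges_a, pos_a, charges_b, pos_b):
    """Explicit electrostatics: sum of q_i*q_j/r_ij between two
    charge groups (Gaussian-style units, prefactor dropped)."""
    return sum(qa * qb / np.linalg.norm(ra - rb)
               for qa, ra in zip(charges_a, pos_a)
               for qb, rb in zip(charges_b, pos_b))

def dipole_dipole_energy(p1, p2, r_vec):
    """Point-dipole approximation:
    U = (p1.p2 - 3 (p1.rhat)(p2.rhat)) / r**3."""
    r = np.linalg.norm(r_vec)
    rhat = r_vec / r
    return (p1 @ p2 - 3.0 * (p1 @ rhat) * (p2 @ rhat)) / r**3

# Two identical rigid "dipoles" built from +q/-q pairs a distance d
# apart, separated by R along z; for R >> d the two treatments agree.
q, d, R = 0.5, 0.1, 10.0
pa = [np.array([0., 0., d / 2]), np.array([0., 0., -d / 2])]
pb = [np.array([0., 0., R + d / 2]), np.array([0., 0., R - d / 2])]
e_explicit = coulomb_energy([q, -q], pa, [q, -q], pb)
p = np.array([0., 0., q * d])  # dipole moment of each pair
e_dipole = dipole_dipole_energy(p, p, np.array([0., 0., R]))
```

The two numbers agree to order (d/R) squared, which is the sense in which the dipole approximation and the explicit charges are the same for a rigid molecule.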
If you treat the molecule as rigid, then the dipole approximation and the electrostatic one, where you take the charge individually, is essentially the same. But if the molecule, of course, can flap, then the dipole moment is changing on that molecule. So that's definitely one term. So what else would you use? [INAUDIBLE]?
AUDIENCE: OH bonding?
GERBRAND CEDER: OH bonding, OK. You guys take all the hard ones, all the hard ones first. Yeah, so hydrogen bonding-- so if there's an oxygen here and there's a water sort of here, you have hydrogen bonding. What will you use for that? What kind of form?
Any form is as good as any other one. People sometimes just use Lennard-Jones potentials for that, but you can use something else. So what else would you use? What else should go in here?
Well, if you don't assume your water molecule is rigid, then you definitely need stuff for the bonds there. So you may need an OH bond potential, which obviously would have a bond stretching part and then probably a bending part, so OH bending.
What would you make these? What kind of forms would you use? Too early in the morning for this?
I mean, you know, I would always say, before you get into complicated models, take the simplest form and see where you get. For bending, you could literally take a quadratic in the bond angle. That allows you to do reasonable stretch around the equilibrium geometry.
And a water molecule does not flop over. Its hydrogens don't really flop over that easily. Stretch, you could also, again, use just a spring constant, so r minus r0 squared.
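A minimal sketch of those two intramolecular terms; the force constants below are placeholders, and only the equilibrium geometry numbers are typical water values:

```python
import math

# Harmonic intramolecular terms for a water-like molecule:
# a spring for the O-H stretch, a quadratic in the H-O-H angle for bending.
K_STRETCH = 1.0                  # O-H stretch spring constant (arbitrary units)
R0 = 0.9572                      # equilibrium O-H bond length (angstrom)
K_BEND = 1.0                     # bending force constant (arbitrary units)
THETA0 = math.radians(104.5)     # equilibrium H-O-H angle

def stretch_energy(r):
    """Bond stretch: k (r - r0)**2."""
    return K_STRETCH * (r - R0) ** 2

def bend_energy(theta):
    """Bond bend: k (theta - theta0)**2."""
    return K_BEND * (theta - THETA0) ** 2
```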
And then if you feel you go too far away out of the equilibrium bond length, you could have more complicated potentials. But an OH bond is quite stiff, so it's not going to vibrate with an enormous amplitude. So what else will we need?
I think we're sort of missing one essential term. We have very few non-bonded terms. We only have the attractive part of the non-bonded terms, the dipole-dipole and the electrostatic.
But there has to be some amount of repulsion between the hydrogens. These cannot come infinitely close together and the same for the oxygen. So you need some non-bonded van der Waals interactions.
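A 12-6 Lennard-Jones is the usual simple sketch for such a non-bonded term; epsilon and sigma here are generic parameters you would fit per atom pair:

```python
def lennard_jones(r, epsilon, sigma):
    """12-6 Lennard-Jones: 4*eps*((sigma/r)**12 - (sigma/r)**6).
    The steep r**-12 part keeps atoms from collapsing onto each
    other; the r**-6 tail is the van der Waals attraction.  The
    minimum has depth -epsilon at r = 2**(1/6) * sigma."""
    sr6 = (sigma / r) ** 6
    return 4.0 * epsilon * (sr6 * sr6 - sr6)
```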
OK. And those you could easily give a Lennard-Jones for. It seems like, when you have that, you have a somewhat reasonable approximation for water. OK. If you do all this and you parameterize it well, what are you missing?
Yeah-- which is kind of a vague terminology. But you know, I think in particular what you're missing is charge transfer, which can occur in water. You can also temporarily form H3O+, which is, I think, what's called a hydronium or something like that. Hydronium? Yeah, hydronium ion.
So the sort of more subtle electronic transfer effects you would be missing. And you'd have to do quantum mechanics for that. And even quantum mechanics, that wouldn't be all that easy to get right. OK.
You know, I plotted here the bending term for water, so you get some idea of how reasonable quadratics are. Here's the exact form, exact essentially as probably as determined from quantum mechanics. If you fit a harmonic through, so this is your quadratic.
This is k times theta minus theta 0 squared. If you see, you're pretty well within the range of the exact potential for at least 10, 15 degrees on each end of the equilibrium bond angle. And even then you're not too far off.
This is in kilocalories per mole. I would say even at this level you're kind of less than a kilocalorie per mole off. Notice that a quadratic, by its nature, is symmetric. So if the real potential is asymmetric, you'll be too stiff on one side and too weak on the other.
But with a third order polynomial, which is the green line here, you're already doing very well. So you often don't need, per se, really complicated functions of these bond angles. Because usually the more complicated your function, the more can go wrong when you fit as well.
OK. One other contribution that you often use in long molecules is torsion potentials. And this is actually not a three-body effect. It's actually a four-body effect. If you think of ethane, if you have carbons here and hydrogens here-- it's a little hard to see.
But if you look at it from the side, so there'd be three hydrogens on this end. The hydrogens can line up. So if you look at it from this direction, the carbons are on top of each other. So the hydrogen on one side are 120 degrees apart.
Now, the hydrogens from the higher carbon can either be in the same configuration-- that's called the eclipsed configuration-- or they could be here. And that's called a staggered configuration. And the staggered one is somewhat lower in energy.
So to get that energy difference right, you need, essentially, a potential that describes the torsion around the carbon-carbon bond. If you think of the three hydrogens here in sort of fixed position with respect to each other, all at 120 degrees, and the ones here, then to describe the energy difference between the different rotational states you need a torsion potential. And for a torsion potential, you need at least four atoms, four sets of coordinates, to describe it.
Because the two carbons, say, define the bond along, which are doing the torsion. And then these vectors, say, one of the hydrogens on each side describe the state of torsion you're in. OK. So a torsion potential is, by definition, a four-body effect.
These potentials are often written as cosines of some angle. And the reason is that a torsion potential needs rotational symmetry. If you think of the ethane molecule, for example, every 120 degrees you're in the same configuration because the face you're rotating has 120-degree symmetry.
Does everybody see that? So you need periodicity in the angle. So that's why a cosine is very convenient. And here's the ethane rotation energy plot. So what you see is that it's minimal at 60 degrees. This is the staggered configuration.
And this is 120 and 0 and 240. These are the eclipsed configurations. And so these are easy potentials to make because all you have to look at is what the symmetry is of your torsional states. So if, for some reason, your symmetry were 90 degrees, which is unusual, you'd have a cosine of 4 omega. If it's 180 degrees, you'd have a cosine of 2 omega.
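That periodic form can be sketched directly; v_n is a placeholder barrier height, and n encodes the rotational symmetry just discussed:

```python
import math

def torsion_energy(omega, v_n=1.0, n=3):
    """Periodic torsion term (v_n/2) * (1 + cos(n*omega)).
    With n=3, ethane's threefold symmetry, the maxima sit at the
    eclipsed angles 0, 120, 240 degrees and the minima at the
    staggered ones 60, 180, 300 degrees."""
    return 0.5 * v_n * (1.0 + math.cos(n * omega))
```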
The last one is maybe slightly less important, but there's something called out-of-plane or improper torsion, which is essentially also a torsion angle. But it's not as obvious. In the previous one, in ethane, the four atoms along which I described the torsion were sort of in sequence along a line, along the molecular axis. In improper torsion, they're not.
Improper torsion describes, essentially-- let's say you have a plane defined by three atoms. How much a fourth atom comes out of that plane, that's essentially what improper torsion describes. For example, that may be important for something like ammonia.
Let's say you have N, H, H, H. So the ammonia molecule can flop. Sort of you can have the nitrogen and three hydrogens sort of sticking out in a tetrahedral configuration.
And the nitrogen can sort of go through and come out on the other end. And so it's in the opposite state. And so it's going through an improper torsion when it does that.
So improper torsion can be important in things that are, say, sp2 hybridized because then they want to be flat. And if one atom comes out of the plane, you have an improper torsion. Or an sp3 hybridization-- where it wants to be tetrahedral from a central atom. You have three bonds going towards the tetrahedron.
And as you push that in and you squeeze that flat, you do an improper torsion on it. And there's many ways you can measure improper torsion. But it's essentially defined by the angle between the plane defined by those three atoms and the bond to the fourth atom; that out-of-plane distance is what describes the improper torsion.
So I thought I'd show you a real example. I stole this from somebody's PhD thesis at MIT, and I forgot who. So I can't give you the reference, unfortunately, but it was somebody in our department.
OK. Here's a potential model that the person put together for poly-hydroxybenzoic acid, which I think is a component of a liquid crystal, but I don't really remember. And so essentially, there are bonded terms, which are really these here. What they call valence is the bond stretch-- sorry, the bond bending term.
So it has a form, angle minus reference angle squared. So it's harmonic in the bond bending. There's a torsion potential along the chain. And there's an improper torsion potential. And then the rest is there are non-bonded terms.
These are the ones between, say, you know, this piece of the chain kind of interacting with that piece as they approach each other. And that's a van der Waals term, which is simply modeled as a Lennard-Jones here, and a Coulombic term because there are charged species on this chain. So everything that goes into the model is the sort of stuff you've seen. And it's all fairly straightforward.
Potentials work remarkably well in organic systems. And that seems to be at odds with what I said in the beginning, that pair potentials in particular are bad for covalent materials. But you have to remember why they are bad because it's the same reason why they're so good in organics.
The reason pair potentials are bad in metals is because you cannot capture the energy dependence on coordination. OK. Remember, if your coordination stays the same, if essentially your average density around you stays the same, you're actually doing very well with pair potentials in metals.
If you remember when I showed you that in the embedded atom method you can transfer pieces of the energy between the potential and the embedding function, if you go back to that, you'll see that, if the embedding density is constant, then all you have in the energy is a pair energy. So when you're at constant density, pair potentials work fine. So why do potentials work so well in organics?
Well, the reason is that they simply take different potentials for different coordinations. Different coordinations in sort of organic chemistry are different hybridizations. And so rather than, say, have a generic carbon-carbon potential, they have a different one for sp carbon, sp2 carbon, sp3 carbon.
So essentially, they're saying, you know, we're going to take every coordination as different. And since there's only a finite set, there's essentially only a very finite set of chemical environments that your atoms see in organics.
You can simply just parameterize them all, and you have a different potential for all of them. So if your carbon is, say, in polyethylene, so it's sp3 hybridized, you'll take a very different potential than if you had an sp2 hybridized carbon. Because in one case, you want to impose 120 degree angles, and in the other case you want to impose 109 degree angles.
So you never, ever deal with the change of coordination of the bonded system. In the non-bonded system, you have to deal with it, but there you don't have covalent bonds anyway. So that's why it works so well.
So you can describe very well things like bending of chains, polymer chains, with potentials. And the reason is it's essentially a local effect. You're always bending the same bond. And whether that's in polyethylene or whether that's an sp3 carbon in something much more complicated, it's essentially the same bond you're bending. So you can parameterize it very well, and it's pretty much always the same, independent of what's far away from you.
But this also means that there's stuff that it will not work for. You cannot do chemistry, reaction chemistry, with potentials. The reason is that, in reaction chemistry, you will change your coordination.
So you could not study the polymerization reaction of ethene to polyethylene. Because there you are actually changing. Your carbon is changing from sp2 to sp3. And you would literally have to change potentials midstream in your simulation.
Because your potential is tied to a particular hybridization, you can't really do bond breaking reactions. And the other thing, which is not fundamental, is that a lot of these potentials lack polarization.
In the end, why don't one sp3 carbon and another always behave exactly the same? It's because there can be charges around that polarize their electron density. But these are fairly minor effects. Still, that's something you will not get in these kinds of schemes.
You know, because there's only a sort of finite set of chemistries, of bonding environments, in organic chemistry, making potentials for organic systems is a cottage industry. They tend to call them force fields in organics, but they really are potentials. And so there are whole sets of parameterizations that are available.
And a lot of sort of good codes will have many of these built in already. For example, there's somewhere about [INAUDIBLE], then there's the AMBER force field, which will-- you know, what they mean with a force field is they will have potentials for the most common bonds that you find, in this case, in biology, but in general in organic chemistry. The CHARMM [INAUDIBLE] very well, very often used one.
CHARMM has essentially potentials for almost all elements, almost all light elements. So all these have been parameterized for you. There's an interesting discussion about all these force fields on this website if you care to look at it at some point.
OK. So another field where potentials have worked reasonably well is in oxides. Actually, oxides are probably one of the earliest material classes in which there was very extensive atomistic modeling going on. This goes way back to the '60s, when there were barely any usable computers.
People were modeling oxides. And of course, people weren't even thinking of doing quantum mechanics computationally at the time. So they were using empirical potential models.
And the drive to work on oxides was actually from the nuclear industry. There was one oxide in particular they cared about. And that was uranium oxide because it's an essential component of your nuclear fuel.
And this was one of these early realizations that there's some stuff that was much easier to do with modeling than to do experimentally. Doing experiments on nuclear fuel rods is really kind of painful and expensive. And so that's actually how the modeling industry on oxides was born in the '60s: modeling point defects in uranium oxide.
So it's a well-established field. It's probably getting superseded largely by doing quantum mechanics now, but it's still sort of interesting to see how reasonably well it works and what it doesn't work for. The standard, by and large, is that you use a Buckingham term.
So you have a potential that-- so you have the repulsive exponential. You have the van der Waals term. That's your short-range potential.
And then you do electrostatics because, of course, you have charged ions when doing that. And so the electrostatics give you your cohesion. That's the one that wants to bring the ions together. And the Buckingham term gives you your repulsion.
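As a sketch, the energy of one ion pair under this scheme; the parameter values used in checking it are made up, not from any published fit:

```python
import math

def buckingham(r, a, rho, c):
    """Short-range Buckingham term: A*exp(-r/rho) - C/r**6,
    an exponential repulsion plus a van der Waals attraction."""
    return a * math.exp(-r / rho) - c / r ** 6

def ion_pair_energy(r, qi, qj, a, rho, c, ke=14.3996):
    """Buckingham plus point-charge Coulomb for one ion pair.
    ke = e**2/(4*pi*eps0) in eV*angstrom, so with charges in units
    of e and r in angstrom the result is in eV.  (In a crystal the
    Coulomb part must be Ewald-summed, not just taken pairwise.)"""
    return buckingham(r, a, rho, c) + ke * qi * qj / r
```

For a cation-anion pair the Coulomb part is the cohesive piece and the Buckingham exponential supplies the repulsion, exactly the division of labor described above.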
To sum the long-range electrostatic part, you can't just sum in real space. If you sum this in real space, it's only conditionally convergent. So people use what's called the Ewald method. It's something you'll see when you use codes. There actually are faster techniques now, but this is still definitely a standard.
We're not going to go into details, but essentially the Ewald way is a way of partitioning your energy sum in real space and in reciprocal space, so that both converge. You can actually show that, in either space, it doesn't converge very well. In real space, 1 over r doesn't converge.
In reciprocal space, it really doesn't converge when you Fourier transform it. But you can partition it so that it converges in both spaces. And you basically have to sum it up.
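The key identity behind that partitioning can be sketched with the error function; alpha here is the splitting parameter you are free to choose:

```python
import math

# The Ewald trick rests on the exact identity
#   1/r = erfc(alpha*r)/r + erf(alpha*r)/r.
# The erfc piece dies off quickly and is summed in real space; the
# erf piece is smooth, so its Fourier transform dies off quickly and
# is summed in reciprocal space.  alpha just shifts work between them.

def ewald_split(r, alpha):
    short_range = math.erfc(alpha * r) / r   # real-space part
    smooth = math.erf(alpha * r) / r         # reciprocal-space part
    return short_range, smooth
```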
And an essential piece that was added in the late '60s, early '70s, was polarization. An important thing in oxides is that you have strong electrostatic fields. So when you put an electron density in a field, it can polarize. And this is particularly important for the oxygen ion.
If you put the oxygen ion in a non-symmetric position, so one where the field doesn't vanish, an oxygen ion is highly negatively charged. It means it has a lot of electrons. So the electron density cloud will polarize. And the oxygen will actually carry a dipole.
And that was actually modeled in a sort of very ingenious way, I thought. The way polarization is included is by what's called a shell and spring model. Essentially, you treat the polarizable ion by two entities.
It has a nucleus with a charge on it, and it has a shell with a charge on it. And the two charges sum to the charge of the ion. So if this were oxygen, this would be 2 minus then.
But what you do is the two can move independently, but they are connected by a spring. So they can't completely wander off, so they are connected.
But what happens now if you put a field on this? The positive and the negative charge will move in opposite directions. So you may go to a state where the core, the center of the positive charge, is here, and the shell has shifted over there.
And so, now, the center of the positive charge and the center of the negative charge would be separated. And you would have a dipole.
So does everybody see how this works? So the polarizability of the ion is essentially determined by the charge difference between the shell and the core and the spring between them. If the spring is infinitely stiff, then you cannot polarize the ion. If the spring is soft, then you can polarize the ion very well.
And that's actually how the spring constant is fitted. It's fitted to the polarizability, often of the free ion. Typically, what interacts with the other ions through the short-range term is the shell.
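The arithmetic of the shell-and-spring model is simple enough to sketch: in a uniform field the shell displaces until the spring balances the electrostatic force, which is why the polarizability comes out as the shell charge squared over the spring constant. The numbers in the check are arbitrary:

```python
def shell_displacement(q_shell, k_spring, field):
    """Force balance on the shell in a uniform field E:
    q_shell * E = k * x, so x = q_shell * E / k."""
    return q_shell * field / k_spring

def induced_dipole(q_shell, k_spring, field):
    """Induced dipole p = q_shell * x = (q_shell**2 / k_spring) * E."""
    return q_shell * shell_displacement(q_shell, k_spring, field)

def polarizability(q_shell, k_spring):
    """Shell-model polarizability, alpha = q_shell**2 / k_spring.
    This is the quantity the spring constant is fitted to."""
    return q_shell ** 2 / k_spring
```

An infinitely stiff spring gives zero polarizability and a soft spring a large one, as stated above.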
Usually, when you model oxides with potentials, you tend to put a shell and spring only on the anions because it's obviously the anion that's most polarizable. It has the biggest electron density cloud. It has the electrons. And cations are much less polarizable because they're cations. They have lost most of their valence electrons.
Especially something like-- let's say you do MgO, magnesium oxide. Magnesium is 2+, so it's essentially a bare core. If you strip off the two outer electrons, there's nothing left to polarize, unless you think you're going to polarize the core.
So you would there just put a shell on a spring on the oxygen. This becomes very important when you work in low symmetry environments. Because whether your ion polarizes or not depends on whether the symmetry allows it to polarize.
If you are sitting in a center of point symmetry, let's say you're sitting in an inversion center of a crystal, I don't care how polarizable you are. The symmetry doesn't allow you to polarize. But so it's when you do low symmetry calculation, defects, surfaces, that polarizability becomes quite important.
So I wanted to show you a real example. This is something we kind of quickly calculated ourselves a few years ago. That's why it looks like a crummy graph.
Because this is the phonon density of states for magnesium oxide with and without polarization. So what this is, really, is this axis is the frequency of the phonons. And this is the density of states, how many phonons there are with that frequency or energy.
And essentially, what you see is that, when you have no polarization, you have a high frequency tail of phonons. And remember, phonons-- even though MgO is a highly symmetric crystal, the energy of phonons is determined by low symmetry things. Because, you know, think of a displacement of a phonon.
You're now breaking the translational symmetry. So you're not into high symmetry 0 Kelvin crystalline environment anymore, so polarization becomes relevant. When you turn on polarization of the anions, you essentially dampen away those high frequencies. OK.
Let me actually, just before I go on, come back to the issue that polarization is important to take into account when you fit. There are people who will show you potentials fit to the equation of state. So does everybody know what I mean?
By the equation of state, I mean literally the energy of the system as a function of lattice parameter or volume. So let's say I took MgO, magnesium oxide, in rock salt. I could, let's say, have ab initio calculations at different lattice parameters. And that would give me a curve, energy versus lattice parameter.
I could fit my potential to that. If I do that, I have never included any polarization effects. Because at every lattice parameter, that is still a high symmetry state.
At none of these lattice parameters is any of the ions ever polarized, ever have a dipole moment on it. And so what you see then, when you go and do defect calculations with that potential, you will almost always overestimate their energy. And the reason is you've taken away a degree of freedom of the system. If you actually then calculate a defect, the system wants to polarize, but you haven't given it the option because you haven't included it in your model.
So I would say beware of people who show you equations of state. You know, everybody can fit an equation of state, everybody. You know, any potential that has more than one fitting parameter can fit an equation of state. It's the low symmetry details, surface energies, defects, that are hard to reproduce.
These days, just like in organics, there are very good sources of potentials for oxides. You don't really have to go and fit your own. This is one paper that sort of collected a bunch of them, which is a paper by Bush.
They're actually on the web now. You can literally download them right away in the format that's relevant for a particular code, for a set of standardized codes. The one we'll use in the lab is called GULP, which is free for academics. But even for other codes you can pretty much get the right form off the web.
OK. Oh, this is actually one of the tables in that paper. All I'm showing you here is that you can see they actually have the parameters. So this is the parameter in front of the exponential between the cation and the anion.
This is the one in the exponential. Remember, it's A times exponential of minus r over rho. And these would be the polarization terms, the core and the shell charges.
In this case, on some of the cations, they put it on as well. On the alkalis, they didn't fit a polarization, since they're really not very polarizable if you see lithium, sodium, potassium. But on some of the heavier cations, they did fit polarization terms.
But the main important polarization is the one on the oxygen. So you have a core charge and a shell charge and a spring constant. This is actually the anion spring constant.
The nice thing about these potentials is that they were fit all together. They actually fitted all these potentials, the cation-anion and the anion-anion terms, all in one massive fitting. And what that means is that they have a consistent anion-anion potential across multiple oxides.
The problem with fitting potentials in oxide has always been the oxygen-oxygen one. Because in an oxide, the oxygens are big. They're about 1.4 angstrom. A cation is small. Like, magnesium is, say, 0.7 angstrom.
So really an oxide is big oxygens touching each other and the cations sitting in the spaces between them. So the potential between the oxygens is always the one that kind of sets a lot of your dynamical effects, a lot of your lattice parameter effects. So it's the one that's hardest to deal with.
It's also the problem spot. If there's a reason that this sometimes fails, it's often because of the anion-anion potential. Even though we think of oxygen as a 2- ion, it turns out it's actually a very fluffy ion.
That last p electron you put on it to make it 2- is actually not stably bound in a vacuum. It's bound largely by the electrostatic field in the crystal. Remember, from the cations comes a positive field. So the electron really wants to be there on the oxygen. And that's what binds the last p electron.
So the problem with the oxygen is that it relaxes its wave functions depending on environment. So it's not a hard ion at all. And that's often referred to as breathing. The oxygen breathes.
So what that really means is that the oxygen doesn't have the same size in different environments. And this is where it gets problematic for potentials. Because you model your potential with, remember, this Buckingham potential, which has a well-defined minimum. It sort of treats the oxygen as the same every time, but this breathing basically messes that up somewhat, in that your oxygen has a somewhat different size in different environments.
If you're doing monoxides, you can pick that up in the cation-anion potential. If you're just comparing, say, MgO and CaO, magnesium oxide and calcium oxide, you can pick up the inconsistency between the oxygen-oxygen potentials in the two by modifying the metal-oxygen potential somewhat.
But if you're going to do complicated oxides with a lot of metals at the same time, you can't quite play that game anymore. This is the charge density of the oxygen in calcium oxide. And on the same scale, this is the charge density in magnesium oxide.
If you actually see, it's not quite the same. It's actually not nearly as spherical in calcium oxide as in magnesium oxide. And calcium is bigger. And these are in exactly the same structure.
The last one is really the more problematic one: more and more, we find out that most oxides are less hard in valence than we thought they were. We always think of oxides as, there's 2- on the oxygens, and then I figure out what my cation charge is.
And for lithium, it's 1+. For magnesium, it's 2+. What we find more and more is that the ions really don't have full charge in oxides. So what I showed you here, this is mixtures of zirconium oxide and calcium oxide.
And what is shown, the negative numbers that are shown, are the charges on the oxygen. And what you see is that they're kind of all over the map. They depend, essentially, on what cations are around them. Zirconium is 4+. Calcium is 2+.
So if you actually have a lot of zirconium around, you tend to make the oxygen more negative than if you have calcium around you. And so what this essentially means is that, when you do empirical potential modeling and you use full charges, in some sense, your energetics is too harsh. You tend to overestimate energies because you have too large an electrostatic contribution.
In reality, when you treat the full electron density in quantum mechanics, the electron density will relax away a little from these full charges and, thereby, sort of moderate the electrostatic effect. So this is probably the hardest problem in modeling oxides with empirical potentials. This becomes extremely problematic when you do transition metal oxides because transition metals have variable valence.
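To put a rough number on how much formal charges inflate the electrostatics, here is a back-of-the-envelope sketch for rocksalt MgO. The Madelung constant is the standard rocksalt value; the reduced charge of 1.7 is purely illustrative, not a fitted number.

```python
# Electrostatic (Madelung) energy per formula unit of rocksalt MgO,
# comparing full formal charges (+/-2) with illustrative reduced charges.
ke = 14.3996   # Coulomb constant e^2/(4 pi eps0), in eV * angstrom
M  = 1.7476    # Madelung constant for the rocksalt structure
r0 = 2.1       # approximate Mg-O nearest-neighbor distance, angstrom

def madelung_energy(q):
    """Electrostatic lattice energy per formula unit, in eV."""
    return -M * ke * q**2 / r0

E_formal  = madelung_energy(2.0)   # roughly -48 eV
E_reduced = madelung_energy(1.7)   # roughly -35 eV: less harsh energetics
```

Since the energy scales as q squared, even a modest charge reduction moderates the electrostatic contribution by a quarter or more.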
Magnesium, at least, we always think of as 2+. But what if you're doing iron oxides? Iron can be 2+, 3+, or 4+. And at some point, you will do conformation changes in your material, and your valence will change.
And you can't deal with that in a pair potential because it's the same as in organics. A 2+ and a 3+ iron have a different potential with oxygen because they have a different repulsion. And so there are people who have tried to make models with variable charge where you have literally the short range repulsion, the electrostatic term, and then a kind of ionization term.
So you would actually minimize not just over atomic positions, but also over charges on the ions. So there would be some function that tells you how hard it is to pull an electron off and put it on another. It's just that after a while you run into so many fitting problems.
The more parameters you have in your model, the more things you have to fit that you have to wonder whether it's worth doing that. And these models have not sort of caught on in general. If you're interested, I can give you some references, but they're not quite in widespread use.
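For what it's worth, the "minimize over charges" idea can be sketched in a few lines. This is a generic charge-equilibration scheme, not any particular published potential; the electronegativity and hardness numbers below are invented for illustration.

```python
import numpy as np

def equilibrate_charges(chi, J, pos, q_total=0.0):
    """Minimize E(q) = sum_i chi_i q_i + 0.5 sum_i J_i q_i^2
                      + sum_{i<j} q_i q_j / r_ij
    subject to sum_i q_i = q_total, via a Lagrange multiplier.
    chi: electronegativities, J: hardnesses (cost of ionization)."""
    n = len(chi)
    A = np.zeros((n + 1, n + 1))
    b = np.zeros(n + 1)
    for i in range(n):
        A[i, i] = J[i]
        for j in range(n):
            if i != j:
                A[i, j] = 1.0 / np.linalg.norm(pos[i] - pos[j])
        A[i, n] = 1.0   # Lagrange multiplier column
        A[n, i] = 1.0   # total-charge constraint row
        b[i] = -chi[i]
    b[n] = q_total
    return np.linalg.solve(A, b)[:n]

# Two cations with different electronegativities sharing one anion.
# All parameters and positions here are hypothetical.
chi = np.array([4.0, 2.0, 7.0])    # eV per electron
J   = np.array([8.0, 8.0, 12.0])   # eV per electron squared
pos = np.array([[0.0, 0, 0], [4.0, 0, 0], [2.0, 0, 0]])
q = equilibrate_charges(chi, J, pos)
```

At the minimum, every atom ends up with the same effective electronegativity, which is exactly the physical idea behind these variable-charge models.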
So I kind of talked about this here, the limitations of pair potentials in oxides: the oxygen breathing, the charge effect, and some many-body effects. So let me sort of summarize what I think we've seen, and then I'll hand over to Professor Marzari.
So in metals, you have the issue of coordination. If you should remember one thing, it's that bond strength depends on coordination. And you should remember how that showed up in things like vacancy formation energies and things like surface contraction. When you make a surface, the layers tend to contract in. So if you have fewer bonds, they're stronger.
And that's the problem for pair potentials. And methods like embedded atom method do a great job of fixing that problem. They're still missing a lot of things. They don't have, in the end, subtle electronic hybridization effects. So you still won't get everything out of them, but they fixed that main problem.
In organic molecules, the reason you're saved is because essentially you have a limited set of environments. And you really parameterize for each of them. And people have essentially done that for you. The work is done.
In oxides, I would say they're sort of in between. You do reasonably well for things that are sort of gross topological changes. But because of the variable charge effect, the oxygen breathing, and sometimes charge transfer, you will miss the subtle energetic effects.
The other thing is that, in transition metals, you often have magnetic moments. In transition metals, you have a lot of unpaired electrons because the orbitals are very localized. So you will tend to populate them following Hund's rules.
So you will tend to build up magnetic moments on the ions. And so you will have magnetic interactions. And these are almost never dealt with in pair potentials. So I'm going to hand off here to Professor Marzari, who's going to teach you quantum mechanics.
NICOLA MARZARI: Excellent. So welcome to the second part of the lecture. What we are going to see in the next few classes are actually electronic structure methods for modeling materials. And what I've decided to do today is really just a brief introduction and a refresher of quantum mechanics, or just an introduction if you haven't seen it before.
And this is mostly because, at the end, electronic structure methods involve the solution of a Schrodinger equation or a relative of the Schrodinger equation. And so I wanted to remind you about that. This is sort of what we can do more or less easily nowadays.
This is a system with a few hundred atoms. It's actually myoglobin with a carbon monoxide molecules attached to it. And it's the kind of calculation that would take a day or so on a smaller research cluster.
It's also a calculation that basically predicts that we all would be dead in a nanosecond or so, just because the error in the binding energy of that molecule that you get with standard quantum mechanical methods is actually not small enough to sort of compare with experiments. So let's see a few cases in which we think quantum mechanical simulations are very relevant. The first case is one of structure and bonding.
Basically, a solid or a molecule stays together because there is an electronic glue that compensates the repulsion between the nuclei. And the structure and stability of different structures is determined by the balance of these effects. So all nuclei repel each other, all electrons repel each other, but nuclei and electrons attract each other.
And say, here we are seeing two different structures. On the left is the high temperature cubic phase of lead titanate. And on the right, we have the low temperature ferroelectric phase of lead titanate.
And you see these have very subtle differences in structure and very subtle differences in the charge density. At the end, a solid is really made of spherical atoms that sort of interfere very lightly, destructively or constructively. And here you can see sort of the formation of some kind of stronger bond between the white oxygen, in this case, and the green lead atoms.
And we are at the stage in which sort of state of the art electronic structure approaches are actually able to calculate accurately these energy differences. And so they can give you the zero temperature phase stability. And as you learn in the rest of the class, once you have a good energetic model, you can also calculate the thermodynamics. Basically, thermodynamics is the statistical mechanics of the thermal excitation for such a system.
Another case in which electronic structure methods are obviously important is when you are actually interested in electronic structure properties. So if you want to calculate or predict electronic properties, optical properties, and, to a large extent, magnetic properties, you really can't use a potential. A potential, if anything, gives you a structure, but will never give you the electronic excitation.
And this is actually a beautiful case from recent research. This is from Professor Bawendi in the chemistry department. And what you are seeing here is actually a suspension of cadmium selenide nanoparticles.
And to a large extent, the optical excitation can actually be modeled just by the excitation of an electron confined in a spherical potential. Also, one can do sort of better than that. But what's happening here is that, depending on the size, the spacing between the excited levels changes. And basically, these systems absorb light in different ways. And so they can actually make for great chemical sensors.
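That size dependence can be estimated with the simplest confinement model, a particle in an infinite spherical well; the radii below are just representative nanocrystal sizes, not data from this work.

```python
import math

# Ground-state confinement energy of an electron in an infinite
# spherical well of radius R: E = hbar^2 * pi^2 / (2 * m_e * R^2).
hbar2_over_2m = 3.81   # hbar^2 / (2 m_e), in eV * angstrom^2

def confinement_energy(R):
    """Confinement energy in eV for a radius R given in angstroms."""
    return hbar2_over_2m * math.pi ** 2 / R ** 2

E_small = confinement_energy(10.0)  # 1 nm radius dot
E_large = confinement_energy(30.0)  # 3 nm radius dot
# Smaller dots have larger level spacings, so they absorb bluer light.
```

The 1/R squared scaling is why a modest change in particle size shifts the absorption across the visible spectrum.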
And this is, again, electronic properties is what we predict with quantum mechanics. And finally, we can follow chemical bonds breaking and forming in chemical reactions. So say you want to study a chemical reaction. And this is a sort of classic case from organic chemistry. This is a cycloaddition reaction, or what is called a Diels-Alder reaction, in which, basically, pi bonds in an ethylene molecule and a diene break apart, and new sigma bonds form a ring.
Well, this is something that you could do with quantum mechanics. This is a picture that comes from Bill Goddard's site at Caltech. And you see we are actually following the highest occupied molecular orbital as the reaction proceeds.
And this is really the reason why a potential wouldn't be able to describe this to a large extent, because what is taking place is really a reorganization of the electronic states. You see, ultimately, it's really constructive and destructive interference of electrons as standing waves in this potential that determines the stability. And again, this is sort of done with current state of the art electronic structure methods.
So this is what we are going to learn, basically. And a picture that I always want you to have in mind is what I call the standard model of matter. That is, everything you look at, be it a solid, a gas, or a molecule, ultimately can be described, at ambient conditions, so at the sort of average energy that is typical of room temperature, or roughly orders of magnitude above and below it.
First of all, by the ionic nuclei: that is really where all the mass of your system is concentrated. Remember, a nucleus has actually a typical size of 10 to the minus 14, 10 to the minus 15 meters. But it's really where all the mass lies.
And the amazing thing-- I always find this amazing-- is that to a large extent, even for a nucleus, Newton's equation of motion still apply. I mean, in reality, there are sort of a number of subtle quantum effects. We'll mention this sort of in the course of this class.
But for all practical purposes, you should start thinking of these nuclei as behaving as classical particles and moving around following Newton's equations of motion. So they have a mass, and they feel a force. And the force that they feel is basically a repulsive force from all the other nuclei and the attractive force from this electronic glue.
These nuclei are actually surrounded by electrons. When you study the periodic table, you have studied all the orbital shells. And really what happens is that all the lowest core electrons are so tightly bound that they really do not sort of change in the process of the forming of a chemical bond or the breaking of a chemical bond.
So say, if you are thinking of any sort of second row element, carbon, nitrogen, oxygen, well, we know that there are 1s electrons in there, sort of the ones that we have filled first. Those 1s electrons are core electrons. And they really do not change in the evolution of this system, unless we are really at energies so high that we can start sort of kicking those around, but that doesn't happen for practical purposes.
So again, in your standard model of matter, you need to think of the nucleus as surrounded by a very tightly bound shell of electrons. And it's really only the outer electrons, what we call the valence electrons, that are so lightly bound to the nucleus that they can start creating constructive and destructive interference when they feel another nucleus. And so they create chemical bonds.
See, if you just look at, say, the charge density in something like platinum, this is sort of a platinum surface to which a methane molecule is attached, sort of the beginning of catalysis. You see, at the end, what we have is spherical atoms with very, very tiny bonding.
And again, it's not very different from the picture that we have seen before of the oxygen atom in calcium oxide. Obviously, all the relevant energy is in this bonding. The difference between platinum in the metal state or, say, a platinum atom solvated in water, is the difference in this chemical bonding.
So studying what happens here is very important. But to a large extent, this sort of general picture of the charge density of the atom being spherical is conserved, even in a covalently bonded system. This is a metallic bonded system, platinum. Up here, we have a covalently bonded system, that is methane.
And again, to a large extent, you see the spheres that are the atoms. But all we care about in understanding the stability of different structures is actually what those last valence electrons do. And obviously, those are the most lightly bound ones.
When you start sort of creating an atom, you have a nucleus. And you start putting electrons. You fill up sort of all the levels.
The first ones are incredibly tightly bound to the nucleus. And then the more you add, the more these electrons screen the nucleus. And so the next electrons are going to see less and less attractive charge, to the point that the last electron sees really very little attractive charge. And so it's a valence electron, ready to bind. Yes.
AUDIENCE: [INAUDIBLE]
NICOLA MARZARI: Very, very good-- this is the difference between, if you want, the classical behavior of a particle and the quantum mechanical behavior of a particle. Ultimately, I like to think of it in terms of the indetermination principle, that is, the uncertainty principle. If the electron were to fall on the nucleus, we would be in a situation in which both its position and its momentum are known with infinite precision.
And that is something that doesn't happen in quantum mechanics. And we'll see it in a different way. Ultimately, electrons behave as waves. And so the ground state of that wave is something in which the electron can't live on the nucleus.
If you want another sort of analogy, it could be one of an organ pipe. If you have an organ pipe, there is a fundamental harmonic that you can play and higher harmonics. But there is nothing lower in energy, or sort of deeper in bass, than the fundamental harmonic.
So because, basically, electrons behave as quantum mechanical particles, that is, behave as waves, their lowest energy meaningful state is one in which they do not collapse to the nucleus, but they sort of create-- and we'll see this in a moment-- an appropriate interference, a stable stationary state, around the nucleus. And sort of, if you want, this is the difference between quantum mechanics and classical mechanics.
If an electron were a classical particle, it would collapse on the nucleus. Because it's not, it doesn't. And you know, quantum mechanics has been largely about understanding what are these rules that quantum objects follow that are just different. And you know, sort of you'll see some of these rules.
And there are sort of different ways of explaining this. I like to think of it basically in terms of the indetermination principle. At the end, a quantum particle has a certain amount of indetermination that can actually be quantified. If that quantum particle were to collapse on the nucleus, the indetermination of the two conjugate variables, the momentum and the position, would actually be 0. And that can't happen.
You know, just as you learn in classical mechanics that a particle follows Newton's equations of motion and not something different, in quantum mechanics, you learn a set of rules that these quantum particles follow and that have been verified. And one of these rules basically says that the lowest energy state for that particle is not the one in which it collapses, but the one in which it sort of forms an appropriate interference pattern around the nucleus, but does not collapse onto it. So if you want, it's truly quantum mechanics at its heart.
And now, you see, if you really want to sort of recover some of the subtleties that are somehow invisible in this charge density picture, if you want sort of to recover what are the tiny differences between these electronic levels, well, what you could do-- and this is sort of typical in an actual simulation-- is take the charge difference between a system in which you have a methane molecule attached to the platinum and one in which you don't have a methane molecule.
And there, in this difference, you start sort of seeing patterns of chemical bonding emerging. And so you see sort of places where the charge density of your system increases or decreases. And you can start seeing here actually the d orbitals of the platinum and the formation and breaking of the chemical bonds.
So there is this general picture of atoms as spheres, but then all we care about are those subtle effects that determine bonding. And now, to put some energy numbers on it, this is sort of what is relevant.
We live at around 300 Kelvin, sort of give or take 1 order of magnitude. And the kinetic energy of an atom in a sort of ideal perfect gas is basically 0.04 electron volts. So this is sort of the average energy that is available because of temperature.
And so you can understand that that is somehow a bottom limit for chemical bonds. If the chemical bonds in our system were comparable to or smaller than 0.04 electron volts, we would actually evaporate right away and disappear.
So if we actually look at the binding energy of the hydrogen bond in a dimer between two water molecules, it's sort of roughly one order of magnitude larger. So really water at room temperature is sort of bound together. It's a liquid, at 0.29 electron volts.
And now, you start understanding the standard model of matter. If you think of what is the binding energy of an electron just to a proton-- this is really the hydrogen atom-- that binding energy is almost 2 orders of magnitude larger, 13.6 electron volts. And so you can think that, if we start going from hydrogen to the other atoms in the periodic table, the 1s electrons in the other atoms are going to be even more tightly bound. And so they will never be affected by sort of the average energy present at ambient temperature.
And actually, this number scales as the square of the atomic number. So those core electrons, in particular those 1s core electrons, are going to be terribly bound. The only chemistry that we do is with the highest electrons, whose binding energy is sort of comparable, basically, to this number.
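The energy scales quoted in the last few paragraphs are easy to check; this is just the arithmetic, using the lecture's own numbers.

```python
# Energy scales of the "standard model of matter", in electron volts.
kB = 8.617e-5                 # Boltzmann constant, eV/K
kT_kinetic = 1.5 * kB * 300   # mean kinetic energy of an ideal-gas atom at 300 K
E_hydrogen_bond = 0.29        # water-dimer hydrogen bond (value from the lecture)
E_H_1s = 13.6                 # binding energy of the electron in hydrogen

# 1s core levels deepen roughly as Z^2, so they are chemically inert:
E_O_1s_estimate = E_H_1s * 8 ** 2   # oxygen (Z = 8): hundreds of eV
```

So room temperature sits an order of magnitude below hydrogen bonds, which sit roughly two orders below the hydrogen 1s level, and core levels in heavier atoms are deeper still.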
OK, I think we are particularly lucky in having sort of a number of books that are actually excellent references for electronic structure research nowadays. And I sort of mentioned here three of the ones I like more. They are all very recent.
There is the book from Richard Martin at Urbana-Champaign, which I would really say is, by now, the Bible of electronic structure methods. It's a very complete and very detailed book. So this is what you would probably want to have if you were actually to do research in electronic structure, especially seen from the point of view of solid-state or condensed matter and solids.
Another very beautiful book that is, I think, the closest in spirit to this class is the book by Mike Finnis at the University of Belfast, Interatomic Forces in Condensed Matter. It covers both quantum mechanical approaches, and it also shows how potential approaches, as those described by Professor Ceder in the previous lectures, actually emerge very naturally from quantum mechanical methods. And we'll see a little bit of this.
And the third book is by Tim Kaxiras at Harvard University. And again, it's an electronic structure book with a sort of very broad perspective on solids and condensed matter systems. So any of this is actually a very good reference.
And so what I want to do after this introduction in the next 15 minutes is give you a refresher of quantum mechanics in case you have never seen it. And we can also discuss actually some reading in case you're interested. And so this is how I do it.
If you go to London, Piccadilly Circus, well, there is the Reduced Shakespeare Company. They do in 90 minutes all Shakespeare tragedies and all Shakespeare comedies. And so here the plan is to do in 15 minutes all of quantum mechanics, sort of to give you the flavor of what goes on. And you see, "reducing expectations for over 20 years." I actually really like them.
OK. So this really goes to answer the question that we had before. And, sort of historically, between the 1900s and the mid-'20s, there was a sort of emerging experimental consensus on a number of phenomena. The first one came from studying the properties of light in particular.
Light, ultimately, is an electromagnetic wave. And people discovered the photoelectric effect. That's actually Albert Einstein's paper from 100 years ago. This year is the 100th anniversary of the annus mirabilis of Albert Einstein, in which he was basically giving an explanation of the photoelectric effect.
They were illuminating a material, something like selenium. And basically, the electromagnetic radiation excites electrons out of the material discretely. And that can sort of be rationalized in terms of quanta of energy, discrete amounts of energy, what we call photons, being exchanged.
So something that people used to think of as a continuum, as an electromagnetic wave that can have, in principle, any wavelength, is actually sort of made up of massless particles that have a discrete amount of energy. So that was one thing. The other thing that was fairly exotic, that Planck explained and that is, you know, the birth of quantum theory, was actually looking at the emission of an incandescent body. That's what is called black body radiation.
And the spectrum could be explained in terms of excitations of a gas of hot particle-like quanta. But again, the spectrum is electromagnetic radiation. And so there was this sort of general idea that electromagnetic waves have particle-like properties.
What was really sort of interesting was the discovery that particles, like electrons, can have wave-like properties. And that actually determined, ultimately, the development of wave mechanics or quantum mechanics, and, in particular, the Davisson-Germer discovery that electron beams can be diffracted. So electrons, when you shoot them, are not behaving as classical particles.
Classical particles would either sort of hit something and bounce back, or go through. But electrons actually diffract and so behave as waves. And these were sort of, if you want, the key observations.
And how do we summarize this? Well, this is actually one of the fundamental ideas, one of the ideas that I want you to remember from quantum mechanics. This is what is called the de Broglie relation. It's from 1923.
It's what I call actually the shortest PhD thesis in the world. There are only three letters in it. But sort of what it's telling us is this.
There is a fundamental constant. It's like gravitation. In gravitation, you have a fundamental constant that governs the attraction between masses.
Well, there is another fundamental constant that we call the Planck constant. And here is sort of what its value is in the international system. And this is a relation that all particles, all objects, satisfy.
And it says that, if a particle has a certain momentum, that is, a certain mass times velocity-- OK, so if a particle has a certain momentum, it will also display wave-like properties. And the wavelength typical of those wave-like properties is given by lambda. So basically, you have an electron. You can sort of plug in what would be an average momentum for that electron, and you find out what is the wavelength.
What it turns out is that, if you plug in here the average momentum of a valence electron, what you find out is that the wavelength that it has is sort of comparable with interatomic distances. So electrons behave significantly as waves at the length scales that are relevant for you. Because if you have two atoms that are two angstroms apart and the electron has a wavelength of 2 angstroms, well, then we really need to take into account the wave-like properties of electrons.
And you see, this is why actually with nuclei we can get by just with classical mechanics, because the mass of a proton is 2,000 times the mass of an electron. So the mass of a nucleus is at least 2,000 times larger than the mass of an electron. And you actually have sort of tens of protons and neutrons. So the typical momentum is going to be thousands, tens of thousands, of times larger than the momentum of the electron.
And so the typical wavelength is going to be at least 3 or 4 orders of magnitude smaller. And so, now, the wavelength of a nucleus starts to be 10 to the minus 14, 10 to the minus 15 meters, and is not relevant anymore for the distance between the nuclei. So for all practical purposes, the wavelength of nuclei is so small that we can treat nuclei as point-like particles. But the wavelength of electrons is comparable to the distance, and so we need to describe electrons as waves.
And this relation is actually very simple to sort of look at in practical terms. Instead of using the international system of meters and seconds and so on, we actually use a different system of unit of measure. That is what we call atomic units.
Of course, no one forces us to use the meter as a unit of measure. That was actually a byproduct of the French Revolution. So sort of a lot of people in the general arena of chemistry and solid state use different sets of units of measure. One of the most common is atomic units, where the unit of length is actually the typical radius of a hydrogen atom.
So the sort of standard measure gives us 0.53 angstrom, or what is called 1 Bohr, for the radius of a hydrogen atom. And so this will be an atomic unit. So 1 angstrom is roughly 2 atomic units.
In this atomic unit system, the Planck constant is actually going to be 2 pi, 6.28; that's what it turns out to be when we sort of choose appropriately all the atomic units of measure. And again, the atomic unit of momentum, one, is going to be the mass of an electron times the velocity of a 1s electron, again, in a hydrogen atom.
So you see the electron in the hydrogen atom basically satisfies perfectly the de Broglie relation. Because in atomic units, its momentum p is equal to 1. And h, the Planck constant, is equal to 2 pi. So its wavelength is going to be equal to 2 pi.
And if you think of the atom as having a radius of 1, you understand that a wavelength of 2 pi is the right wavelength to basically have a single oscillation along the circumference. And so we sort of start thinking of this ground state of the electron that doesn't fall on the nucleus as the fundamental harmonic for this wave in the presence of a potential.
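In atomic units this closes neatly: with p = 1 and h = 2 pi, the de Broglie wavelength of the 1s electron equals the circumference of the radius-1 Bohr orbit, so exactly one oscillation fits around it.

```python
import math

# de Broglie wavelength of hydrogen's 1s electron in atomic units.
p = 1.0          # momentum of the 1s electron, atomic units
h = 2 * math.pi  # Planck constant when hbar = 1
lam = h / p      # de Broglie wavelength = 2 pi

circumference = 2 * math.pi * 1.0   # Bohr orbit of radius 1
# lam == circumference: one full oscillation fits around the orbit.
```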
OK. There are cases in which the quantum mechanical nature of the nuclei actually becomes relevant. There is much less modeling activity on that because it's just fairly computationally expensive. One of the cases in which it's important is when you have very light nuclei.
So if you have a system that has a lot of hydrogen, well, it actually might become relevant. And the fact that your proton has its own wavelength, has its own delocalization, could actually become important. And it's never going to change dramatically the physics or the chemistry of your system, but it's going to introduce relevant effects.
And you know, this is actually coming from a Nature paper from 1999, a very, very expensive simulation in which the electronic degrees of freedom were studied. And also, the quantum delocalization effects on the proton were studied, basically, for what is called a hydrated proton in water. So if you have water and you add a proton, well, the first thing that will form is a hydronium ion, H3O+.
And for 20, 30 years, people had actually discussed what was the general coordination of this proton. There was one hypothesis, the Zundel hypothesis, in which a proton is shared between two water molecules, basically creating an H5O2+ system. And then there was the Eigen hypothesis, from Eigen, the person who first suggested it, in which this hydronium ion, H3O+, is surrounded by three water molecules.
And you know-- nothing better than doing an accurate simulation to find it out. And it being finally a fully quantum mechanical simulation, the result was that the proton is actually in what is called a fluxional state. Sort of it stays half of the time in an Eigen state, that is, the state predicted by Eigen, and half of the time in a Zundel state.
And you see, this is actually interesting, sort of comparing the quantum mechanical prediction and the classical prediction, that is, for a classical proton in a quantum mechanical field of electrons. And you see the sort of average positions of the proton between two molecules are sort of more delocalized, more spread out, than in the classical case.
So the structure of water, I think, is interesting. There are sort of other cases, in which you have a very shallow energy barrier between two states, in which the quantum delocalization and the quantum tunneling could be important. And something like that happens actually in strontium titanate. That is a perovskite very similar to the lead titanate that we have seen in the first slide.
And we remember there were two phases, a cubic phase and a distorted ferroelectric phase. Well, it turns out that in the case of strontium titanate, the energy barrier between those two phases is so shallow that all the ions actually tunnel back and forth between all the possible structures. And so a system that would be ferroelectric in a classical simulation of the ions, when we actually introduce quantum mechanical effects for the nuclei as well, turns out to keep tunneling between all the possibilities.
And so it doesn't display ferroelectricity. The last case, which is probably the most relevant-- and we'll see that in more detail, so I haven't presented it here-- in which we really need to take into account the quantum mechanical nature of the ions has to do with vibrational excitations. When you put temperature, say, into something like a solid, you don't excite vibrational states in a continuous way. You actually excite discrete vibrational states of your system.
And this is sort of fundamental to understanding many thermodynamical properties of solids, of molecules, something like the specific heat of a solid. But we'll see examples of that later on. And those are cases in which we actually know very well how to deal with this specific problem of the quantization of the vibrational excitations of your system.
OK. So with all of this, the first thing to remember is that electrons actually behave as waves. So in reality, what is quantum mechanics? It's nothing else than the mechanics of electrons as waves.
So we don't consider the mechanics of classical particles, but the mechanics of wave-like objects. And there are going to be very specific laws, if you want. There's going to be the Schrodinger equation, which is the equivalent of Newton's equation of motion.
But there are also going to be very specific computational constraints on how we describe a wave. OK. What you are very familiar with is sort of the mechanics of a particle. So in the mechanics of a particle, you have Newton's equation of motion.
You have mass times acceleration is equal to the force. That's an ordinary second-order differential equation. That basically means that what you are going to obtain by integrating this equation of motion is actually the trajectory as a function of time and its first derivative, that is, the velocity.
And because it's a second-order differential equation, what you really need to define it is something like the initial conditions. You need to define where your particle is at a certain instant and what is its velocity. And once you know that, and you know, say, what is the force, something like the gravitational force in this case, well, you are able to integrate your trajectory.
OK. So this is sort of integration of the equation of motion. But what is really important is that your dynamical system, that is, your particle moving around, is perfectly described. So we know everything about that particle. At every instant in time, I give you two vectors, basically. I give the position, and I give the velocity.
And in three dimensions, that's six numbers. OK. So it's very easy to remember. And it's very easy to store in a computer.
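Integrating Newton's equation of motion really is that simple for a classical particle. Here is a minimal velocity Verlet sketch for a particle in a constant gravitational field; the initial conditions are arbitrary.

```python
# Velocity Verlet integration of m*a = F for one classical particle.
# The state is exactly six numbers: position and velocity in 3D.
def velocity_verlet(x0, v0, accel, dt, n_steps):
    x, v = list(x0), list(v0)
    for _ in range(n_steps):
        a = accel(x)
        x = [xi + vi * dt + 0.5 * ai * dt ** 2
             for xi, vi, ai in zip(x, v, a)]
        a_new = accel(x)
        v = [vi + 0.5 * (ai + ani) * dt
             for vi, ai, ani in zip(v, a, a_new)]
    return x, v

g = 9.81
x, v = velocity_verlet(x0=[0.0, 0.0, 10.0], v0=[1.0, 0.0, 0.0],
                       accel=lambda x: [0.0, 0.0, -g],
                       dt=0.01, n_steps=100)
# After t = 1 s: x is [1.0, 0.0, 10 - 0.5*9.81] and v_z is -9.81.
```

For a constant force, this scheme reproduces the analytic trajectory exactly; for real interatomic forces, it is the workhorse integrator of molecular dynamics.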
And this is my last slide for today. The amazing computational complexity that we have when we actually need to describe a wave is that, to describe a wave, which is basically a delocalized excitation, we need an amplitude everywhere in space. So even to describe a single electron, what we need to say is, at every instant, what is the amplitude of that wave everywhere in space.
And so you see we have gone from a classical particle, which was basically six numbers at every instant, to a quantum mechanical particle that, at every instant, in principle, requires a field that covers all of space. So we need to specify the amplitude everywhere in space. And this becomes worse and worse as the number of particles increases.
Suppose that we had two electrons. Well, what we need to give is now an amplitude, at a certain instant of time, that would be a function of one vector r1 and another vector r2.
So if, by any chance, you were satisfied by sort of describing this in a discretized form-- you see sort of discretizing space and giving the amplitude at every point in space. And say to describe an electron, you could be happy by just giving, say, a million points. Well, if you want to describe two electrons, you have to give 1 million times 1 million points. And three electrons-- 1 million to the cube and so on and so forth.
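The counting goes like this, assuming (purely for illustration) a grid of 100 points per dimension and one complex double per amplitude.

```python
# Memory needed to store an n-electron wavefunction on a real-space grid.
points_per_electron = 100 ** 3   # 100 grid points along each of x, y, z
bytes_per_amplitude = 16         # one complex double

def storage_bytes(n_electrons):
    """Grid points multiply: (10^6)^n amplitudes for n electrons."""
    return points_per_electron ** n_electrons * bytes_per_amplitude

one   = storage_bytes(1)   # 16 MB: trivial
two   = storage_bytes(2)   # 16 TB: already a serious problem
three = storage_bytes(3)   # 16 EB: hopeless on any computer
```

This exponential wall is exactly why practical electronic structure methods never store the full many-body wavefunction directly.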
So just the computational complexity of this object explodes. That is ultimately why it is so difficult to solve the equivalent of Newton's equation of motion for a quantum particle, that is, the Schrodinger equation. And so with this, I'll conclude that, if you need any reading on sort of fundamental quantum mechanics, I can give you a suggestion.
In general, sort of physical chemistry textbooks do a very good job. There is a quantum mechanics book, Bransden and Joachain's Quantum Mechanics, that is very good. But what you need for this class you'll sort of see in the next few slides and in the beginning of next lecture.
And then really we'll sort of start describing electronic structure methods. That is, how do we deal with this complexity? And how do we apply it to practical systems?
And so with this appointment on Thursday-- same time, but in room 1115. And then we meet again for class here next Tuesday.